US20100250646A1 - Mechanism for geo distributing application data - Google Patents

Mechanism for geo distributing application data

Info

Publication number
US20100250646A1
Authority
US
United States
Prior art keywords
cluster
component
datacenter
resource
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/410,552
Inventor
John D. Dunagan
Alastair Wolman
Atul Adya
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp
Priority to US12/410,552
Assigned to MICROSOFT CORPORATION (assignors: ADYA, ATUL; DUNAGAN, JOHN D.; WOLMAN, ALASTAIR)
Publication of US20100250646A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC (assignor: MICROSOFT CORPORATION)
Legal status: Abandoned

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L 67/00: Network arrangements or protocols for supporting network services or applications
            • H04L 67/50: Network services
              • H04L 67/56: Provisioning of proxy services
                • H04L 67/561: Adding application-functional data or data for application control, e.g. adding metadata
                • H04L 67/564: Enhancement of application control based on intercepted application data
              • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
                • H04L 67/63: Routing a service request depending on the request content or context

Definitions

  • the claimed subject matter relates to systems and methods that effectuate inter-datacenter resource interchange.
  • the systems can include devices that receive a resource request from a client component and forward the resource request to a management component that returns a cluster identity associated with a remote datacenter; the resource request and the cluster identity are combined and dispatched to the remote datacenter via an inter-cluster gateway component for subsequent fulfillment by a remote server associated with the remote datacenter.
  • FIG. 1 illustrates a machine-implemented system that effectuates and/or facilitates inter-datacenter resource interchange in accordance with an aspect of the claimed subject matter.
  • FIG. 2 depicts a further machine-implemented system that effectuates and/or facilitates inter-datacenter resource interchange in accordance with an aspect of the claimed subject matter.
  • FIG. 3 provides a more detailed depiction of a machine-implemented system that effectuates and/or facilitates inter-datacenter resource interchange in accordance with an aspect of the claimed subject matter.
  • FIG. 4 provides a depiction of a machine-implemented system that effectuates and/or facilitates inter-datacenter resource interchange in accordance with a further aspect of the claimed subject matter.
  • FIG. 5 illustrates a system implemented on a machine that effectuates and/or facilitates inter-datacenter resource interchange in accordance with a further aspect of the claimed subject matter.
  • FIG. 6 provides a further depiction of a machine implemented system that effectuates and/or facilitates inter-datacenter resource interchange in accordance with a further aspect of the claimed subject matter.
  • FIG. 7 illustrates a flow diagram of a machine implemented methodology that effectuates and/or facilitates inter-datacenter resource interchange in accordance with a further aspect of the claimed subject matter.
  • FIG. 8 illustrates a further flow diagram of a machine implemented methodology that effectuates and/or facilitates inter-datacenter resource interchange in accordance with a further aspect of the claimed subject matter.
  • FIG. 9 illustrates a block diagram of a computer operable to execute the disclosed system in accordance with an aspect of the claimed subject matter.
  • FIG. 10 illustrates a schematic block diagram of an illustrative computing environment for processing the disclosed architecture in accordance with another aspect.
  • cluster as employed herein relates to a set of machines in a datacenter that are a manageable unit of scaling out operations against resources. Typically, a cluster can contain a few hundred machines.
  • datacenter as utilized in the following discussion relates to a collection of nodes and clusters typically co-located within the same physical environment. In general, datacenters are distinct from clusters in that communication latency between datacenters can be significantly higher.
  • a Partitioning and Recovery Service provides the mechanisms to fully support placing, migrating, looking up, and recovering soft-state entities, e.g., supporting lookups and recovery notifications across clusters while providing a unified name space for soft-state services.
  • the Partitioning and Recovery Service allows hosts to look up a resource key and obtain the cluster or local server where that resource is being handled.
  • the Partitioning and Recovery Service's (PRS's) lookup algorithm is structured into two acts—first, locate the cluster and, second, locate the actual server in the cluster. These two mechanisms have been separated because they can have very different characteristics and requirements.
  • inter-cluster lookup can require traversing inter-datacenter (perhaps trans-oceanic) links, while intra-cluster lookup is generally confined within a local area network.
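  • As a minimal, non-authoritative sketch of the two-act lookup just described (all class, method, and key names below are hypothetical, not part of the disclosure), the separation of cluster-level and server-level resolution might look as follows:

        class PartitioningRecoveryService:
            def __init__(self, cluster_map, server_maps):
                # cluster_map: resource key -> cluster id (inter-cluster, possibly trans-oceanic)
                # server_maps: cluster id -> {resource key -> server} (intra-cluster, LAN-local)
                self.cluster_map = cluster_map
                self.server_maps = server_maps

            def lookup_cluster(self, resource_key):
                # Act one: locate the cluster that handles the resource.
                return self.cluster_map[resource_key]

            def lookup_server(self, cluster_id, resource_key):
                # Act two: locate the actual server within that cluster.
                return self.server_maps[cluster_id][resource_key]

            def lookup(self, resource_key):
                cluster_id = self.lookup_cluster(resource_key)
                return cluster_id, self.lookup_server(cluster_id, resource_key)

        prs = PartitioningRecoveryService(
            cluster_map={"presence:alice": "cluster-B"},
            server_maps={"cluster-B": {"presence:alice": "server-42"}},
        )
        print(prs.lookup("presence:alice"))  # ('cluster-B', 'server-42')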
  • FIG. 1 provides a high-level overview 100 of the Partitioning and Recovery Service (PRS) design.
  • cluster 112 can include a partitioning and recovery manager (PRM) component 102 that typically can be part of every cluster.
  • Partitioning and recovery manager (PRM) component 102 can be the authority for distributing resources to owner nodes (e.g., owner nodes 108 1 , . . . , 108 W ) in the cluster and answering lookup queries for those resources.
  • cluster 112 can also include lookup nodes (e.g., lookup nodes 104 1 , . . . , 104 L ) that can be the source of resource requests to partitioning and recovery manager (PRM) component 102 .
  • Confederated with each owner node 108 1, . . . , 108 W can be an owner library (e.g., owner library 110 1, . . . , 110 W), and similarly, confederated with each lookup node 104 1, . . . , 104 L can be a lookup library (e.g., lookup library 106 1, . . . , 106 L). Instances of owner library 110 1, . . . , 110 W and lookup library 106 1, . . . , 106 L can hold cached or pre-fetched information, but in all instances where there is a conflict, partitioning and recovery manager (PRM) component 102 is always the authority.
  • Partitioning and recovery manager (PRM) component 102 can also be responsible for informing lookup libraries (e.g., lookup library 106 1 , . . . , 106 L associated with respective lookup node 104 1 , . . . , 104 L ) which remote or destination partitioning and recovery manager (PRM) component to contact so that inter-cluster (e.g., between cluster 112 and one or more other geographically dispersed clusters) lookups can be possible.
  • owner nodes 108 1, . . . , 108 W that want to host resources can typically link with the owner library (e.g., owner library 110 1, . . . , 110 W), and nodes that want to perform lookups can link with the lookup library (e.g., lookup library 106 1, . . . , 106 L).
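  • A rough sketch of how owner and lookup libraries might interact with the PRM is shown below; the names and the caching policy are assumptions for illustration. The libraries cache assignments, while the PRM remains the authority:

        class PRM:
            def __init__(self):
                self.assignments = {}          # resource key -> owner node

            def assign(self, resource_key, owner_node):
                self.assignments[resource_key] = owner_node

            def lookup(self, resource_key):
                return self.assignments.get(resource_key)

        class LookupLibrary:
            def __init__(self, prm):
                self.prm = prm
                self.cache = {}                # cached or pre-fetched information

            def lookup(self, resource_key):
                if resource_key not in self.cache:
                    # On a miss, defer to the PRM, which is always the authority.
                    self.cache[resource_key] = self.prm.lookup(resource_key)
                return self.cache[resource_key]

        prm = PRM()
        prm.assign("presence:alice", "owner-node-108-1")
        print(LookupLibrary(prm).lookup("presence:alice"))  # owner-node-108-1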
  • partitioning and recovery manager (PRM) component 102 can be implemented entirely in hardware and/or as a combination of hardware and/or software in execution. Further, partitioning and recovery manager (PRM) component 102, lookup nodes 104 1, . . . , 104 L, and owner nodes 108 1, . . . , 108 W can be incorporated within and/or associated with other compatible components.
  • partitioning and recovery manager (PRM) component 102, lookup nodes 104 1, . . . , 104 L, and/or owner nodes 108 1, . . . , 108 W can be, but are not limited to, any type of machine that includes a processor and/or is capable of effective communication with a network topology. Illustrative machines upon which partitioning and recovery manager (PRM) component 102, lookup nodes 104 1, . . . , 104 L, and owner nodes 108 1, . . . , 108 W can be effectuated can include desktop computers, server class computing devices, cell phones, smart phones, laptop computers, notebook computers, Tablet PCs, consumer and/or industrial devices and/or appliances, hand-held devices, personal digital assistants, multimedia Internet mobile phones, multimedia players, and the like.
  • An illustrative network topology can include any viable communication and/or broadcast technology, for example, wired and/or wireless modalities and/or technologies can be utilized to effectuate the claimed subject matter.
  • a network topology can include utilization of Personal Area Networks (PANs), Local Area Networks (LANs), Campus Area Networks (CANs), Metropolitan Area Networks (MANs), extranets, intranets, the Internet, Wide Area Networks (WANs)—both centralized and/or distributed—and/or any combination, permutation, and/or aggregation thereof.
  • the network topology can include or encompass communications or interchange utilizing Near-Field Communications (NFC) and/or communications utilizing electrical conductance through the human skin, for example.
  • the owner libraries (e.g., owner library 110 1, . . . , 110 W) and lookup libraries (e.g., lookup library 106 1, . . . , 106 L) associated with owner nodes 108 1, . . . , 108 W and lookup nodes 104 1, . . . , 104 L can be, for example, persisted on volatile memory or non-volatile memory, or can include utilization of both volatile and non-volatile memory.
  • non-volatile memory can include read-only memory (ROM), programmable read only memory (PROM), electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which can act as external cache memory.
  • RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink® DRAM (SLDRAM), Rambus® direct RAM (RDRAM), direct Rambus® dynamic RAM (DRDRAM), and Rambus® dynamic RAM (RDRAM).
  • FIG. 2 illustrates an end-to-end use of the Partitioning and Recovery Service (PRS) 200 by a device connectivity service in a single cluster (e.g., cluster 112 ).
  • the cluster can include a plurality of client components 202 1 , . . . , 202 A that can initiate a request for one or more resources resident or extant within the cluster.
  • Client components 202 1, . . . , 202 A, via a network topology, can be in continuous and/or operative or sporadic and/or intermittent communication with load balancer component 204 that can rapidly distribute requests for resources from client components 202 1, . . . , 202 A to multiple front end components 206 1, . . . , 206 B.
  • Client components 202 1, . . . , 202 A can be implemented entirely in hardware and/or as a combination of hardware and/or software in execution. Further, client components 202 1, . . . , 202 A can be incorporated within and/or associated with other compatible components. Additionally, client components 202 1, . . . , 202 A can be, but are not limited to, any type of machine that includes a processor and/or is capable of effective communication with a network topology. Illustrative machines that can comprise client components 202 1, . . . , 202 A can include desktop computers, server class computing devices, cell phones, smart phones, laptop computers, notebook computers, Tablet PCs, consumer and/or industrial devices and/or appliances, hand-held devices, personal digital assistants, multimedia Internet mobile phones, multimedia players, and the like.
  • Load balancer component 204 rapidly distributes the incoming requests from the various client components 202 1, . . . , 202 A to ensure that no single front end component 206 1, . . . , 206 B is disproportionately targeted with making lookup calls to the partitioning and recovery manager component 208. Accordingly, load balancer component 204 can employ one or more load balancing techniques in order to smooth the flow and rapidly disseminate the requests from client components 202 1, . . . , 202 A to front end components 206 1, . . . , 206 B.
  • load balancing techniques or scheduling algorithms can include, without limitation, such techniques as round robin scheduling, deadline-monotonic priority assignment, highest response ratio next, rate-monotonic scheduling, proportional share scheduling, interval scheduling, etc.
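  • For illustration only, a round robin scheduler, one of the techniques named above, could be sketched as follows (the front end identifiers are invented and not part of the disclosure):

        import itertools

        class RoundRobinBalancer:
            def __init__(self, front_ends):
                self._cycle = itertools.cycle(front_ends)

            def dispatch(self, request):
                # Rotate through front ends so no single one is disproportionately targeted.
                front_end = next(self._cycle)
                return front_end, request

        balancer = RoundRobinBalancer(["frontend-206-1", "frontend-206-2", "frontend-206-3"])
        for req in ["lookup:alice", "lookup:bob", "lookup:carol", "lookup:dave"]:
            print(balancer.dispatch(req))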
  • the facilities and functionalities of load balancer component 204 can be performed on, but is not limited to, any type of mechanism, machine, device, facility, and/or instrument that includes a processor and/or is capable of effective and/or operative communications with network topology.
  • Mechanisms, machines, devices, facilities, and/or instruments that can comprise load balancer component 204 can include Tablet PC's, server class computing machines and/or databases, laptop computers, notebook computers, desktop computers, cell phones, smart phones, consumer appliances and/or instrumentation, industrial devices and/or components, hand-held devices, personal digital assistants, multimedia Internet enabled phones, multimedia players, and the like.
  • Front end components 206 1, . . . , 206 B can link with the lookup libraries associated with each of the front end components 206 1, . . . , 206 B and make lookup calls to the partitioning and recovery manager component 208.
  • Front end components 206 1, . . . , 206 B, like client components 202 1, . . . , 202 A and load balancer component 204, can be implemented entirely in hardware and/or as a combination of hardware and/or software in execution. Further, front end components 206 1, . . . , 206 B can be, but are not limited to, any type of engine, machine, instrument of conversion, or mode of production that includes a processor and/or is capable of effective and/or operative communications with network topology.
  • Illustrative instruments of conversion, modes of production, engines, mechanisms, devices, and/or machinery that can comprise and/or embody front end components 206 1 , . . . , 206 B can include desktop computers, server class computing devices and/or databases, cell phones, smart phones, laptop computers, notebook computers, Tablet PCs, consumer and/or industrial devices and/or appliances and/or processes, hand-held devices, personal digital assistants, multimedia Internet enabled mobile phones, multimedia players, and the like.
  • Partitioning and recovery manager component 208 can be the authority for distributing resources to server components 210 1 , . . . , 210 C and answering lookup queries for those resources. Additionally, partitioning and recovery manager component 208 can be responsible for informing lookup libraries associated with respective front end components 206 1 , . . . , 206 B which remote or destination partitioning and recovery manager (PRM) component in a geographically dispersed cluster to contact so that inter-cluster lookups can be effectuated.
  • Server components 210 1 , . . . , 210 C can store resources, such as presence documents, that can on request be supplied to fulfill resource requests emanating from one or more client components 202 1 , . . . , 202 A .
  • Server components 210 1, . . . , 210 C, like client components 202 1, . . . , 202 A, load balancer component 204, and front end components 206 1, . . . , 206 B, can be any type of mechanism, machine, device, facility, and/or instrument such as embedded auto personal computers (AutoPCs), appropriately instrumented hand-held personal computers, Tablet PCs, laptop computers, notebook computers, cell phones, smart phones, portable consumer appliances and/or instrumentation, mobile industrial devices and/or components, hand-held devices, personal digital assistants, multimedia Internet enabled phones, multimedia players, server class computing environments, and the like.
  • if a server component (e.g., one or more of server components 210 1, . . . , 210 C) fails, the lookup libraries associated with front end components 206 1, . . . , 206 B can issue notifications to calling code (e.g., to the code issuing resource requests emanating from one or more client components 202 1, . . . , 202 A).
  • partitioning and recovery manager component 208 provides two guarantees: (i) at-most one owner guarantee: there is at most one owner node (e.g., server component 210 ) that owns or controls a particular resource at any given point in time; and (ii) recovery notifications guarantee: if an owner node (e.g., server component 210 ) crashes or loses resources (or part thereof), the lookup libraries associated with front end components 206 1 , . . . , 206 B , will issue recovery notifications in a timely manner.
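  • A minimal sketch of these two guarantees, assuming a hypothetical in-memory assignment map and callback interface rather than the actual implementation, might be:

        class PartitioningRecoveryManager:
            def __init__(self):
                self.owner_of = {}       # resource key -> owner; at most one owner per key
                self.subscribers = []    # lookup libraries awaiting recovery notifications

            def assign(self, resource_key, owner):
                # Overwriting any previous entry preserves the at-most-one-owner guarantee.
                self.owner_of[resource_key] = owner

            def subscribe(self, callback):
                self.subscribers.append(callback)

            def owner_failed(self, owner):
                # Find resources lost with the failed owner and notify subscribers promptly.
                lost = [k for k, o in self.owner_of.items() if o == owner]
                for key in lost:
                    del self.owner_of[key]
                for notify in self.subscribers:
                    notify(lost)

        prm = PartitioningRecoveryManager()
        prm.assign("presence:alice", "server-210-1")
        prm.subscribe(lambda keys: print("recovering:", keys))
        prm.owner_failed("server-210-1")   # prints: recovering: ['presence:alice']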
  • subscripts A, B, C utilized in relation to the description of client components 202 1 , . . . , 202 A , front end components 206 1 , . . . , 206 B , and server components 210 1 , . . . , 210 C denote integers greater than zero, and are employed, for the most part, to connote a respective plurality of the aforementioned components.
  • lookup libraries associated with front end components in a first cluster can be associated with a datacenter located in Salinas, Calif.
  • lookup libraries affiliated with front end components in a second cluster can be affiliated with a datacenter located in Ulan Bator, Mongolia.
  • a further aim of the claimed subject matter is to also effectively associate the facilities and functionalities included in owner libraries associated with multiple server components that comprise a first cluster associated with a datacenter in a first geographical location with the functionalities and facilities included in owner libraries associated with multiple server components dispersed to form a second cluster associated with a datacenter situated in a second geographical location, where the first and second geographical locations are separated by distance and geography.
  • owner libraries associated with multiple server components included in a first cluster and associated with a datacenter in a first geographical location can be situated in Vancouver, British Columbia
  • owner libraries associated with multiple server components included in a second cluster and affiliated with a datacenter in a second geographical location can be located in Utica, N.Y.
  • the multiple server components and multiple front end components included in a cluster can also be geographically dispersed.
  • the aggregation of clusters to form datacenters can also include multiple clusters that in and of themselves are situationally dispersed.
  • a first set of server and front end components can be located in East Cleveland, Ohio
  • a second set of server and front end components can be located in Xenia, Ohio
  • a third set of server and front end components can be located in Macon, Ga.
  • the first, second, and/or third set of server and front end components can be aggregated to form a first cluster.
  • server and front end components located in Troy, N.Y., Chicopee, Mass., and Blue Bell, Pa. respectively can form a second cluster.
  • Such multiple clusters of geographically dispersed sets of server and front end components can be agglomerated to comprise a datacenter.
  • the problem overcome by the claimed subject matter, therefore, relates to the fact that a given front end and its associated lookup libraries can now be in one datacenter situated in Manaus, Brazil, for example, and may need to communicate with a server component, and its associated owner libraries, situated in Beijing, China to fulfill a resource request. Accordingly, the lookup libraries associated with the front end component situated in the datacenter in Manaus, Brazil need to be informed that the server they wish to communicate with is located in a datacenter in Beijing, China, for instance.
  • the front end can determine how it should establish such a communications link.
  • front end component and its associated lookup libraries can handle the fact that a requested resource is being controlled or is owned by a server component situated in a geographically disparate location.
  • lookups can be resolved to the cluster level or the owner level and calling services can have a number of options.
  • the lookup library can resolve the resource's location only to the datacenter/cluster level. It is expected that the client component (or the calling service) will then resolve the exact machine by calling the lookup function in the destination cluster.
  • the front end can obtain the lookup result from a library associated with the partitioning and recovery manager (e.g., partitioning and recovery manager 208) using a lookup call and supply the result to a locator service library.
  • the locator service can then return the domain name system (DNS) name of the cluster at which point the calling client component can be redirected to the destination cluster where a further lookup can be performed to identify the name of the machine handling or controlling the resource being requested by the calling or requesting client component.
  • a service-specific redirection mechanism can be employed wherein a front end component can locate the datacenter and cluster of the resource and thereafter perform a service-specific action such as, for example, translating a location-independent URL for the resource to a location-dependent URL for the resource.
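  • By way of a hedged example of such service-specific redirection, a location-independent URL could be rewritten to a location-dependent one as follows; the function, host mapping, and URL scheme below are assumptions for illustration, not part of the disclosure:

        def make_location_dependent(url_path, resource_key, cluster_lookup, cluster_dns):
            # Resolve the resource to its destination cluster (e.g., via a PRM/locator lookup),
            # then prefix the path with that cluster's DNS name.
            cluster_id = cluster_lookup(resource_key)
            host = cluster_dns[cluster_id]
            return "https://" + host + url_path

        cluster_dns = {"cluster-Y": "cluster-y.example.net"}   # example mapping only
        translated = make_location_dependent(
            "/presence/alice",
            "presence:alice",
            cluster_lookup=lambda key: "cluster-Y",
            cluster_dns=cluster_dns,
        )
        print(translated)   # https://cluster-y.example.net/presence/alice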
  • FIG. 3 illustrates a system 300 that can be employed to effectuate resource interchange between a front end component included in a first cluster and associated with a first datacenter situated in a first geographical location and a server component included in a second cluster and associated with a second datacenter situated in a second geographical location, wherein each of the first and second geographical locations are geographically remote from one another.
  • system 300 can include cluster A 302 that is associated with a datacenter situated in a first geographic location, for example, Athens, Greece, and cluster B 304 that is associated with a data center situated in a second geographic location, for instance, Broken Hill, Australia.
  • each of cluster A 302 and cluster B 304 can be but one cluster of many clusters associated with each respective datacenter situated in the first geographic location and the second geographic location.
  • Cluster A 302 can include front end component 306 together with its associated lookup libraries and partitioning and recovery manager component 208 A
  • cluster B 304 can include server component 308 together with its affiliated owner libraries and partitioning and recovery manager component 208 B .
  • partitioning and recovery manager components 208 A and 208 B can be present in every cluster and are typically the authority for distributing resources from front end component 306 to server component 308. Since the general facilities and functionalities of the partitioning and recovery manager component have been set forth above, a detailed description of such attributes has been omitted for the sake of brevity and to avoid needless repetition.
  • front end component 306 on receipt of resource requests conveyed from a load balancer component (e.g., 204 ) and emanating from one or more client components (e.g., client components 202 1 , . . . , 202 A ) can utilize its associated lookup libraries and send the resource request directly to server component 308 located in a destination datacenter situated in a geographically disparate location for fulfillment of the resource request (e.g., from owner libraries associated with server component 308 ).
  • both the server component and front end components can be configured and/or tuned for inter-cluster intra-datacenter communications (e.g., front end and server components are tuned for instantaneous or near-instantaneous response times within clusters associated with a specific datacenter, where communications latency is minimal)
  • the direct approach can fail where inter-datacenter communications are to be effectuated since communication latency with respect to inter-datacenter communications can be measurably significant.
  • FIG. 4 provides illustration of a system 400 that can be utilized to more effectively facilitate inter-datacenter resource interchange between front end components included in a first cluster and associated with a first datacenter situated in a first geographical location and a server component included in a second cluster and associated with a second datacenter in a second geographic location, wherein each of the first and second geographical locations are geographically remote from one another.
  • system 400 includes two clusters, cluster X 402 , associated with a first datacenter situated in a first geographic location (e.g., Mississauga, Canada), and cluster Y 404 , associated with a second datacenter situated in a second geographic location (e.g., Cancun, Mexico).
  • the first and second geographical locations can be both distantly dispersed as well as geographically distant.
  • the first datacenter situated in the first geographical location can merely be a short distance from the second datacenter situated in a second geographical location.
  • the first datacenter can be located anywhere from a few meters to many hundreds or thousands of kilometers from the second datacenter.
  • Cluster X 402 can include front end component 406 together with its associated lookup library 408 and partitioning and recovery manager component 208 X, the respective functionalities and/or facilities of which have been expounded upon above in connection with FIGS. 1-3, and as such a detailed description of such features has been omitted. Nevertheless, in addition to the foregoing components, cluster X 402 can also include an inter-cluster gateway component 410 X that can facilitate and/or effectuate communication with a counterpart inter-cluster gateway 410 Y situated in cluster Y 404 located at a geographically dispersed distance from cluster X 402.
  • Cluster Y 404, in addition to inter-cluster gateway component 410 Y, also can include proxy component 412 that, like front end component 406, can include an associated lookup library. Further, cluster Y 404 can also include the prototypical partitioning and recovery manager component 208 Y, which, as will be observed by those moderately skilled in this field of endeavor, typically can be present in all clusters set forth in the claimed subject matter. Cluster Y 404 can further include server component 414 together with its owner library where the resource being sought by a client component (e.g., 202 1, . . . , 202 A) can be reposited.
  • a remote resource request (e.g., the resource needed is persisted and associated with a server component located in a cluster associated with a geographically dispersed datacenter) from a client component can be received by front end component 406 situated in cluster X 402 .
  • front end component 406 is typically unaware of the fact that the resource request pertains to a remotely reposited resource and thus can consult its associated lookup library 408 .
  • Lookup library 408, since the resource request at this point has never been satisfied before, will be equally unaware of where and/or how the resource request can be fulfilled, and as such can utilize the facilities and/or functionalities of partitioning and recovery manager 208 X to obtain an indication that the sought after resource included in the resource request is reposited in a cluster associated with a datacenter geographically distant from the cluster in which the resource request has been received.
  • the cluster information returned from partitioning and recovery manager 208 X can then be utilized to populate the lookup library 408 with the received cluster information, after which front end component 406 can construct a message that includes the cluster information recently gleaned from partitioning and recovery manager component 208 X , together with the service or resource that is being requested from the server situated in the remote/destination cluster (e.g., cluster Y 404 ).
  • the message so constructed by front end component 406 can then be conveyed to inter-cluster gateway component 410 X for dispatch to inter-cluster gateway component 410 Y associated and situated with the remote/destination cluster (e.g., cluster Y 404 ).
  • inter-cluster gateway component 410 Y can examine the cluster information included in the message to determine that the message has been received by both the correct cluster and the correct geographically remote or destination datacenter. Having ascertained that the message has been received by both the correct cluster and the correct remote/destination datacenter, inter-cluster gateway component 410 Y can forward the message to proxy component 412 and its associated lookup libraries. It should be noted at this juncture that the operation of, and the functions and/or facilities provided by, proxy component 412 and its associated lookup libraries can be similar to those provided by front end component 406 and its associated lookup library 408.
  • proxy component 412, in conjunction with its associated libraries, can ascertain which server component 414 within cluster Y 404 is capable of fulfilling the resource request received from front end component 406 located in cluster X 402.
  • proxy component 412 can employ its associated libraries to resolve who (e.g., which server component within cluster Y 404) is capable of handling or satisfying the remote resource request received from front end component 406 situated in cluster X 402 via inter-cluster gateway components 410 X and 410 Y.
  • proxy component 412 can forward the remote request to server component 414 for satisfaction of the remote request.
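  • The FIG. 4 flow described above can be summarized in the following illustrative sketch, in which the front end tags the request with the cluster identity, the gateways relay the message, and the destination proxy resolves the owning server; all identifiers and payloads are hypothetical:

        def front_end_dispatch(resource_key, prm_lookup, local_gateway):
            cluster_id = prm_lookup(resource_key)          # consult PRM 208X via lookup library 408
            message = {"cluster": cluster_id, "resource": resource_key}
            return local_gateway(message)                   # hand off to inter-cluster gateway 410X

        def remote_gateway(message, local_cluster_id, proxy_fn):
            assert message["cluster"] == local_cluster_id   # verify correct cluster and datacenter
            return proxy_fn(message)                        # forward to proxy component 412

        def proxy(message, server_lookup, servers):
            server_id = server_lookup(message["resource"])  # proxy's lookup library resolves the server
            return servers[server_id](message["resource"])  # server component 414 fulfills the request

        servers = {"server-414": lambda key: "payload for " + key}
        result = front_end_dispatch(
            "presence:alice",
            prm_lookup=lambda key: "cluster-Y",
            local_gateway=lambda msg: remote_gateway(
                msg,
                "cluster-Y",
                proxy_fn=lambda m: proxy(m, server_lookup=lambda k: "server-414", servers=servers),
            ),
        )
        print(result)   # payload for presence:alice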
  • FIG. 5 provides depiction of a further system 500 that can be employed to facilitate and/or effectuate inter-datacenter resource interchange in accordance with an aspect of the claimed subject matter.
  • system 500 includes two clusters, cluster S 502 , associated with a first datacenter situated in a first geographic location (e.g., Selma, Ala.), and cluster C 504 , associated with a second datacenter situated in a second geographic location (e.g., Copenhagen, Denmark).
  • the first and second geographical locations can be both distantly dispersed as well as geographically distant.
  • the first datacenter situated in the first geographical location can merely be a short distance from the second datacenter situated in a second geographical location.
  • the first datacenter can be located anywhere from a few meters to many hundreds or thousands of kilometers from the second datacenter.
  • Cluster S 502 can include front end component 506 together with its associated lookup library 508 and partitioning and recovery manager component 208 S, the respective functionalities and/or facilities of which have been expounded upon above in connection with FIGS. 1-4, and as such a detailed description of such features has been omitted. Nevertheless, in addition to the foregoing components, cluster S 502 can also include an inter-cluster gateway component 510 S that can facilitate and/or effectuate communication with a counterpart inter-cluster gateway 510 C situated in cluster C 504 located at a geographically dispersed distance from cluster S 502.
  • a remote resource request (e.g., the resource needed is persisted and associated with a server component located in a cluster associated with a geographically dispersed datacenter) from a client component can be received by front end component 506 situated in cluster S 502 .
  • the front end component 506 can be aware of the server component 512 that has control or possession of the needed resource, but nevertheless can be unaware of the cluster and/or datacenter in which server component 512 resides.
  • front end component 506 can consult its associated lookup library 508 .
  • Lookup library 508 can utilize the facilities and/or functionalities of partitioning and recovery manager 208 S to obtain an indication that server component 512, which controls or handles the sought after resource included in the resource request, is associated with cluster C 504, which in turn is associated with a datacenter geographically distant from the cluster in which the resource request has been received.
  • Front end component 506 can thereafter construct a message that includes the cluster information recently gleaned from partitioning and recovery manager component 208 S, together with the identity of the destination or remote server (e.g., server component 512) that controls or handles the service or resource that is being requested.
  • the message so constructed can then be conveyed to inter-cluster gateway component 510 S for dispatch to inter-cluster gateway component 510 C associated and situated with the remote/destination cluster (e.g., cluster C 504).
  • inter-cluster gateway component 510 C can forward the message directly to server component 512 for satisfaction of the remote request.
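  • A comparable sketch of the FIG. 5 variant (again with invented names) shows the message carrying the server identity so the destination gateway can bypass the proxy hop of FIG. 4:

        def front_end_dispatch(resource_key, prm_lookup, local_gateway):
            # Here the lookup already yields both the cluster and the remote server identity.
            cluster_id, server_id = prm_lookup(resource_key)
            message = {"cluster": cluster_id, "server": server_id, "resource": resource_key}
            return local_gateway(message)

        def remote_gateway(message, servers):
            # The destination gateway delivers directly to the named server (e.g., server 512).
            return servers[message["server"]](message["resource"])

        servers = {"server-512": lambda key: "payload for " + key}
        print(front_end_dispatch(
            "presence:alice",
            prm_lookup=lambda key: ("cluster-C", "server-512"),
            local_gateway=lambda msg: remote_gateway(msg, servers),
        ))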
  • FIG. 6 provides further illustration of a system that can be utilized to effectuate and/or facilitate inter-datacenter resource interchange in accordance with a further aspect of the claimed subject matter.
  • FIG. 6 depicts an architecture 700 that can be employed to enable inter-cluster interactions.
  • Two broad categories of design issues are addressed by the architecture: ownership issues with respect to owners, and lookup/recovery notification issues.
  • the root geo-resource manager (RGRM) component 602 , sub geo-resource manager (SGRM) components 604 A and 604 B , and an owner manager bridge associated with cluster resource manager (CRM) components 606 A and 606 B mostly help with the former, whereas the lookup manager forward proxies (LMFP) 610 A and 610 B and lookup manager reverse proxies (LMRP) 608 A and 608 B largely help in the latter case.
  • the SGRM component, LMFP, and the LMRP are typically all scale-out components
  • Root geo-resource manager (RGRM) component 602 is a centralized manager that scales out the sub geo-resource manager components 604 A and 604 B .
  • the sub geo-resource manager components 604 A and 604 B can hold resource assignments and then can delegate these assignments to individual local partitioning and recovery management components associated with local cluster resource manager (CRM) components 606 A and 606 B .
  • the resource assignment to different local partitioning and recovery manager components can be done in an automated manner or using an administrative interface, for example.
  • Sub geo-resource manager component 604 A and 604 B can assign resources to global owners where each such owner runs in a cluster. This owner can be co-located in cluster resource manager (CRM) component 606 A and 606 B with a local partitioning and recovery manager that assigns resources to local owners. These two components can be connected by an owner manager bridge that can receive resources from a global owner and convey them to the local partitioning and recovery manager and can also handle the corresponding recalls from the global owner as well.
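  • The delegation chain just described, from the root geo-resource manager to sub geo-resource managers to a cluster's local partitioning and recovery manager via the owner manager bridge, might be sketched as follows; the class names and range representation are assumptions:

        class LocalPRM:
            def __init__(self, cluster_id):
                self.cluster_id = cluster_id
                self.ranges = []

            def accept(self, resource_range):
                # The owner manager bridge hands the delegated range down to the local manager.
                self.ranges.append(resource_range)

        class SubGeoResourceManager:
            def __init__(self, local_prms):
                self.local_prms = local_prms        # cluster id -> LocalPRM

            def delegate(self, resource_range, cluster_id):
                self.local_prms[cluster_id].accept(resource_range)

        class RootGeoResourceManager:
            def __init__(self, sub_managers):
                self.sub_managers = sub_managers    # the SGRMs the root scales out

            def assign(self, resource_range, sgrm_index, cluster_id):
                self.sub_managers[sgrm_index].delegate(resource_range, cluster_id)

        crm_a = LocalPRM("cluster-A")
        rgrm = RootGeoResourceManager([SubGeoResourceManager({"cluster-A": crm_a})])
        rgrm.assign(("key-0000", "key-7fff"), sgrm_index=0, cluster_id="cluster-A")
        print(crm_a.ranges)   # [('key-0000', 'key-7fff')]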
  • the motivation for dividing sub geo-resource managers 604 A and 604 B from the root geo-resource manager 602 is that the amount of state that might need to be maintained for mapping resource ranges to specific clusters can be many terabytes.
  • the lookup manager forward proxies 610 A and 610 B can handle lookup requests from local client components for remote clusters. Lookup manager forward proxies 610 A and 610 B can also handle incoming recovery notifications for local lookup nodes from remote clusters.
  • the lookup manager forward proxies 610 A and 610 B help in connection aggregation across clusters, e.g., instead of having many lookup nodes connect to remote cluster(s), only a few lookup manager forward proxies 610 A and 610 B need to make connections per cluster. Furthermore, these lookup manager forward proxies 610 A and 610 B can be useful in aggregating a cluster's traffic.
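  • A small, assumption-laden sketch of this connection aggregation (the proxy class and resolver callables are hypothetical) follows:

        class LookupManagerForwardProxy:
            def __init__(self, remote_clusters):
                # One logical connection per remote cluster, rather than one per lookup node.
                self.connections = dict(remote_clusters)

            def remote_lookup(self, cluster_id, resource_key):
                return self.connections[cluster_id](resource_key)

        lmfp = LookupManagerForwardProxy({"cluster-B": lambda key: "server-42"})
        # Every local lookup node reuses the same proxy, and hence the same connection.
        for node in ["lookup-node-1", "lookup-node-2", "lookup-node-3"]:
            print(node, lmfp.remote_lookup("cluster-B", "presence:alice"))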
  • program modules can include routines, programs, objects, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • functionality of the program modules may be combined and/or distributed as desired in various aspects.
  • FIG. 7 illustrates a method to effectuate and/or facilitate inter-datacenter resource interchange in accordance with an aspect of the claimed subject matter.
  • a resource request can be received by a front end component.
  • the front end component can consult a partitioning and recovery manager aspect to ascertain the appropriate cluster information as to where the server component capable of fulfilling the received resource request is located.
  • Once the partitioning and recovery manager aspect responds with the appropriate cluster information, the lookup library associated with the front end component can be populated with the returned information.
  • the returned cluster information can be combined with the resource request and conveyed to a first inter-cluster gateway for dispatch to a second inter-cluster gateway associated with a remote cluster.
  • the returned cluster information together with the resource request can be received at the second inter-cluster gateway and thereafter conveyed to a proxy component at 712 .
  • Once the proxy component has ascertained the server that is capable of serving or fulfilling the resource request, the request can be conveyed to the identified server for servicing or fulfillment.
  • FIG. 8 depicts a further methodology to effectuate and/or facilitate inter-datacenter resource interchange in accordance with a further aspect of the claimed subject matter.
  • a resource request can be received by a front end component.
  • the front end component can consult a partitioning and recovery manager aspect to ascertain the appropriate cluster information as to where the server component capable of fulfilling the received resource request is located.
  • Once the partitioning and recovery manager aspect responds with the appropriate cluster information, the lookup library associated with the front end component can be utilized to identify the correct destination server (e.g., a server affiliated with a cluster associated with a datacenter at a remote location).
  • the returned cluster information together with the destination server information can be conveyed to a first inter-cluster gateway for dispatch to a second inter-cluster gateway associated with a remote cluster.
  • the returned cluster information together with the resource request can be received at the second inter-cluster gateway and thereafter conveyed to the server that is capable of serving or fulfilling the resource request at 812 .
  • each component of the system can be an object in a software routine or a component within an object.
  • Object oriented programming shifts the emphasis of software development away from function decomposition and towards the recognition of units of software called “objects” which encapsulate both data and functions.
  • Object Oriented Programming (OOP) objects are software entities comprising data structures and operations on data. Together, these elements enable objects to model virtually any real-world entity in terms of its characteristics, represented by its data elements, and its behavior represented by its data manipulation functions. In this way, objects can model concrete things like people and computers, and they can model abstract concepts like numbers or geometrical concepts.
  • a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a server and the server can be a component.
  • One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers.
  • computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick, key drive . . . ).
  • a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN).
  • FIG. 9 there is illustrated a block diagram of a computer operable to execute the disclosed system.
  • FIG. 9 and the following discussion are intended to provide a brief, general description of a suitable computing environment 900 in which the various aspects of the claimed subject matter can be implemented. While the description above is in the general context of computer-executable instructions that may run on one or more computers, those skilled in the art will recognize that the subject matter as claimed also can be implemented in combination with other program modules and/or as a combination of hardware and software.
  • program modules include routines, programs, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
  • inventive methods can be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which can be operatively coupled to one or more associated devices.
  • Computer-readable media can be any available media that can be accessed by the computer and includes both volatile and non-volatile media, removable and non-removable media.
  • Computer-readable media can comprise computer storage media and communication media.
  • Computer storage media includes both volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital video disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer.
  • the illustrative environment 900 for implementing various aspects includes a computer 902 , the computer 902 including a processing unit 904 , a system memory 906 and a system bus 908 .
  • the system bus 908 couples system components including, but not limited to, the system memory 906 to the processing unit 904 .
  • the processing unit 904 can be any of various commercially available processors. Dual microprocessors and other multi-processor architectures may also be employed as the processing unit 904 .
  • the system bus 908 can be any of several types of bus structure that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures.
  • the system memory 906 includes read-only memory (ROM) 910 and random access memory (RAM) 912 .
  • a basic input/output system (BIOS) is stored in a non-volatile memory 910 such as ROM, EPROM, EEPROM, which BIOS contains the basic routines that help to transfer information between elements within the computer 902 , such as during start-up.
  • the RAM 912 can also include a high-speed RAM such as static RAM for caching data.
  • the computer 902 further includes an internal hard disk drive (HDD) 914 (e.g., EIDE, SATA), which internal hard disk drive 914 may also be configured for external use in a suitable chassis (not shown), a magnetic floppy disk drive (FDD) 916 , (e.g., to read from or write to a removable diskette 918 ) and an optical disk drive 920 , (e.g., reading a CD-ROM disk 922 or, to read from or write to other high capacity optical media such as the DVD).
  • the hard disk drive 914 , magnetic disk drive 916 and optical disk drive 920 can be connected to the system bus 908 by a hard disk drive interface 924 , a magnetic disk drive interface 926 and an optical drive interface 928 , respectively.
  • the interface 924 for external drive implementations includes at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies. Other external drive connection technologies are within contemplation of the claimed subject matter.
  • the drives and their associated computer-readable media provide nonvolatile storage of data, data structures, computer-executable instructions, and so forth.
  • the drives and media accommodate the storage of any data in a suitable digital format.
  • computer-readable media refers to a HDD, a removable magnetic diskette, and a removable optical media such as a CD or DVD, it should be appreciated by those skilled in the art that other types of media which are readable by a computer, such as zip drives, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used in the illustrative operating environment, and further, that any such media may contain computer-executable instructions for performing the methods of the disclosed and claimed subject matter.
  • a number of program modules can be stored in the drives and RAM 912 , including an operating system 930 , one or more application programs 932 , other program modules 934 and program data 936 . All or portions of the operating system, applications, modules, and/or data can also be cached in the RAM 912 . It is to be appreciated that the claimed subject matter can be implemented with various commercially available operating systems or combinations of operating systems.
  • a user can enter commands and information into the computer 902 through one or more wired/wireless input devices, e.g., a keyboard 938 and a pointing device, such as a mouse 940 .
  • Other input devices may include a microphone, an IR remote control, a joystick, a game pad, a stylus pen, touch screen, or the like.
  • These and other input devices are often connected to the processing unit 904 through an input device interface 942 that is coupled to the system bus 908, but can be connected by other interfaces, such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, etc.
  • a monitor 944 or other type of display device is also connected to the system bus 908 via an interface, such as a video adapter 946 .
  • a computer typically includes other peripheral output devices (not shown), such as speakers, printers, etc.
  • the computer 902 may operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers, such as a remote computer(s) 948 .
  • the remote computer(s) 948 can be a workstation, a server computer, a router, a personal computer, portable computer, microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 902 , although, for purposes of brevity, only a memory/storage device 950 is illustrated.
  • the logical connections depicted include wired/wireless connectivity to a local area network (LAN) 952 and/or larger networks, e.g., a wide area network (WAN) 954 .
  • LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, e.g., the Internet.
  • the computer 902 When used in a LAN networking environment, the computer 902 is connected to the local network 952 through a wired and/or wireless communication network interface or adapter 956 .
  • the adaptor 956 may facilitate wired or wireless communication to the LAN 952 , which may also include a wireless access point disposed thereon for communicating with the wireless adaptor 956 .
  • the computer 902 can include a modem 958 , or is connected to a communications server on the WAN 954 , or has other means for establishing communications over the WAN 954 , such as by way of the Internet.
  • the modem 958 which can be internal or external and a wired or wireless device, is connected to the system bus 908 via the serial port interface 942 .
  • program modules depicted relative to the computer 902 can be stored in the remote memory/storage device 950 . It will be appreciated that the network connections shown are illustrative and other means of establishing a communications link between the computers can be used.
  • the computer 902 is operable to communicate with any wireless devices or entities operatively disposed in wireless communication, e.g., a printer, scanner, desktop and/or portable computer, portable data assistant, communications satellite, any piece of equipment or location associated with a wirelessly detectable tag (e.g., a kiosk, news stand, restroom), and telephone.
  • the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices.
  • Wi-Fi is a wireless technology similar to that used in a cell phone that enables such devices, e.g., computers, to send and receive data indoors and out; anywhere within the range of a base station.
  • Wi-Fi networks use radio technologies called IEEE 802.11x (a, b, g, etc.) to provide secure, reliable, fast wireless connectivity.
  • a Wi-Fi network can be used to connect computers to each other, to the Internet, and to wired networks (which use IEEE 802.3 or Ethernet).
  • Wi-Fi networks can operate in the unlicensed 2.4 and 5 GHz radio bands.
  • IEEE 802.11 applies generally to wireless LANs and provides 1 or 2 Mbps transmission in the 2.4 GHz band using either frequency hopping spread spectrum (FHSS) or direct sequence spread spectrum (DSSS).
  • IEEE 802.11a is an extension to IEEE 802.11 that applies to wireless LANs and provides up to 54 Mbps in the 5 GHz band.
  • IEEE 802.11a uses an orthogonal frequency division multiplexing (OFDM) encoding scheme rather than FHSS or DSSS.
  • IEEE 802.11b (also referred to as 802.11 High Rate DSSS or Wi-Fi) is an extension to 802.11 that applies to wireless LANs and provides 11 Mbps transmission (with a fallback to 5.5, 2 and 1 Mbps) in the 2.4 GHz band.
  • IEEE 802.11g applies to wireless LANs and provides 20+Mbps in the 2.4 GHz band.
  • Products can contain more than one band (e.g., dual band), so the networks can provide real-world performance similar to the basic 10 BaseT wired Ethernet networks used in many offices.
  • the system 1000 includes one or more client(s) 1002 .
  • the client(s) 1002 can be hardware and/or software (e.g., threads, processes, computing devices).
  • the client(s) 1002 can house cookie(s) and/or associated contextual information for example.
  • the system 1000 also includes one or more server(s) 1004 .
  • the server(s) 1004 can also be hardware and/or software (e.g., threads, processes, computing devices).
  • the servers 1004 can house threads to perform transformations by employing the claimed subject matter, for example.
  • One possible communication between a client 1002 and a server 1004 can be in the form of a data packet adapted to be transmitted between two or more computer processes.
  • the data packet may include a cookie and/or associated contextual information, for example.
  • the system 1000 includes a communication framework 1006 (e.g., a global communication network such as the Internet) that can be employed to facilitate communications between the client(s) 1002 and the server(s) 1004 .
  • Communications can be facilitated via a wired (including optical fiber) and/or wireless technology.
  • the client(s) 1002 are operatively connected to one or more client data store(s) 1008 that can be employed to store information local to the client(s) 1002 (e.g., cookie(s) and/or associated contextual information).
  • the server(s) 1004 are operatively connected to one or more server data store(s) 1010 that can be employed to store information local to the servers 1004 .

Abstract

The claimed subject matter provides systems and methods that effectuate inter-datacenter resource interchange. The system can include devices that receive a resource request from a client component and forward the resource request to a management component that returns a cluster identity associated with a remote datacenter, the resource request and the cluster identity being combined and dispatched to the remote datacenter via an inter-cluster gateway component for subsequent fulfillment by a remote server associated with the remote datacenter.

Description

    BACKGROUND
  • In recent years there has been a massive push in the computer industry to build enormous datacenters. These datacenters are typically employed to deliver a class of compelling and commercially important applications, such as instant messaging, social networking, and web search. Moreover, scale-out datacenter applications are of enormous commercial interest, yet they can be frustratingly hard to build. A common pattern in building such datacenter applications is to split functionality into stateless frontend servers, soft-state middle tier servers containing complex application logic, and backend storage systems. Nevertheless, to date much prior work has been focused on scalable backend storage systems.
  • The subject matter as claimed is directed toward resolving, or at the very least mitigating, one or all of the problems elucidated above.
  • SUMMARY
  • The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed subject matter. This summary is not an extensive overview, and it is not intended to identify key/critical elements or to delineate the scope thereof. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
  • The claimed subject matter relates to systems and methods that effectuate inter-datacenter resource interchange. The systems can include devices that receive a resource request from a client component and forward the resource request to a management component that returns a cluster identity associated with a remote datacenter, the resource request and the cluster identity being combined and dispatched to the remote datacenter via an inter-cluster gateway component for subsequent fulfillment by a remote server associated with the remote datacenter.
  • To the accomplishment of the foregoing and related ends, certain illustrative aspects of the disclosed and claimed subject matter are described herein in connection with the following description and the annexed drawings. These aspects are indicative, however, of but a few of the various ways in which the principles disclosed herein can be employed and are intended to include all such aspects and their equivalents. Other advantages and novel features will become apparent from the following detailed description when considered in conjunction with the drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a machine-implemented system that effectuates and/or facilitates inter-datacenter resource interchange in accordance with an aspect of the claimed subject matter.
  • FIG. 2 depicts a further machine-implemented system that effectuates and/or facilitates inter-datacenter resource interchange in accordance with an aspect of the claimed subject matter.
  • FIG. 3 provides a more detailed depiction of a machine-implemented system that effectuates and/or facilitates inter-datacenter resource interchange in accordance with an aspect of the claimed subject matter.
  • FIG. 4 provides a depiction of a machine-implemented system that effectuates and/or facilitates inter-datacenter resource interchange in accordance with a further aspect of the claimed subject matter.
  • FIG. 5 illustrates a system implemented on a machine that effectuates and/or facilitates inter-datacenter resource interchange in accordance with a further aspect of the claimed subject matter.
  • FIG. 6 provides a further depiction of a machine implemented system that effectuates and/or facilitates inter-datacenter resource interchange in accordance with a further aspect of the claimed subject matter.
  • FIG. 7 illustrates a flow diagram of a machine implemented methodology that effectuates and/or facilitates inter-datacenter resource interchange in accordance with a further aspect of the claimed subject matter.
  • FIG. 8 illustrates a further flow diagram of a machine implemented methodology that effectuates and/or facilitates inter-datacenter resource interchange in accordance with a further aspect of the claimed subject matter.
  • FIG. 9 illustrates a block diagram of a computer operable to execute the disclosed system in accordance with an aspect of the claimed subject matter.
  • FIG. 10 illustrates a schematic block diagram of an illustrative computing environment for processing the disclosed architecture in accordance with another aspect.
  • DETAILED DESCRIPTION
  • The subject matter as claimed is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding thereof. It may be evident, however, that the claimed subject matter can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate a description thereof.
  • At the outset it should be noted, without limitation or loss of generality, that the term “cluster” as employed herein relates to a set of machines in a datacenter that are a manageable unit of scaling out operations against resources. Typically, a cluster can contain a few hundred machines. Moreover, the term “datacenter” as utilized in the following discussion relates to a collection of nodes and clusters typically co-located within the same physical environment. In general, datacenters are distinct from clusters in that communication latency between datacenters can be significantly higher.
  • As multiple geographically dispersed clusters and datacenters are added to data synchronization systems that allow files, folders, and other data to be shared and synchronized across multiple devices, services will need the ability to locate the correct datacenter and cluster for a particular resource; in that cluster, they can then ask for the particular machine that owns the resource. Furthermore, certain services may need to register for recovery notifications across clusters. In order to accommodate these requirements, the claimed subject matter, through a Partitioning and Recovery Service (PRS), provides mechanisms for placing, migrating, looking up, and recovering soft-state entities, e.g., supporting lookups and recovery notifications across clusters while providing a unified name space for soft-state services.
  • The Partitioning and Recovery Service (PRS) allows hosts to look up a resource key and obtain the cluster or local server where that resource is being handled. In order to perform this operation, the Partitioning and Recovery Service's (PRS's) lookup algorithm is structured into two acts: first, locate the cluster and, second, locate the actual server in the cluster. These two mechanisms have been separated because they can have very different characteristics and requirements. In particular, inter-cluster lookup can require traversing inter-datacenter (perhaps trans-oceanic) links, while intra-cluster lookup is generally confined within a local area network.
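  • As an illustration of the two-act lookup described above, the following Python sketch resolves a resource key first to a cluster and then to a server within that cluster; the class and dictionary names (PRSLookup, cluster_directory, server_directories) are illustrative assumptions rather than the actual Partitioning and Recovery Service interfaces.

```python
class PRSLookup:
    """Minimal sketch of the two-act PRS lookup (cluster first, then server)."""

    def __init__(self, cluster_directory, server_directories):
        # cluster_directory: resource key -> identity of the owning cluster
        # server_directories: per-cluster map of resource key -> owning server
        self.cluster_directory = cluster_directory
        self.server_directories = server_directories

    def lookup(self, resource_key, local_cluster_id):
        # Act 1: locate the cluster (may traverse an inter-datacenter link).
        cluster_id = self.cluster_directory[resource_key]
        # Act 2: locate the actual server in that cluster (LAN-confined).
        server = self.server_directories[cluster_id][resource_key]
        return cluster_id, server, cluster_id != local_cluster_id


# Usage: a key whose owner happens to live in a remote cluster.
prs = PRSLookup(
    cluster_directory={"user:42": "cluster-B"},
    server_directories={"cluster-B": {"user:42": "owner-node-7"}},
)
print(prs.lookup("user:42", local_cluster_id="cluster-A"))
```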
  • FIG. 1 provides a high-level overview 100 of the Partitioning and Recovery Service (PRS) design. As illustrated, cluster 112 can include a partitioning and recovery manager (PRM) component 102 that typically can be part of every cluster. Partitioning and recovery manager (PRM) component 102 can be the authority for distributing resources to owner nodes (e.g., owner nodes 108 1, . . . , 108 W) in the cluster and answering lookup queries for those resources. Additionally, as depicted, cluster 112 can also include lookup nodes (e.g., lookup nodes 104 1, . . . , 104 L) that can be the source of resource requests to partitioning and recovery manager (PRM) component 102. Associated with each owner node 108 1, . . . , 108 W can be an owner library (e.g., owner library 110 1, . . . , 110 W), and similarly, confederated with each lookup node 104 1, . . . , 104 L can be a lookup library (e.g., lookup library 106 1, . . . , 106 L). Instances of owner library 110 1, . . . , 110 W and lookup library 106 1, . . . , 106 L can hold cached or pre-fetched information, but in all instances where there is a conflict, partitioning and recovery manager (PRM) component 102 is the authority.
  • Partitioning and recovery manager (PRM) component 102 can also be responsible for informing lookup libraries (e.g., lookup library 106 1, . . . , 106 L associated with respective lookup node 104 1, . . . , 104 L) which remote or destination partitioning and recovery manager (PRM) component to contact so that inter-cluster (e.g., between cluster 112 and one or more other geographically dispersed clusters) lookups can be possible. Generally, owner nodes 108 1, . . . , 108 W that want to host resources can typically link with the owner library (e.g., owner library 110 1, . . . , 110 W) whereas nodes that want to perform lookup can link with the lookup library (e.g., lookup library 106 1, . . . , 106 L). As will be appreciated by those moderately skilled in this field of endeavor, no end-service typically interacts directly with partitioning and recovery manager (PRM) component 102.
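  • The following sketch, offered only as a hedged illustration, shows how a lookup library might serve cached or pre-fetched entries while treating the partitioning and recovery manager (PRM) as the authority on a miss or conflict; the LookupLibrary and PRMClient names are assumptions, not components defined by the claimed subject matter.

```python
class PRMClient:
    """Stand-in for the authoritative partitioning and recovery manager."""

    def __init__(self, assignments):
        self.assignments = assignments  # authoritative resource -> owner map

    def resolve(self, resource_key):
        return self.assignments[resource_key]


class LookupLibrary:
    """Cached view of resource ownership; never authoritative on its own."""

    def __init__(self, prm_client):
        self.prm = prm_client
        self.cache = {}  # cached or pre-fetched entries

    def lookup(self, resource_key):
        if resource_key in self.cache:
            return self.cache[resource_key]
        owner = self.prm.resolve(resource_key)  # the PRM is always the authority
        self.cache[resource_key] = owner
        return owner

    def invalidate(self, resource_key):
        # On a conflict or recovery notification, drop the stale entry so the
        # next lookup goes back to the PRM.
        self.cache.pop(resource_key, None)


prm = PRMClient({"presence:alice": "owner-node-3"})
library = LookupLibrary(prm)
print(library.lookup("presence:alice"))  # miss: consults the PRM
print(library.lookup("presence:alice"))  # hit: served from the cache
```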
  • It should be noted, without limitation or loss of generality, that partitioning and recovery manager (PRM) component 102, lookup nodes 104 1, . . . , 104 L, and owner nodes 108 1, . . . , 108 W, can be implemented entirely in hardware and/or a combination of hardware and/or software in execution. Further, partitioning and recovery manager (PRM) component 102, lookup nodes 104 1, . . . , 104 L, and owner nodes 108 1, . . . , 108 W, can be incorporated within and/or associated with other compatible components. Additionally, one or more of partitioning and recovery manager (PRM) component 102, lookup nodes 104 1, . . . , 104 L, and/or owner nodes 108 1, . . . , 108 W can be, but are not limited to, any type of machine that includes a processor and/or is capable of effective communication with a network topology. Illustrative machines upon which partitioning and recovery manager (PRM) component 102, lookup nodes 104 1, . . . , 104 L, and owner nodes 108 1, . . . , 108 W can be effectuated can include desktop computers, server class computing devices, cell phones, smart phones, laptop computers, notebook computers, Tablet PCs, consumer and/or industrial devices and/or appliances, hand-held devices, personal digital assistants, multimedia Internet mobile phones, multimedia players, and the like.
  • An illustrative network topology can include any viable communication and/or broadcast technology, for example, wired and/or wireless modalities and/or technologies can be utilized to effectuate the claimed subject matter. Moreover, a network topology can include utilization of Personal Area Networks (PANs), Local Area Networks (LANs), Campus Area Networks (CANs), Metropolitan Area Networks (MANs), extranets, intranets, the Internet, Wide Area Networks (WANs)—both centralized and/or distributed—and/or any combination, permutation, and/or aggregation thereof. Additionally, the network topology can include or encompass communications or interchange utilizing Near-Field Communications (NFC) and/or communications utilizing electrical conductance through the human skin, for example.
  • Further it should be noted, again without limitation or loss of generality, that owner libraries (e.g., owner library 110 1, . . . , 110 W) associated with each owner node (e.g., owner nodes 108 1, . . . , 108 W) and lookup libraries (e.g., lookup library 106 1, . . . , 106 L) affiliated with each lookup node (e.g., lookup nodes 104 1, . . . , 104 L) can be, for example, persisted on volatile memory or non-volatile memory, or can include utilization of both volatile and non-volatile memory. By way of illustration, and not limitation, non-volatile memory can include read-only memory (ROM), programmable read only memory (PROM), electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which can act as external cache memory. By way of illustration rather than limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink® DRAM (SLDRAM), Rambus® direct RAM (RDRAM), direct Rambus® dynamic RAM (DRDRAM) and Rambus® dynamic RAM (RDRAM). Accordingly, the owner libraries (e.g., owner library 110 1, . . . , 110 W) and/or the lookup libraries (e.g., lookup library 106 1, . . . , 106 L) of the subject systems and methods are intended to employ, without being limited to, these and any other suitable types of memory. In addition, it is to be appreciated that the owner libraries (e.g., owner library 110 1, . . . , 110 W) and/or lookup libraries (e.g., lookup library 106 1, . . . , 106 L) can be implemented on a server, a database, a hard drive, and the like.
  • FIG. 2 illustrates an end-to-end use of the Partitioning and Recovery Service (PRS) 200 by a device connectivity service in a single cluster (e.g., cluster 112). As depicted, the cluster can include a plurality of client components 202 1, . . . , 202 A that can initiate a request for one or more resources resident or extant within the cluster. Client components 202 1, . . . , 202 A, via a network topology, can be in continuous and/or operative or sporadic and/or intermittent communication with load balancer component 204 that can rapidly distribute requests for resources from client components 202 1, . . . , 202 A to multiple front end components 206 1, . . . , 206 B. Client components 202 1, . . . , 202 A can be implemented entirely in hardware and/or a combination of hardware and/or software in execution. Further, client components 202 1, . . . , 202 A can be incorporated within and/or associated with other compatible components. Additionally, client components 202 1, . . . , 202 A can be, but are not limited to, any type of machine that includes a processor and/or is capable of effective communication with a network topology. Illustrative machines that can comprise client components 202 1, . . . , 202 A can include desktop computers, server class computing devices, cell phones, smart phones, laptop computers, notebook computers, Tablet PCs, consumer and/or industrial devices and/or appliances, hand-held devices, personal digital assistants, multimedia Internet mobile phones, multimedia players, and the like.
  • Load balancer component 204, as the name suggests, rapidly distributes the incoming requests from the various client components 202 1, . . . , 202 A to ensure that no single front end component 206 1, . . . , 206 B is disproportionately targeted with making lookup calls to the partitioning and recovery manager component 208. Accordingly, load balancer component 204 can employ one or more load balancing techniques in order to smooth the flow and rapidly disseminate the requests from client components 202 1, . . . , 202 A to front end components 206 1, . . . , 206 B. Examples of such load balancing techniques or scheduling algorithms can include, without limitation, such techniques as round robin scheduling, deadline-monotonic priority assignment, highest response ratio next, rate-monotonic scheduling, proportional share scheduling, interval scheduling, etc. The facilities and functionalities of load balancer component 204 can be performed on, but are not limited to, any type of mechanism, machine, device, facility, and/or instrument that includes a processor and/or is capable of effective and/or operative communications with a network topology. Mechanisms, machines, devices, facilities, and/or instruments that can comprise load balancer component 204 can include Tablet PC's, server class computing machines and/or databases, laptop computers, notebook computers, desktop computers, cell phones, smart phones, consumer appliances and/or instrumentation, industrial devices and/or components, hand-held devices, personal digital assistants, multimedia Internet enabled phones, multimedia players, and the like.
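  • By way of a hedged example, the sketch below implements one of the scheduling techniques named above, round robin, to spread incoming client requests evenly across front end components; the RoundRobinBalancer name and the string stand-ins for front ends are illustrative assumptions.

```python
import itertools


class RoundRobinBalancer:
    """Rotates incoming requests evenly across a fixed set of front ends."""

    def __init__(self, front_ends):
        self._cycle = itertools.cycle(front_ends)

    def dispatch(self, request):
        front_end = next(self._cycle)  # next front end in the rotation
        return front_end, request


balancer = RoundRobinBalancer(["frontend-1", "frontend-2", "frontend-3"])
for request in ["lookup user:1", "lookup user:2", "lookup user:3", "lookup user:4"]:
    print(balancer.dispatch(request))
```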
  • Front end components 206 1, . . . , 206 B can link the lookup libraries associated with each of the front end components 206 1, . . . , 206 B and make lookup calls to the partitioning and recovery manager component 208. Front end components 206 1, . . . , 206 B, like client components 202 1, . . . , 202 A and load balancer component 204, can be implemented entirely in hardware and/or as a combination of hardware and/or software in execution. Further, front end components 206 1, . . . , 206 B, can be, but are not limited to, any type of engine, machine, instrument of conversion, or mode of production that includes a processor and/or is capable of effective and/or operative communications with network topology. Illustrative instruments of conversion, modes of production, engines, mechanisms, devices, and/or machinery that can comprise and/or embody front end components 206 1, . . . , 206 B can include desktop computers, server class computing devices and/or databases, cell phones, smart phones, laptop computers, notebook computers, Tablet PCs, consumer and/or industrial devices and/or appliances and/or processes, hand-held devices, personal digital assistants, multimedia Internet enabled mobile phones, multimedia players, and the like.
  • Partitioning and recovery manager component 208, as has been outlined in connection with partitioning and recovery manager (PRM) component 102 above, can be the authority for distributing resources to server components 210 1, . . . , 210 C and answering lookup queries for those resources. Additionally, partitioning and recovery manager component 208 can be responsible for informing lookup libraries associated with respective front end components 206 1, . . . , 206 B which remote or destination partitioning and recovery manager (PRM) component in a geographically dispersed cluster to contact so that inter-cluster lookups can be effectuated.
  • Server components 210 1, . . . , 210 C can store resources, such as presence documents, that can on request be supplied to fulfill resource requests emanating from one or more client components 202 1, . . . , 202 A. Server components 210 1, . . . , 210 C, like client components 202 1, . . . , 202 A, load balancer component 204, front end components 206 1, . . . , 206 B, and partitioning and recovery manager component 208, can be any type of mechanism, machine, device, facility, and/or instrument such as embedded auto personal computers (AutoPCs), appropriately instrumented hand-held personal computers, Tablet PC's, laptop computers, notebook computers, cell phones, smart phones, portable consumer appliances and/or instrumentation, mobile industrial devices and/or components, hand-held devices, personal digital assistants, multimedia Internet enabled phones, multimedia players, server class computing environments, and the like.
  • It should be recognized under the foregoing operational rubric, without limitation or loss of generality, that when and if a server component (e.g., one or more of server components 210 1, . . . , 210 C) crashes, the lookup libraries associated with front end components 206 1, . . . , 206 B can issue notifications to calling code (e.g., resource requests emanating from one or more client components 202 1, . . . , 202 A requesting resources from the disabled server component) given that the overall Partitioning and Recovery Service (PRS) as effectuated by partitioning and recovery manager component 208 provides two guarantees: (i) at-most one owner guarantee: there is at most one owner node (e.g., server component 210) that owns or controls a particular resource at any given point in time; and (ii) recovery notifications guarantee: if an owner node (e.g., server component 210) crashes or loses resources (or part thereof), the lookup libraries associated with front end components 206 1, . . . , 206 B, will issue recovery notifications in a timely manner.
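  • The recovery notifications guarantee can be illustrated with the hedged sketch below, in which calling code registers callbacks against resource keys and is notified when the owner of those resources crashes; the RecoveryNotifier name and callback registration API are assumptions for illustration only.

```python
class RecoveryNotifier:
    """Sketch of the recovery-notification path exposed by a lookup library."""

    def __init__(self):
        self._subscribers = {}  # resource_key -> list of callbacks

    def register(self, resource_key, callback):
        self._subscribers.setdefault(resource_key, []).append(callback)

    def owner_crashed(self, lost_resource_keys):
        # Issue timely notifications for every resource the crashed owner held.
        for key in lost_resource_keys:
            for callback in self._subscribers.get(key, []):
                callback(key)


notifier = RecoveryNotifier()
notifier.register("presence:alice", lambda key: print(f"re-resolve {key}"))
notifier.owner_crashed(["presence:alice"])  # prints: re-resolve presence:alice
```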
  • As will be appreciated by those of moderate skill in the art the subscripts A, B, C utilized in relation to the description of client components 202 1, . . . , 202 A, front end components 206 1, . . . , 206 B, and server components 210 1, . . . , 210 C, denote integers greater than zero, and are employed, for the most part, to connote a respective plurality of the aforementioned components.
  • The goal of the claimed subject matter is to effectively conjoin the functionalities and facilities included in lookup libraries associated with front end components in a first cluster with the functionalities and facilities included in lookup libraries affiliated with front end components in a second cluster, where the first and second clusters are distantly dispersed and are associated with respective geographically disparate datacenters. For instance, lookup libraries associated with front end components in a first cluster can be associated with a datacenter located in Salinas, Calif. whereas lookup libraries affiliated with front end components in a second cluster can be affiliated with a datacenter located in Ulan Bator, Mongolia.
  • Similarly, a further aim of the claimed subject matter is to also effectively associate the facilities and functionalities included in owner libraries associated with multiple server components that comprise a first cluster associated with a datacenter in a first geographical location with the functionalities and facilities included in owner libraries associated with multiple server components dispersed to form a second cluster associated with a datacenter situated in a second geographical location, where the first and second geographical locations are separated by distance and geography. For example, owner libraries associated with multiple server components included in a first cluster and associated with a datacenter in a first geographical location can be situated in Vancouver, British Columbia, and owner libraries associated with multiple server components included in a second cluster and affiliated with a datacenter in a second geographical location can be located in Utica, N.Y.
  • It should be noted, once again without limitation or loss of generality, that the multiple server components and multiple front end components included in a cluster can also be geographically dispersed. Similarly, the aggregation of clusters to form datacenters can also include multiple clusters that in and of themselves are situationally dispersed. For example, a first set of server and front end components can be located in East Cleveland, Ohio, a second set of server and front end components can be located in Xenia, Ohio, and a third set of server and front end components can be located in Macon, Ga.; the first, second, and/or third set of server and front end components can be aggregated to form a first cluster. Further, other sets of server and front end components located in Troy, N.Y., Chicopee, Mass., and Blue Bell, Pa. respectively can form a second cluster. Such multiple clusters of geographically dispersed sets of server and front end components can be agglomerated to comprise a datacenter.
  • In view of the foregoing, the problem overcome by the claimed subject matter, therefore, relates to the fact that a given front end and its associated lookup libraries can now be in one datacenter situated in Manaus, Brazil, for example, and it may need to communicate with a server component and its associated owner libraries situated in Beijing, China, to fulfill a resource request. Accordingly, the lookup libraries associated with the front end component situated in the datacenter in Manaus, Brazil need to be informed that the server they wish to communicate with is located in a datacenter in Beijing, China, for instance. Once the front end is aware of the fact that it and its associated lookup libraries need to be in communication, or commence data interchange, with a server component and its associated owner libraries situated in a geographically disparate trans-oceanic datacenter located in Beijing, China, the front end can determine how it should establish such a communications link.
  • There are a few different ways in which the front end component and its associated lookup libraries can handle the fact that a requested resource is being controlled or is owned by a server component situated in a geographically disparate location. In general, lookups can be resolved to the cluster level or the owner level, and calling services can have a number of options.
  • In the case where lookups are resolved to the cluster level, the lookup library can resolve the resource address's location only to the datacenter/cluster level. It is expected that either the client component or the calling service will then resolve the exact machine by calling the lookup function in the destination cluster. There are a number of choices as to how different services can effectuate cluster-level resolution. First, hypertext transfer protocol (HTTP) redirection can be employed. For example, if a front end and its associated lookup library is presented with a resource address, the front end can obtain the lookup result from a library associated with the partitioning and recovery manager (e.g., partitioning and recovery manager 208) using a lookup call and supply the result to a locator service library. The locator service can then return the domain name system (DNS) name of the cluster, at which point the calling client component can be redirected to the destination cluster where a further lookup can be performed to identify the name of the machine handling or controlling the resource being requested by the calling or requesting client component.
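  • A minimal sketch of the HTTP redirection option follows: the front end resolves the resource only to a destination cluster, asks a locator service for that cluster's DNS name, and answers the client with a redirect so the exact machine can be resolved by a further lookup in the destination cluster. The function names, URL layout, and host names are illustrative assumptions.

```python
def handle_resource_request(resource_path, prm_lookup, locator_service):
    """Resolve only to the datacenter/cluster level and redirect the caller."""
    cluster_id = prm_lookup(resource_path)        # cluster-level lookup result
    cluster_dns = locator_service(cluster_id)     # DNS name of the destination cluster
    return {
        "status": 302,
        "headers": {"Location": f"https://{cluster_dns}{resource_path}"},
    }


# Usage with stub resolvers standing in for the PRM and locator service.
response = handle_resource_request(
    "/resources/user/42",
    prm_lookup=lambda path: "cluster-B",
    locator_service=lambda cluster_id: "cluster-b.example.net",
)
print(response)
```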
  • Further, a service-specific redirection mechanism can be employed wherein a front end component can locate the datacenter and cluster of the resource and thereafter perform a service-specific action such as, for example, translating a location-independent URL for the resource to a location-dependent URL for the resource.
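  • The service-specific alternative can be sketched, under assumed naming conventions, as a simple translation from a location-independent URL to a location-dependent one once the owning datacenter and cluster are known; the host-name scheme shown is purely illustrative.

```python
def to_location_dependent_url(location_independent_path, datacenter, cluster):
    """Rewrite a location-independent path onto the owning cluster's host."""
    host = f"{datacenter}.{cluster}.example.net"  # assumed naming scheme
    return f"https://{host}{location_independent_path}"


# e.g. /resources/user/42 owned by cluster-b in dc-east
print(to_location_dependent_url("/resources/user/42", "dc-east", "cluster-b"))
```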
  • FIG. 3 illustrates a system 300 that can be employed to effectuate resource interchange between a front end component included in a first cluster and associated with a first datacenter situated in a first geographical location and a server component included in a second cluster and associated with a second datacenter situated in a second geographical location, wherein each of the first and second geographical locations is geographically remote from the other. As depicted, system 300 can include cluster A 302 that is associated with a datacenter situated in a first geographic location, for example, Athens, Greece, and cluster B 304 that is associated with a datacenter situated in a second geographic location, for instance, Broken Hill, Australia. As has been elucidated above, each of cluster A 302 and cluster B 304 can be but one cluster of many clusters associated with each respective datacenter situated in the first geographic location and the second geographic location.
  • Cluster A 302 can include front end component 306 together with its associated lookup libraries and partitioning and recovery manager component 208 A, and cluster B 304 can include server component 308 together with its affiliated owner libraries and partitioning and recovery manager component 208 B. As stated above, partitioning and recovery manager components 208 A and 208 B can be present in every cluster and are typically the authority for distributing resources from front end component 306 to server component 308. Since the general facilities and functionalities of the partitioning and recovery manager component have been set forth above, a detailed description of such attributes has been omitted for the sake of brevity and to avoid needless repetition.
  • As illustrated in FIG. 3, front end component 306, on receipt of resource requests conveyed from a load balancer component (e.g., 204) and emanating from one or more client components (e.g., client components 202 1, . . . , 202 A), can utilize its associated lookup libraries and send the resource request directly to server component 308 located in a destination datacenter situated in a geographically disparate location for fulfillment of the resource request (e.g., from owner libraries associated with server component 308). While this approach is plausible for the most part, it can fail where inter-datacenter communications are to be effectuated, because both the server component and front end components can be configured and/or tuned for inter-cluster, intra-datacenter communications (e.g., front end and server components tuned for instantaneous or near instantaneous response times within clusters associated with a specific datacenter, where communication latency is minimal), whereas communication latency with respect to inter-datacenter communications can be measurably significant.
  • FIG. 4 provides an illustration of a system 400 that can be utilized to more effectively facilitate inter-datacenter resource interchange between front end components included in a first cluster and associated with a first datacenter situated in a first geographical location and a server component included in a second cluster and associated with a second datacenter in a second geographic location, wherein each of the first and second geographical locations is geographically remote from the other. As illustrated, system 400 includes two clusters, cluster X 402, associated with a first datacenter situated in a first geographic location (e.g., Mississauga, Canada), and cluster Y 404, associated with a second datacenter situated in a second geographic location (e.g., Cancun, Mexico). As will be appreciated and observed by those of moderate skill in this field of endeavor, the first and second geographical locations can be, but need not be, distantly dispersed or geographically distant. Thus, for example, the first datacenter situated in the first geographical location can be merely a short distance from the second datacenter situated in the second geographical location. For instance, the first datacenter can be located anywhere from a few meters to many hundreds or thousands of kilometers from the second datacenter.
  • Cluster X 402 can include front end component 406 together with its associated lookup library 408 and partitioning and recovery manager component 208 X, the respective functionalities and/or facilities of which have been expounded upon above in connection with FIGS. 1-3, and as such a detailed description of such features has been omitted. Nevertheless, in addition to the foregoing components, cluster X 402 can also include an inter-cluster gateway component 410 X that can facilitate and/or effectuate communication with a counterpart inter-cluster gateway 410 Y situated in Cluster Y 404 located at a geographically dispersed distance from cluster X 402.
  • Cluster Y 404, in addition to inter-cluster gateway component 410 Y, also can include proxy component 412 that, like front end component 406, can include an associated lookup library. Further, cluster Y 404 can also include the prototypical partitioning and recovery manager component 208 Y that, as will have been observed by those moderately skilled in this field of endeavor, typically can be present in all clusters set forth in the claimed subject matter. Cluster Y 404 can further include server component 414 together with its owner library, where the resource being sought by a client component (e.g., 202 1, . . . , 202 A) can be reposited.
  • In view of the foregoing components depicted in FIG. 4, the claimed subject matter can operate in the following manner. Initially, a remote resource request (e.g., the resource needed is persisted and associated with a server component located in a cluster associated with a geographically dispersed datacenter) from a client component can be received by front end component 406 situated in cluster X 402. On receipt of the resource request, front end component 406 is typically ignorant of the fact that the resource request pertains to a remotely reposited resource and thus can consult its associated lookup library 408. Lookup library 408, since the resource request at this point has never been satisfied before, will be equally unaware of where and/or how the resource request can be fulfilled, and as such can utilize the facilities and/or functionalities of partitioning and recovery manager 208 X to obtain an indication that the sought after resource included in the resource request is reposited in a cluster associated with a datacenter geographically distant from the cluster in which the resource request has been received. The cluster information returned from partitioning and recovery manager 208 X can then be utilized to populate lookup library 408, after which front end component 406 can construct a message that includes the cluster information recently gleaned from partitioning and recovery manager component 208 X, together with the service or resource that is being requested from the server situated in the remote/destination cluster (e.g., cluster Y 404). The message so constructed by front end component 406 can then be conveyed to inter-cluster gateway component 410 X for dispatch to inter-cluster gateway component 410 Y associated and situated with the remote/destination cluster (e.g., cluster Y 404).
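  • The flow just described can be sketched as follows, with the front end consulting the PRM on a cache miss, populating its lookup library, and handing a message that combines the cluster identity with the original resource request to the local inter-cluster gateway; all class names and message fields are illustrative assumptions.

```python
class FrontEnd:
    """Sketch of a front end that resolves the destination cluster and dispatches."""

    def __init__(self, cluster_id, lookup_library, prm, gateway):
        self.cluster_id = cluster_id
        self.lookup_library = lookup_library   # resource_key -> cluster_id cache
        self.prm = prm                         # authoritative cluster resolver
        self.gateway = gateway                 # local inter-cluster gateway

    def handle(self, resource_request):
        key = resource_request["resource_key"]
        destination = self.lookup_library.get(key)
        if destination is None:
            destination = self.prm.resolve_cluster(key)   # consult the PRM
            self.lookup_library[key] = destination        # populate the lookup library
        message = {"destination_cluster": destination, "request": resource_request}
        return self.gateway.dispatch(message)             # off to the remote gateway


class StubPRM:
    def resolve_cluster(self, resource_key):
        return "cluster-Y"


class StubGateway:
    def dispatch(self, message):
        return f"dispatched to {message['destination_cluster']}: {message['request']}"


front_end = FrontEnd("cluster-X", {}, StubPRM(), StubGateway())
print(front_end.handle({"resource_key": "presence:bob", "operation": "get"}))
```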
  • On receipt of the message from inter-cluster gateway component 410 X, inter-cluster gateway component 410 Y can examine the cluster information included in the message to determine that the message has been received by both the correct cluster and the correct geographically remote or destination datacenter. Having ascertained that the message has been received by both the correct cluster and the correct remote/destination datacenter, inter-cluster gateway component 410 Y can forward the message to proxy component 412 and its associated lookup libraries. It should be noted at this juncture that the operation of, and functions and/or facilities provided by, proxy component 412 and its associated lookup libraries can be similar to those provided by front end component 406 and its associated lookup library 408.
  • Thus, when inter-cluster gateway component 410 Y passes the message received from inter-cluster gateway component 410 X situated in cluster X 402 to proxy component 412, proxy component 412 in conjunction with its associated libraries can ascertain which server component 414 within cluster Y 404 is capable of fulfilling the resource request received from front end component 406 located in cluster X 402. In order to identify the appropriate server component 414 capable of fulfilling the remote resource request, proxy component 412 can employ its associated libraries to resolve who (e.g., which server component within cluster Y 404) is capable of handling or satisfying the remote resource request received from front end component 406 situated in cluster X 402 via inter-cluster gateway components 410 X and 410 Y. Once proxy component 412 has ascertained or determined the server component 414 capable of fulfilling the remote resource request, proxy component 412 can forward the remote request to server component 414 for satisfaction of the remote request.
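  • On the receiving side, the same interaction can be sketched as a destination gateway that verifies the message reached the correct cluster and a proxy that resolves the owning server before forwarding the request; the DestinationGateway and ProxyComponent classes are assumed names used only for illustration.

```python
class ProxyComponent:
    """Resolves which local server owns a resource and forwards the request."""

    def __init__(self, local_assignments):
        self.local_assignments = local_assignments  # resource_key -> server name

    def forward(self, request):
        server = self.local_assignments[request["resource_key"]]
        return f"request for {request['resource_key']} forwarded to {server}"


class DestinationGateway:
    """Checks the cluster identity carried in the message before handing off."""

    def __init__(self, cluster_id, proxy):
        self.cluster_id = cluster_id
        self.proxy = proxy

    def receive(self, message):
        if message["destination_cluster"] != self.cluster_id:
            raise ValueError("message delivered to the wrong cluster")
        return self.proxy.forward(message["request"])


gateway_y = DestinationGateway("cluster-Y", ProxyComponent({"presence:bob": "server-414"}))
print(gateway_y.receive({
    "destination_cluster": "cluster-Y",
    "request": {"resource_key": "presence:bob", "operation": "get"},
}))
```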
  • FIG. 5 provides a depiction of a further system 500 that can be employed to facilitate and/or effectuate inter-datacenter resource interchange in accordance with an aspect of the claimed subject matter. As illustrated, system 500 includes two clusters, cluster S 502, associated with a first datacenter situated in a first geographic location (e.g., Selma, Ala.), and cluster C 504, associated with a second datacenter situated in a second geographic location (e.g., Copenhagen, Denmark). As will be appreciated and observed by those of moderate skill in this field of endeavor, the first and second geographical locations can be, but need not be, distantly dispersed or geographically distant. Thus, for example, the first datacenter situated in the first geographical location can be merely a short distance from the second datacenter situated in the second geographical location. For instance, the first datacenter can be located anywhere from a few meters to many hundreds or thousands of kilometers from the second datacenter.
  • Cluster S 502 can include front end component 506 together with its associated lookup library 508 and partitioning and recovery manager component 208 S, the respective functionalities and/or facilities of which have been expounded upon above in connection with FIGS. 1-4, and as such a detailed description of such features has been omitted. Nevertheless, in addition to the foregoing components, cluster S 502 can also include an inter-cluster gateway component 510 S that can facilitate and/or effectuate communication with a counterpart inter-cluster gateway 510 C situated in Cluster C 504 located at a geographically dispersed distance from cluster S 502.
  • In view of the foregoing components depicted in FIG. 5, the claimed subject matter can operate in the following manner. Initially, a remote resource request (e.g., the resource needed is persisted and associated with a server component located in a cluster associated with a geographically dispersed datacenter) from a client component can be received by front end component 506 situated in cluster S 502. In contrast to the situation outlined in FIG. 4, here the front end component 506 can be aware of the server component 512 that has control or possession of the needed resource, but nevertheless can be unaware of the cluster and/or datacenter in which server component 512 resides.
  • Thus, on receipt of the resource request, front end component 506 can consult its associated lookup library 508. Lookup library 508 can utilize the facilities and/or functionalities of partitioning and recovery manager 208 S to obtain an indication that server component 512, which controls or handles the sought after resource included in the resource request, resides in cluster C 504, which is associated with a datacenter geographically distant from the cluster in which the resource request has been received. Front end component 506 can thereafter construct a message that includes the cluster information recently gleaned from partitioning and recovery manager component 208 S, together with the identity of the destination or remote server (e.g., server component 512) that controls or handles the service or resource that is being requested. The message so constructed by front end component 506 can then be conveyed to inter-cluster gateway component 510 S for dispatch to inter-cluster gateway component 510 C associated and situated with the remote/destination cluster (e.g., cluster C 504). On receipt of the message from inter-cluster gateway component 510 S, inter-cluster gateway component 510 C can forward the message directly to server component 512 for satisfaction of the remote request.
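  • The FIG. 5 variant can be sketched as below: because the front end already knows the identity of the owning server, the message carries both the cluster identity and the server identity, and the receiving gateway forwards it without a proxy lookup. The message fields and the callable stand-ins for servers are illustrative assumptions.

```python
def build_known_server_message(resource_request, destination_cluster, destination_server):
    """Combine the request with both the cluster and server identities."""
    return {
        "destination_cluster": destination_cluster,
        "destination_server": destination_server,
        "request": resource_request,
    }


def receive_at_gateway(message, local_cluster_id, servers):
    """Forward straight to the named server component; no proxy hop is needed."""
    assert message["destination_cluster"] == local_cluster_id
    return servers[message["destination_server"]](message["request"])


servers = {"server-512": lambda request: f"server-512 served {request['resource_key']}"}
message = build_known_server_message({"resource_key": "presence:carol"}, "cluster-C", "server-512")
print(receive_at_gateway(message, "cluster-C", servers))
```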
  • FIG. 6 provides further illustration of a system that can be utilized to effectuate and/or facilitate inter-datacenter resource interchange in accordance with a further aspect of the claimed subject matter. In particular, FIG. 6 depicts an architecture 600 that can be employed to enable inter-cluster interactions. There are two sets of design issues addressed by the architecture: ownership issues (e.g., resource assignments with respect to owners) and lookup/recovery notification issues. The root geo-resource manager (RGRM) component 602, sub geo-resource manager (SGRM) components 604 A and 604 B, and an owner manager bridge associated with cluster resource manager (CRM) components 606 A and 606 B mostly help with the former, whereas the lookup manager forward proxies (LMFP) 610 A and 610 B and lookup manager reverse proxies (LMRP) 608 A and 608 B largely help in the latter case. The SGRM components, LMFPs, and LMRPs are typically all scale-out components.
  • Root geo-resource manager (RGRM) component 602 is a centralized manager that scales out the sub geo-resource manager components 604 A and 604 B. The sub geo-resource manager components 604 A and 604 B can hold resource assignments and then can delegate these assignments to individual local partitioning and recovery manager components associated with local cluster resource manager (CRM) components 606 A and 606 B. The resource assignment to different local partitioning and recovery manager components can be done in an automated manner or using an administrative interface, for example.
  • Sub geo-resource manager components 604 A and 604 B can assign resources to global owners, where each such owner runs in a cluster. This owner can be co-located in cluster resource manager (CRM) components 606 A and 606 B with a local partitioning and recovery manager that assigns resources to local owners. These two components can be connected by an owner manager bridge that can receive resources from a global owner, convey them to the local partitioning and recovery manager, and handle the corresponding recalls from the global owner as well.
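  • The delegation chain just described can be sketched, under assumed simplifications, as a global owner handing a resource range to an owner manager bridge, which conveys it to the local partitioning and recovery manager and releases it again on a recall; the range representation and class names are illustrative only.

```python
class LocalPRM:
    """Local partitioning and recovery manager holding delegated resource ranges."""

    def __init__(self):
        self.ranges = set()

    def accept(self, resource_range):
        self.ranges.add(resource_range)

    def release(self, resource_range):
        self.ranges.discard(resource_range)


class OwnerManagerBridge:
    """Connects a cluster's global owner to its local PRM."""

    def __init__(self, local_prm):
        self.local_prm = local_prm

    def delegate(self, resource_range):
        # Received from the global owner; hand it to the local PRM.
        self.local_prm.accept(resource_range)

    def recall(self, resource_range):
        # The global owner takes the range back (e.g., for migration).
        self.local_prm.release(resource_range)


bridge = OwnerManagerBridge(LocalPRM())
bridge.delegate(("user:0000", "user:4fff"))
bridge.recall(("user:0000", "user:4fff"))
```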
  • The motivation for dividing sub geo-resource managers 604 A and 604 B from the root geo-resource manager 602 is that the amount of state that might need to be maintained for mapping resource ranges to specific clusters can be many terabytes.
  • The lookup manager forward proxies 610 A and 610 B can handle lookup requests from local client components for remote clusters. Lookup manager forward proxies 610 A and 610 B can also handle incoming recovery notifications for local lookup nodes from remote clusters. The lookup manager forward proxies 610 A and 610 B help in connection aggregation across clusters, e.g., instead of having many lookup nodes connect to remote cluster(s), only a few lookup manager forward proxies 610 A and 610 B need to make any connections per cluster. Furthermore, these lookup manager forward proxies 610 A and 610 B can be useful in aggregating a cluster's traffic.
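  • Connection aggregation by the lookup manager forward proxies can be sketched as below: rather than every lookup node opening its own connection to a remote cluster, lookups are funneled through a proxy that maintains one shared connection per remote cluster. The connection object and the connect callback are simple stand-ins assumed for illustration.

```python
class LookupManagerForwardProxy:
    """Aggregates a cluster's outbound lookups onto one connection per remote cluster."""

    def __init__(self, connect):
        self._connect = connect
        self._connections = {}  # remote_cluster_id -> shared connection

    def lookup_remote(self, remote_cluster_id, resource_key):
        connection = self._connections.get(remote_cluster_id)
        if connection is None:
            connection = self._connect(remote_cluster_id)  # one connection per cluster
            self._connections[remote_cluster_id] = connection
        return connection(resource_key)


proxy = LookupManagerForwardProxy(
    connect=lambda cluster_id: (lambda key: f"{cluster_id} resolved {key}")
)
print(proxy.lookup_remote("cluster-Y", "presence:dave"))
print(proxy.lookup_remote("cluster-Y", "presence:erin"))  # reuses the connection
```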
  • One subtlety to note here is that when a server component (e.g., server component 512) crashes, the recovery notification comes from the cluster resource manager (CRM) components 606 A and 606 B, not from the sub geo-resource manager components 604 A and 604 B.
  • In view of the illustrative systems shown and described supra, methodologies that may be implemented in accordance with the disclosed subject matter will be better appreciated with reference to the flow charts of FIGS. 7-8. While for purposes of simplicity of explanation, the methodologies are shown and described as a series of blocks, it is to be understood and appreciated that the claimed subject matter is not limited by the order of the blocks, as some blocks may occur in different orders and/or concurrently with other blocks from what is depicted and described herein. Moreover, not all illustrated blocks may be required to implement the methodologies described hereinafter. Additionally, it should be further appreciated that the methodologies disclosed hereinafter and throughout this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methodologies to computers.
  • The claimed subject matter can be described in the general context of computer-executable instructions, such as program modules, executed by one or more components. Generally, program modules can include routines, programs, objects, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined and/or distributed as desired in various aspects.
  • FIG. 7 illustrates a method to effectuate and/or facilitate inter-datacenter resource interchange in accordance with an aspect of the claimed subject matter. At 702 a resource request can be received by a front end component. At 704 the front end component can consult a partitioning and recovery manager aspect to ascertain the appropriate cluster information as to where the server component capable of fulfilling the received resource request is located. At 706, when the partitioning and recovery manager aspect responds with the appropriate cluster information, the lookup library associated with the front end component can be populated with the returned information. At 708 the returned cluster information can be combined with the resource request and conveyed to a first inter-cluster gateway for dispatch to a second inter-cluster gateway associated with a remote cluster. At 710 the returned cluster information together with the resource request can be received at the second inter-cluster gateway and thereafter conveyed to a proxy component at 712. At 714, once the proxy component has ascertained the server that is capable of serving or fulfilling the resource request, the request can be conveyed to the identified server for servicing or fulfillment.
  • FIG. 8 depicts a further methodology to effectuate and/or facilitate inter-datacenter resource interchange in accordance with a further aspect of the claimed subject matter. At 802 a resource request can be received by a front end component. At 804 the front end component can consult a partitioning and recovery manager aspect to ascertain the appropriate cluster information as to where the server component capable of fulfilling the received resource request is located. At 806, when the partitioning and recovery manager aspect responds with the appropriate cluster information, the lookup library associated with the front end component can be utilized to identify the correct destination server (e.g., a server affiliated with a cluster associated with a datacenter at a remote location). At 808 the returned cluster information together with the destination server information can be conveyed to a first inter-cluster gateway for dispatch to a second inter-cluster gateway associated with a remote cluster. At 810 the returned cluster information together with the resource request can be received at the second inter-cluster gateway and thereafter conveyed to the server that is capable of serving or fulfilling the resource request at 812.
  • The claimed subject matter can be implemented via object oriented programming techniques. For example, each component of the system can be an object in a software routine or a component within an object. Object oriented programming shifts the emphasis of software development away from function decomposition and towards the recognition of units of software called “objects” which encapsulate both data and functions. Object Oriented Programming (OOP) objects are software entities comprising data structures and operations on data. Together, these elements enable objects to model virtually any real-world entity in terms of its characteristics, represented by its data elements, and its behavior represented by its data manipulation functions. In this way, objects can model concrete things like people and computers, and they can model abstract concepts like numbers or geometrical concepts.
  • As used in this application, the terms “component” and “system” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, or software in execution. For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers.
  • Furthermore, all or portions of the claimed subject matter may be implemented as a system, method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device or media. For example, computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick, key drive . . . ). Additionally it should be appreciated that a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN). Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.
  • Some portions of the detailed description have been presented in terms of algorithms and/or symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and/or representations are the means employed by those cognizant in the art to most effectively convey the substance of their work to others equally skilled. An algorithm is here, generally, conceived to be a self-consistent sequence of acts leading to a desired result. The acts are those requiring physical manipulations of physical quantities. Typically, though not necessarily, these quantities take the form of electrical and/or magnetic signals capable of being stored, transferred, combined, compared, and/or otherwise manipulated.
  • It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the foregoing discussion, it is appreciated that throughout the disclosed subject matter, discussions utilizing terms such as processing, computing, calculating, determining, and/or displaying, and the like, refer to the action and processes of computer systems, and/or similar consumer and/or industrial electronic devices and/or machines, that manipulate and/or transform data represented as physical (electrical and/or electronic) quantities within the computer's and/or machine's registers and memories into other data similarly represented as physical quantities within the machine and/or computer system memories or registers or other such information storage, transmission and/or display devices.
  • Referring now to FIG. 9, there is illustrated a block diagram of a computer operable to execute the disclosed system. In order to provide additional context for various aspects thereof, FIG. 9 and the following discussion are intended to provide a brief, general description of a suitable computing environment 900 in which the various aspects of the claimed subject matter can be implemented. While the description above is in the general context of computer-executable instructions that may run on one or more computers, those skilled in the art will recognize that the subject matter as claimed also can be implemented in combination with other program modules and/or as a combination of hardware and software.
  • Generally, program modules include routines, programs, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the inventive methods can be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which can be operatively coupled to one or more associated devices.
  • The illustrated aspects of the claimed subject matter may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices.
  • A computer typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer and includes both volatile and non-volatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media can comprise computer storage media and communication media. Computer storage media includes both volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital video disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer.
  • With reference again to FIG. 9, the illustrative environment 900 for implementing various aspects includes a computer 902, the computer 902 including a processing unit 904, a system memory 906 and a system bus 908. The system bus 908 couples system components including, but not limited to, the system memory 906 to the processing unit 904. The processing unit 904 can be any of various commercially available processors. Dual microprocessors and other multi-processor architectures may also be employed as the processing unit 904.
  • The system bus 908 can be any of several types of bus structure that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures. The system memory 906 includes read-only memory (ROM) 910 and random access memory (RAM) 912. A basic input/output system (BIOS) is stored in a non-volatile memory 910 such as ROM, EPROM, EEPROM, which BIOS contains the basic routines that help to transfer information between elements within the computer 902, such as during start-up. The RAM 912 can also include a high-speed RAM such as static RAM for caching data.
  • The computer 902 further includes an internal hard disk drive (HDD) 914 (e.g., EIDE, SATA), which internal hard disk drive 914 may also be configured for external use in a suitable chassis (not shown), a magnetic floppy disk drive (FDD) 916 (e.g., to read from or write to a removable diskette 918 ) and an optical disk drive 920 (e.g., to read a CD-ROM disk 922 or to read from or write to other high capacity optical media such as a DVD). The hard disk drive 914 , magnetic disk drive 916 and optical disk drive 920 can be connected to the system bus 908 by a hard disk drive interface 924 , a magnetic disk drive interface 926 and an optical drive interface 928 , respectively. The interface 924 for external drive implementations includes at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies. Other external drive connection technologies are within contemplation of the claimed subject matter.
  • The drives and their associated computer-readable media provide nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For the computer 902, the drives and media accommodate the storage of any data in a suitable digital format. Although the description of computer-readable media above refers to an HDD, a removable magnetic diskette, and removable optical media such as a CD or DVD, it should be appreciated by those skilled in the art that other types of media which are readable by a computer, such as zip drives, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used in the illustrative operating environment, and further, that any such media may contain computer-executable instructions for performing the methods of the disclosed and claimed subject matter.
  • A number of program modules can be stored in the drives and RAM 912, including an operating system 930, one or more application programs 932, other program modules 934 and program data 936. All or portions of the operating system, applications, modules, and/or data can also be cached in the RAM 912. It is to be appreciated that the claimed subject matter can be implemented with various commercially available operating systems or combinations of operating systems.
  • A user can enter commands and information into the computer 902 through one or more wired/wireless input devices, e.g., a keyboard 938 and a pointing device, such as a mouse 940. Other input devices (not shown) may include a microphone, an IR remote control, a joystick, a game pad, a stylus pen, a touch screen, or the like. These and other input devices are often connected to the processing unit 904 through an input device interface 942 that is coupled to the system bus 908, but can be connected by other interfaces, such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, etc.
  • A monitor 944 or other type of display device is also connected to the system bus 908 via an interface, such as a video adapter 946. In addition to the monitor 944, a computer typically includes other peripheral output devices (not shown), such as speakers, printers, etc.
  • The computer 902 may operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers, such as a remote computer(s) 948. The remote computer(s) 948 can be a workstation, a server computer, a router, a personal computer, portable computer, microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 902, although, for purposes of brevity, only a memory/storage device 950 is illustrated. The logical connections depicted include wired/wireless connectivity to a local area network (LAN) 952 and/or larger networks, e.g., a wide area network (WAN) 954. Such LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, e.g., the Internet.
  • When used in a LAN networking environment, the computer 902 is connected to the local network 952 through a wired and/or wireless communication network interface or adapter 956. The adapter 956 may facilitate wired or wireless communication to the LAN 952, which may also include a wireless access point disposed thereon for communicating with the wireless adapter 956.
  • When used in a WAN networking environment, the computer 902 can include a modem 958, can be connected to a communications server on the WAN 954, or can have other means for establishing communications over the WAN 954, such as by way of the Internet. The modem 958, which can be internal or external and a wired or wireless device, is connected to the system bus 908 via the input device interface 942. In a networked environment, program modules depicted relative to the computer 902, or portions thereof, can be stored in the remote memory/storage device 950. It will be appreciated that the network connections shown are illustrative and that other means of establishing a communications link between the computers can be used.
  • The computer 902 is operable to communicate with any wireless devices or entities operatively disposed in wireless communication, e.g., a printer, scanner, desktop and/or portable computer, portable data assistant, communications satellite, any piece of equipment or location associated with a wirelessly detectable tag (e.g., a kiosk, news stand, restroom), and telephone. This includes at least Wi-Fi and Bluetooth™ wireless technologies. Thus, the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices.
  • Wi-Fi, or Wireless Fidelity, allows connection to the Internet from a couch at home, a bed in a hotel room, or a conference room at work, without wires. Wi-Fi is a wireless technology, similar to that used in a cell phone, that enables such devices, e.g., computers, to send and receive data indoors and out, anywhere within the range of a base station. Wi-Fi networks use radio technologies called IEEE 802.11x (a, b, g, etc.) to provide secure, reliable, fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wired networks (which use IEEE 802.3 or Ethernet).
  • Wi-Fi networks can operate in the unlicensed 2.4 and 5 GHz radio bands. IEEE 802.11 applies generally to wireless LANs and provides 1 or 2 Mbps transmission in the 2.4 GHz band using either frequency hopping spread spectrum (FHSS) or direct sequence spread spectrum (DSSS). IEEE 802.11a is an extension to IEEE 802.11 that applies to wireless LANs and provides up to 54 Mbps in the 5 GHz band. IEEE 802.11a uses an orthogonal frequency division multiplexing (OFDM) encoding scheme rather than FHSS or DSSS. IEEE 802.11b (also referred to as 802.11 High Rate DSSS or Wi-Fi) is an extension to 802.11 that applies to wireless LANs and provides 11 Mbps transmission (with a fallback to 5.5, 2 and 1 Mbps) in the 2.4 GHz band. IEEE 802.11g applies to wireless LANs and provides 20+ Mbps in the 2.4 GHz band. Products can contain more than one band (e.g., dual band), so the networks can provide real-world performance similar to the basic 10BaseT wired Ethernet networks used in many offices.
  • Referring now to FIG. 10, there is illustrated a schematic block diagram of an illustrative computing environment 1000 for processing the disclosed architecture in accordance with another aspect. The system 1000 includes one or more client(s) 1002. The client(s) 1002 can be hardware and/or software (e.g., threads, processes, computing devices). The client(s) 1002 can house cookie(s) and/or associated contextual information, for example.
  • The system 1000 also includes one or more server(s) 1004. The server(s) 1004 can also be hardware and/or software (e.g., threads, processes, computing devices). The servers 1004 can house threads to perform transformations by employing the claimed subject matter, for example. One possible communication between a client 1002 and a server 1004 can be in the form of a data packet adapted to be transmitted between two or more computer processes. The data packet may include a cookie and/or associated contextual information, for example. The system 1000 includes a communication framework 1006 (e.g., a global communication network such as the Internet) that can be employed to facilitate communications between the client(s) 1002 and the server(s) 1004.
  • Communications can be facilitated via a wired (including optical fiber) and/or wireless technology. The client(s) 1002 are operatively connected to one or more client data store(s) 1008 that can be employed to store information local to the client(s) 1002 (e.g., cookie(s) and/or associated contextual information). Similarly, the server(s) 1004 are operatively connected to one or more server data store(s) 1010 that can be employed to store information local to the servers 1004.
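  • As an illustration of the client/server interaction described above, the following is a minimal sketch of a client 1002 sending a data packet that carries a cookie and associated contextual information to a server 1004 across a communication framework 1006, here assumed to be a plain HTTP exchange over the loopback interface. The class, function, and header names (ServerHandler, run_client, X-Context) are hypothetical stand-ins added for exposition and are not part of the disclosed architecture.

    # Hypothetical sketch: one client/server round trip carrying a cookie and
    # contextual information, standing in for client 1002, server 1004, and
    # communication framework 1006 of FIG. 10.
    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import Request, urlopen

    class ServerHandler(BaseHTTPRequestHandler):
        """Stand-in for a server 1004: reads the cookie and contextual
        information carried by the data packet and answers the request."""

        def do_GET(self):
            cookie = self.headers.get("Cookie", "")        # cookie housed by the client
            context = self.headers.get("X-Context", "{}")  # associated contextual information
            body = json.dumps({"echoed_cookie": cookie,
                               "echoed_context": json.loads(context)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # keep the sketch quiet
            pass

    def run_client(port: int) -> dict:
        """Stand-in for a client 1002: sends a data packet that includes a
        cookie and contextual information across the communication framework."""
        request = Request(
            f"http://127.0.0.1:{port}/resource",
            headers={"Cookie": "session=abc123",
                     "X-Context": json.dumps({"region": "us-west", "user": "alice"})},
        )
        with urlopen(request) as response:
            return json.loads(response.read())

    if __name__ == "__main__":
        server = HTTPServer(("127.0.0.1", 0), ServerHandler)  # loopback "framework"
        threading.Thread(target=server.serve_forever, daemon=True).start()
        print(run_client(server.server_address[1]))
        server.shutdown()

  • In the same spirit, the client data store(s) 1008 and server data store(s) 1010 would correspond to whatever information the client and server persist locally; they are omitted from the sketch.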
  • What has been described above includes examples of the disclosed and claimed subject matter. It is, of course, not possible to describe every conceivable combination of components and/or methodologies, but one of ordinary skill in the art may recognize that many further combinations and permutations are possible. Accordingly, the claimed subject matter is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.

Claims (20)

1. A machine-implemented system that effectuates or facilitates inter-datacenter resource interchange, comprising the following computer executable components:
a frontend component that receives a resource request from a client component, the frontend component associating the resource request with a cluster identity associated with a remote datacenter based on a request to a management component, the resource request dispatched to the remote datacenter via an inter-cluster gateway component.
2. The system of claim 1, the inter-cluster gateway component consults a proxy component to determine a server component capable of servicing the resource request from the client component, the server component associated with the remote datacenter.
3. The system of claim 1, the frontend component, the client component, or the management component form a cluster associated with a first datacenter.
4. The system of claim 3, the remote datacenter and the first datacenter separated by geography.
5. The system of claim 3, the cluster includes the management component, the cluster controlled by a sub geo-resource manager, the sub geo-resource manager subservient to a root geo-resource manager.
6. The system of claim 1, the remote datacenter includes at least one cluster, a server included in the at least one cluster, the at least one cluster controlled by a sub geo-resource manager, the sub geo-resource manager subservient to a root geo-resource manager.
7. The system of claim 1, the frontend component associated with a lookup library.
8. A method for effectuating inter-datacenter resource interchange, comprising:
receiving a resource request;
consulting a partitioning and recovery manager to identify a cluster and server in which the requested resource resides; and
sending the resource request to a remote datacenter via an inter-cluster gateway associated with a datacenter.
9. The method of claim 8, further comprising using an inter-cluster gateway at the first cluster or the remote cluster.
10. The method of claim 9, further comprising directing the message from the inter-cluster gateway associated with the cluster directly to the server on which the resource is held.
11. The method of claim 8, the resource request received from a frontend component associated with a local cluster associated with the datacenter.
12. The method of claim 11, the datacenter and the remote datacenter connected via a trans-oceanic link.
13. The method of claim 12, a relative communications latency of communications between components included in the local cluster less than the relative communications latency of communications between the remote datacenter and the datacenter.
14. A system that effectuates or facilitates inter-datacenter resource interchange, comprising:
a processor configured for receiving a resource request from a client component, consulting a management component that returns an identity associated with a remote datacenter, and dispatching the resource request to the remote datacenter using the identity; and
a memory coupled to the processor for holding data.
15. The system of claim 14, the processor further configured for consulting a proxy component to determine a server component capable of servicing the resource request from the client component, the server component associated with the remote datacenter.
16. The system of claim 14, the client component, or the management component form a cluster associated with a first datacenter.
17. The system of claim 16, the cluster controlled by a sub geo-resource manager, the sub geo-resource manager controlled by a root geo-resource manager.
18. The system of claim 16, the remote datacenter and the first datacenter separated by a geographical boundary.
19. The system of claim 14, the remote datacenter includes at least one cluster, a server included in the at least one cluster, the at least one cluster controlled by a sub geo-resource manager, the sub geo-resource manager subservient to a root geo-resource manager.
20. The system of claim 19, the server associated with an owner library that has control over the resource requested by the resource request.
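The routing flow recited in claims 1, 2 and 8 can be pictured as a short sequence: a frontend receives a resource request from a client, consults a management component (for example, a partitioning and recovery manager) for the cluster identity of the owning remote datacenter, and hands the request to an inter-cluster gateway, which consults a proxy to select the server holding the resource. The following is a minimal sketch of that sequence using in-memory stand-ins; the class names (Frontend, PartitioningAndRecoveryManager, InterClusterGateway, ProxyComponent) and the sample topology are hypothetical illustrations added for exposition, not an implementation of the claims.

    # Hypothetical sketch of the claimed inter-datacenter request routing,
    # with in-memory dictionaries standing in for the real directory services.
    from dataclasses import dataclass

    @dataclass
    class ResourceRequest:
        resource_id: str
        payload: str

    class PartitioningAndRecoveryManager:
        """Management component stand-in: maps a resource to the datacenter
        and cluster that currently own it."""
        def __init__(self, directory):
            self._directory = directory  # resource_id -> (datacenter, cluster)

        def lookup_cluster(self, resource_id):
            return self._directory[resource_id]

    class ProxyComponent:
        """Proxy stand-in: resolves which server in a cluster holds the resource."""
        def __init__(self, placement):
            self._placement = placement  # (cluster, resource_id) -> server name

        def find_server(self, cluster, resource_id):
            return self._placement[(cluster, resource_id)]

    class InterClusterGateway:
        """Gateway stand-in: dispatches a request toward a remote datacenter
        and consults the proxy to pick the owning server."""
        def __init__(self, datacenter, proxy):
            self.datacenter = datacenter
            self.proxy = proxy

        def dispatch(self, request, cluster):
            server = self.proxy.find_server(cluster, request.resource_id)
            return f"{self.datacenter}/{cluster}/{server} handled {request.payload}"

    class Frontend:
        """Frontend stand-in: receives the client's request, asks the management
        component for the cluster identity, and forwards via the gateway."""
        def __init__(self, manager, gateways):
            self.manager = manager
            self.gateways = gateways  # datacenter name -> InterClusterGateway

        def handle(self, request):
            datacenter, cluster = self.manager.lookup_cluster(request.resource_id)
            return self.gateways[datacenter].dispatch(request, cluster)

    if __name__ == "__main__":
        manager = PartitioningAndRecoveryManager({"mailbox-42": ("dc-europe", "cluster-2")})
        remote_proxy = ProxyComponent({("cluster-2", "mailbox-42"): "server-17"})
        frontend = Frontend(manager, {"dc-europe": InterClusterGateway("dc-europe", remote_proxy)})
        print(frontend.handle(ResourceRequest("mailbox-42", "read latest message")))

In this sketch the dictionary held by PartitioningAndRecoveryManager plays the role of the management component's lookup, and the gateway's dispatch call stands in for forwarding the request over the inter-datacenter link described in claims 12 and 13.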
US12/410,552 2009-03-25 2009-03-25 Mechanism for geo distributing application data Abandoned US20100250646A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/410,552 US20100250646A1 (en) 2009-03-25 2009-03-25 Mechanism for geo distributing application data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/410,552 US20100250646A1 (en) 2009-03-25 2009-03-25 Mechanism for geo distributing application data

Publications (1)

Publication Number Publication Date
US20100250646A1 true US20100250646A1 (en) 2010-09-30

Family

ID=42785575

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/410,552 Abandoned US20100250646A1 (en) 2009-03-25 2009-03-25 Mechanism for geo distributing application data

Country Status (1)

Country Link
US (1) US20100250646A1 (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020166080A1 (en) * 1996-08-23 2002-11-07 Clement Richard Attanasio System and method for providing dynamically alterable computer clusters for message routing
US5826239A (en) * 1996-12-17 1998-10-20 Hewlett-Packard Company Distributed workflow resource management system and method
US6820132B1 (en) * 1998-02-02 2004-11-16 Loral Cyberstar, Inc. Internet communication system and method with asymmetric terrestrial and satellite links
US20020174227A1 (en) * 2000-03-03 2002-11-21 Hartsell Neal D. Systems and methods for prioritization in information management environments
US6874106B2 (en) * 2001-08-06 2005-03-29 Fujitsu Limited Method and device for notifying server failure recovery
US20030105882A1 (en) * 2001-11-30 2003-06-05 Ali Syed M. Transparent injection of intelligent proxies into existing distributed applications
US7730086B1 (en) * 2002-02-11 2010-06-01 Louisiana Tech University Foundation, Inc. Data set request allocations to computers
US7761443B2 (en) * 2003-02-14 2010-07-20 International Business Machines Corporation Implementing access control for queries to a content management system
US20100161704A1 (en) * 2008-12-23 2010-06-24 International Business Machines Corporation Management of Process-to-Process Inter-Cluster Communication Requests

Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100229026A1 (en) * 2007-04-25 2010-09-09 Alibaba Group Holding Limited Method and Apparatus for Cluster Data Processing
US8769100B2 (en) * 2007-04-25 2014-07-01 Alibaba Group Holding Limited Method and apparatus for cluster data processing
US8683008B1 (en) 2011-08-04 2014-03-25 Google Inc. Management of pre-fetched mapping data incorporating user-specified locations
US8180851B1 (en) 2011-08-04 2012-05-15 Google Inc. Management of pre-fetched mapping data incorporating user-specified locations
US8972529B1 (en) 2011-08-04 2015-03-03 Google Inc. Management of pre-fetched mapping data incorporating user-specified locations
US8805959B1 (en) 2011-09-26 2014-08-12 Google Inc. Map tile data pre-fetching based on user activity analysis
US8280414B1 (en) 2011-09-26 2012-10-02 Google Inc. Map tile data pre-fetching based on mobile device generated event analysis
US8549105B1 (en) 2011-09-26 2013-10-01 Google Inc. Map tile data pre-fetching based on user activity analysis
US9245046B2 (en) 2011-09-26 2016-01-26 Google Inc. Map tile data pre-fetching based on mobile device generated event analysis
US8812031B2 (en) 2011-09-26 2014-08-19 Google Inc. Map tile data pre-fetching based on mobile device generated event analysis
US8204966B1 (en) 2011-09-26 2012-06-19 Google Inc. Map tile data pre-fetching based on user activity analysis
US9275374B1 (en) 2011-11-15 2016-03-01 Google Inc. Method and apparatus for pre-fetching place page data based upon analysis of user activities
US9307045B2 (en) 2011-11-16 2016-04-05 Google Inc. Dynamically determining a tile budget when pre-fetching data in a client device
US8886715B1 (en) 2011-11-16 2014-11-11 Google Inc. Dynamically determining a tile budget when pre-fetching data in a client device
US9063951B1 (en) 2011-11-16 2015-06-23 Google Inc. Pre-fetching map data based on a tile budget
US8711181B1 (en) 2011-11-16 2014-04-29 Google Inc. Pre-fetching map data using variable map tile radius
US9569463B1 (en) 2011-11-16 2017-02-14 Google Inc. Pre-fetching map data using variable map tile radius
US9813521B2 (en) 2011-12-08 2017-11-07 Google Inc. Method and apparatus for pre-fetching place page data for subsequent display on a mobile computing device
US9305107B2 (en) 2011-12-08 2016-04-05 Google Inc. Method and apparatus for pre-fetching place page data for subsequent display on a mobile computing device
US9197713B2 (en) 2011-12-09 2015-11-24 Google Inc. Method and apparatus for pre-fetching remote resources for subsequent display on a mobile computing device
US9491255B2 (en) 2011-12-09 2016-11-08 Google Inc. Method and apparatus for pre-fetching remote resources for subsequent display on a mobile computing device
US9111397B2 (en) 2011-12-12 2015-08-18 Google Inc. Pre-fetching map tile data along a route
US8803920B2 (en) 2011-12-12 2014-08-12 Google Inc. Pre-fetching map tile data along a route
US9389088B2 (en) 2011-12-12 2016-07-12 Google Inc. Method of pre-fetching map data for rendering and offline routing
US9563976B2 (en) 2011-12-12 2017-02-07 Google Inc. Pre-fetching map tile data along a route
US9332387B2 (en) 2012-05-02 2016-05-03 Google Inc. Prefetching and caching map data based on mobile network coverage
US8849942B1 (en) 2012-07-31 2014-09-30 Google Inc. Application programming interface for prefetching map data
US20160105390A1 (en) * 2014-10-10 2016-04-14 Microsoft Corporation Distributed components in computing clusters
US11616757B2 (en) * 2014-10-10 2023-03-28 Microsoft Technology Licensing, Llc Distributed components in computing clusters
US20210051130A1 (en) * 2014-10-10 2021-02-18 Microsoft Technology Licensing, Llc Distributed components in computing clusters
US10862856B2 (en) * 2014-10-10 2020-12-08 Microsoft Technology Licensing, Llc Distributed components in computing clusters
US10270735B2 (en) * 2014-10-10 2019-04-23 Microsoft Technology Licensing, Llc Distributed components in computing clusters
US20190288981A1 (en) * 2014-10-10 2019-09-19 Microsoft Technology Licensing, Llc Distributed components in computing clusters
US10776404B2 (en) 2015-04-06 2020-09-15 EMC IP Holding Company LLC Scalable distributed computations utilizing multiple distinct computational frameworks
US10860622B1 (en) 2015-04-06 2020-12-08 EMC IP Holding Company LLC Scalable recursive computation for pattern identification across distributed data processing nodes
US10511659B1 (en) * 2015-04-06 2019-12-17 EMC IP Holding Company LLC Global benchmarking and statistical analysis at scale
US10509684B2 (en) 2015-04-06 2019-12-17 EMC IP Holding Company LLC Blockchain integration for scalable distributed computations
US10515097B2 (en) 2015-04-06 2019-12-24 EMC IP Holding Company LLC Analytics platform for scalable distributed computations
US10528875B1 (en) 2015-04-06 2020-01-07 EMC IP Holding Company LLC Methods and apparatus implementing data model for disease monitoring, characterization and investigation
US10541936B1 (en) 2015-04-06 2020-01-21 EMC IP Holding Company LLC Method and system for distributed analysis
US10541938B1 (en) 2015-04-06 2020-01-21 EMC IP Holding Company LLC Integration of distributed data processing platform with one or more distinct supporting platforms
US11854707B2 (en) 2015-04-06 2023-12-26 EMC IP Holding Company LLC Distributed data analytics
US10706970B1 (en) 2015-04-06 2020-07-07 EMC IP Holding Company LLC Distributed data analytics
US10496926B2 (en) 2015-04-06 2019-12-03 EMC IP Holding Company LLC Analytics platform for scalable distributed computations
US10791063B1 (en) 2015-04-06 2020-09-29 EMC IP Holding Company LLC Scalable edge computing using devices with limited resources
US10812341B1 (en) 2015-04-06 2020-10-20 EMC IP Holding Company LLC Scalable recursive computation across distributed data processing nodes
US10505863B1 (en) 2015-04-06 2019-12-10 EMC IP Holding Company LLC Multi-framework distributed computation
US11749412B2 (en) 2015-04-06 2023-09-05 EMC IP Holding Company LLC Distributed data analytics
US10999353B2 (en) 2015-04-06 2021-05-04 EMC IP Holding Company LLC Beacon-based distributed data processing platform
US10944688B2 (en) 2015-04-06 2021-03-09 EMC IP Holding Company LLC Distributed catalog service for data processing platform
US10986168B2 (en) 2015-04-06 2021-04-20 EMC IP Holding Company LLC Distributed catalog service for multi-cluster data processing platform
US10984889B1 (en) 2015-04-06 2021-04-20 EMC IP Holding Company LLC Method and apparatus for providing global view information to a client
US10656861B1 (en) 2015-12-29 2020-05-19 EMC IP Holding Company LLC Scalable distributed in-memory computation
US10080215B2 (en) * 2016-08-05 2018-09-18 Nokia Technologies Oy Transportation of user plane data across a split fronthaul interface
US10231254B2 (en) * 2016-08-05 2019-03-12 Nokia Technologies Oy 5G cloud RAN method for symbol by symbol bit streaming
CN108011915A (en) * 2017-07-05 2018-05-08 国网浙江省电力公司 A kind of collection front-end system based on cloud communication
US11025445B2 (en) * 2018-06-08 2021-06-01 Fungible, Inc. Early acknowledgment for write operations
WO2023165137A1 (en) * 2022-03-02 2023-09-07 京东科技信息技术有限公司 Cross-cluster network communication system and method

Similar Documents

Publication Publication Date Title
US20100250646A1 (en) Mechanism for geo distributing application data
EP3798833B1 (en) Methods, system, articles of manufacture, and apparatus to manage telemetry data in an edge environment
CN108701076B (en) Distributed data set storage and retrieval
US8655985B2 (en) Content delivery using multiple sources over heterogeneous interfaces
CN110311983B (en) Service request processing method, device and system, electronic equipment and storage medium
CN113490918A (en) Calling external functions from a data warehouse
US11943203B2 (en) Virtual network replication using staggered encryption
US10594670B2 (en) Edge encryption with metadata
US11290433B2 (en) Message-based database replication
US20180302400A1 (en) Authenticating access to an instance
US20180060248A1 (en) End-to-end caching of secure content via trusted elements
JP2013542681A (en) Content sharing method and apparatus using group change information in content-centric network environment
US20150381716A1 (en) Method and system for sharing files over p2p
Artail et al. A framework of mobile cloudlet centers based on the use of mobile devices as cloudlets
CN110798495B (en) Method and server for end-to-end message push in cluster architecture mode
EP2812860A1 (en) Retrieving availability information from published calendars
US9313613B2 (en) Method, apparatus, and system for performing unsolicited location-based download
Serpanos et al. IoT System Architectures
KR101997602B1 (en) Resource Dependency Service Method for M2M Resource Management
CN113765983A (en) Site service deployment method and device
Alexandrov et al. Implementation of a service oriented architecture in smart sensor systems integration platform
US11841875B2 (en) Database sharing in a virtual private deployment
Kimmatkar et al. Applications sharing using binding server for distributed environment
Mubarakali et al. Optimized flexible network architecture creation against 5G communication-based IoT using information-centric wireless computing
Lim et al. Location Based One-Time Conference Protocol Without Personal Information

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DUNAGAN, JOHN D.;WOLMAN, ALASTAIR;ADYA, ATUL;REEL/FRAME:022446/0734

Effective date: 20090324

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034766/0509

Effective date: 20141014