US20070203871A1 - Method and apparatus for reward-based learning of improved systems management policies - Google Patents
- Publication number
- US20070203871A1 (application US11/337,311; also referenced as US33731106A)
- Authority
- US
- United States
- Prior art keywords
- reward
- component
- policy
- learning
- decision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
Abstract
In one embodiment, the present invention is a method for reward-based learning of improved systems management policies. One embodiment of the inventive method involves supplying a first policy and a reward mechanism. The first policy maps states of at least one component of a data processing system to selected management actions, while the reward mechanism generates numerical measures of value responsive to particular actions (e.g., management actions) performed in particular states of the component(s). The first policy and the reward mechanism are applied to the component(s), and results achieved through this application (e.g., observations of corresponding states, actions and rewards) are processed in accordance with reward-based learning to derive a second policy having improved performance relative to the first policy in at least one state of the component(s).
Description
- The present invention relates generally to data processing systems, and relates more particularly to autonomic computing (i.e., automated management of hardware and software components of data processing systems). Specifically, the present invention provides a method and apparatus for reward-based learning of improved systems management policies.
- Due to the increasing complexity of modern computing systems and of interactions of such systems over networks, there is an urgent need to enable such systems to rapidly and effectively perform self-management functions (e.g., self-configuration, self-optimization, self-healing or self-protection) responsive to rapidly changing conditions and/or circumstances. This entails the development of effective policies pertaining to, for example, dynamic allocation of computational resources, performance tuning of system control parameters, dynamic configuration management, automatic repair or remediation of system faults and actions to mitigate or avoid observed or predicted malicious attacks or cascading system failures.
- Devising such policies typically entails the development of explicit models of system behavior (e.g., based on queuing theory or control theory) and of the system's interactions with external components or processes (e.g., users submitting jobs to the system). Given such a model, an analysis is performed that predicts the consequences of various potential management actions on future system behavior and interactions, and then selects the action resulting in the best predicted behavior. A common problem with such an approach is that devising the necessary models is often a knowledge-intensive and labor-intensive, as well as time-consuming, task. These drawbacks are magnified as the systems become more complex. Moreover, the models are imperfect, so the policies derived therefrom are also imperfect to some degree and can be improved.
- Thus, there is a need in the art for a method and apparatus for reward-based learning of improved systems management policies.
- In one embodiment, the present invention is a method for reward-based learning of improved systems management policies. One embodiment of the inventive method involves supplying a first policy and a reward mechanism. The first policy maps states of at least one component of a data processing system to selected management actions, while the reward mechanism generates numerical measures of value responsive to particular actions (e.g., management actions) performed in particular states of the component(s). The first policy and the reward mechanism are applied to the component(s), and results achieved through this application (e.g., observations of corresponding states, actions and rewards) are processed in accordance with reward-based learning to derive a second policy having improved performance relative to the first policy in at least one state of the component(s).
- So that the manner in which the above recited embodiments of the invention are attained and can be understood in detail, a more particular description of the invention, briefly summarized above, may be obtained by reference to the embodiments thereof which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
- FIG. 1 is a diagram of a networked data processing system in which the present invention may be implemented;
- FIG. 2 is a high level block diagram of a single general purpose computing device that has been advantageously adapted to implement the method of the present invention;
- FIG. 3 is a schematic illustration of one embodiment of a data center for executing the method of the present invention;
- FIG. 4 is a flow chart illustrating a method for deriving a policy for making resource allocation decisions in a computing system; and
- FIG. 5 is a schematic illustration of the basic operations and functionality of one embodiment of an application environment module according to the present invention.
- To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.
- In one embodiment, the present invention is a method for automatically learning a policy for managing a data processing system or at least one component thereof. The method may be implemented, for example, within a data processing system such as a network, a server, or a client computer, as well as in a data processing system component such as a network router, a storage device, an operating system, a database management program or a web application software platform.
- Embodiments of the present invention employ reward-based learning methodologies, including well-known Reinforcement Learning (RL) techniques, in order to generate effective policies (i.e., deterministic or non-deterministic behavioral rules or mappings of computing system states to management actions) for management of a computing system. Within the context of the present invention, the term “reward-based learning” refers to machine learning methods that directly or indirectly learn policies based on one or more temporally related observations of an environment's current state, an action taken in the state, and an instantaneous “reward” (e.g., a scalar measure of value) obtained as a consequence of performing the given action in the given state. Further, within the context of the present invention, “Reinforcement Learning” refers to a general set of trial-and-error reward-based learning methods whereby an agent can learn to make good decisions in an environment through a sequence of interactions. Known Reinforcement Learning methods that may be implemented in accordance with the present invention include value-function learning methods (such as Temporal Difference Learning, Q-Learning or Sarsa), actor-critic methods and direct policy methods (e.g., policy gradient methods).
- FIG. 1 is a schematic illustration of one embodiment of a network data processing system 100 comprising a network of computers (e.g., clients) in which the present invention may be implemented. The network data processing system 100 includes a network 102, a server 104, a storage unit 106 and a plurality of clients 108, 110 and 112. The network 102 is the medium used to provide communications links between the server 104, storage unit 106 and clients 108, 110 and 112 of the network data processing system 100. The network 102 may include connections, such as wired or wireless communication links or fiber optic cables.
- In the embodiment illustrated, the server 104 provides data, such as boot files, operating system images, and applications to the clients 108, 110 and 112, which are clients of the server 104. Although the network data processing system 100 depicted in FIG. 1 comprises a single server 104 and three clients 108, 110 and 112, those skilled in the art will recognize that the network data processing system 100 may include additional servers, clients, and other devices not shown in FIG. 1.
- In one embodiment, the network data processing system 100 is the Internet, with the network 102 representing a worldwide collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with one another. In further embodiments, the network data processing system 100 is implemented as an intranet, a local area network (LAN), or a wide area network (WAN). Furthermore, although FIG. 1 illustrates a network data processing system 100 in which the method of the present invention may be implemented, those skilled in the art will realize that the present invention may be implemented in a variety of other data processing systems, including servers (e.g., server 104) and client computers (e.g., clients 108, 110 and 112). FIG. 1 is intended as an example, and not as an architectural limitation for the present invention.
- For example, FIG. 2 is a high level block diagram of a single general purpose computing device 200 that has been advantageously adapted to implement the method of the present invention. In one embodiment, the general purpose computing device 200 comprises a processor 202, a memory 204, a system management module 205 and various input/output (I/O) devices 206 such as a display, a keyboard, a mouse, a modem, and the like. In one embodiment, at least one I/O device is a storage device (e.g., a disk drive, an optical disk drive, a floppy disk drive). It should be understood that the system management module 205 can be implemented as a physical device or subsystem that is coupled to a processor through a communication channel.
- FIG. 3 is a schematic illustration of one embodiment of a data center 300 for executing the method of the present invention. The data center 300 comprises a plurality of application environment modules 301, 302 and 303, one or more resource arbiters 304 and a plurality of resources 305, 306, 307, 308 and 309. Each application environment 301-303 receives respective demands 313, 314 and 315 from its clients (e.g., the clients 108, 110 and 112 of FIG. 1). Example client types include: online shopping services, online trading services, and online auction services.
- In order to process client demands 313, 314 or 315, the application environments 301-303 may utilize the resources 305-309 within the data center 300. As each application environment 301-303 is independent from the others and provides different services, each application environment 301-303 has its own set of resources 305-309 at its disposal, the use of which must be optimized to maintain the appropriate quality of service (QoS) level for the application environment's clients. An arrow from an application environment 301-303 to a resource 305-309 denotes that the resource 305-309 is currently in use by the application environment 301-303 (e.g., in FIG. 3, resource 305 is currently in use by application environment 301). An application environment 301-303 also makes use of data or software objects, such as respective Service Level Agreements (SLAs) 310, 311 and 312 with its clients, in order to determine its service-level utility function U(S,D). An example SLA 310-312 may specify payments to be made by the client based on mean end-to-end response time averaged over, say, a five-minute time interval. Additionally, the client workload may be divided into a number of service classes (e.g., Gold, Silver and Bronze), and the SLA 310-312 may specify payments based on details of response time characteristics within each service class.
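By way of illustration, the sketch below reduces such an SLA to a utility calculation in which the payment for each service class is a function of that class's mean response time over the averaging interval. The class names follow the example above; the targets, rates and penalties are hypothetical values, not terms from any actual SLA.

```python
# Illustrative sketch of an SLA-based utility: payment per service class is
# a function of that class's mean end-to-end response time (seconds) over
# the averaging interval. Targets, rates and penalties are assumed values.
SLA_TERMS = {
    "Gold":   {"target": 0.1, "rate": 50.0, "penalty": -25.0},
    "Silver": {"target": 0.5, "rate": 20.0, "penalty": -10.0},
    "Bronze": {"target": 2.0, "rate": 5.0,  "penalty": -2.0},
}

def sla_payment(mean_resp_time_by_class):
    """Total payment summed over service classes (the utility U(S, D))."""
    total = 0.0
    for cls, resp_time in mean_resp_time_by_class.items():
        terms = SLA_TERMS[cls]
        total += terms["rate"] if resp_time <= terms["target"] else terms["penalty"]
    return total

# Example: Gold meets its target; Silver and Bronze miss theirs.
print(sla_payment({"Gold": 0.08, "Silver": 0.9, "Bronze": 2.5}))  # -> 38.0
```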
- Each application environment 301-303 is in further communication with the resource arbiter module 304. The resource arbiter 304 is responsible for deciding, at any given time while the data center 300 is in operation, which resources 305-309 may be used by which application environments 301-303. In one embodiment, the application environments 301-303 and resource arbiter 304 are software modules consisting of autonomic elements (e.g., software components that couple conventional computing functionality with additional self-management capabilities), for example written in Java, and communication between modules 301-303 and 304 takes place using standard Java interfaces. The modules 301-303 and 304 may run on a single computer or on different computers connected by a network such as the Internet or a Local Area Network (LAN), e.g., as depicted in FIG. 1. In the networked case, communication may additionally employ standard network communication protocols such as TCP/IP and HTTP, and standard Web interfaces such as OGSA.
- FIG. 4 is a flow chart illustrating a method 400 for automatically deriving a policy for making management decisions in a computing system. In one embodiment, the computing system is at least one component of a data processing system (e.g., the entire data processing system, or just a portion thereof, such as one or more individual components). In one embodiment, the policy is for governing the allocation of computing resources (e.g., physical computing resources, virtual computing resources, power to physical computing devices, etc.) within the computing system. The method 400 may be implemented, for example, in an application environment (e.g., application environment 301, 302 or 303 of FIG. 3), which interacts with a resource arbiter (e.g., resource arbiter 304) via an autonomic manager. The method 400 is initialized at block 402 and proceeds to block 404, where the method 400 obtains an initial decision-making entity and a reward mechanism (e.g., from a user of the computing system to which the method 400 is applied). The initial decision-making entity is capable of making management decisions affecting the computing system, while the reward mechanism generates numerical measures of value responsive to particular actions performed in particular states of the computing system. In one embodiment, the initial decision-making entity is an automated policy that is not related to or derived from reward-based learning. For example, the policy may be based on a set of hand-crafted behavior rules or on an explicit system performance model. In another embodiment, the initial decision-making entity is a human system administrator. In further embodiments, behavior of the initial decision-making entity is not influenced by the learning (e.g., observation of the application of the initial decision-making entity to derive a new, better policy) that occurs in accordance with the method 400, as described in further detail below. In block 406, the method 400 applies the initial policy to the application environment, e.g., via the autonomic manager.
- In block 408, the method records at least one instance of observable data pertaining to the computing system running while being managed by the initial decision-making entity. In one embodiment, an observation in accordance with block 408 is defined by a tuple that, at a given time t (where 0 ≤ t ≤ T), denotes the computing system's current state, s, an action, a, taken by the initial decision-making entity in state s and a reward, r, generated by the reward mechanism responsive to action a in state s. In addition, the observable data may further include the next state to which the computing system transitioned as a result of the action a in state s. In another embodiment, the observable data additionally includes the result of an internal calculation performed by the initial decision-making entity (e.g., one or more expected-value estimates). In a further embodiment, the observed action, a, may comprise an exploratory “off-policy” action differing from the preferred action of the initial decision-making entity, taken in order to facilitate more effective reward-based learning. The observations are logged by the method 400 as training data for use in deriving a new policy, as described in greater detail below.
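A minimal sketch of such a training log, assuming one (s, a, r, s') record per time step (the field and function names are illustrative, not from the patent):

```python
# Sketch of the observation log of block 408: each record is a
# (state, action, reward, next_state) tuple observed while the initial
# decision-making entity manages the system.
from typing import Any, List, NamedTuple

class Observation(NamedTuple):
    state: Any        # current state s of the computing system at time t
    action: Any       # action a taken by the decision-making entity in s
    reward: float     # reward r generated by the reward mechanism
    next_state: Any   # state reached as a result of taking a in s

training_log: List[Observation] = []

def record(state: Any, action: Any, reward: float, next_state: Any) -> None:
    """Log one observation; the accumulated log is the RL training data."""
    training_log.append(Observation(state, action, reward, next_state))
```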
- In block 410, the method 400 applies a reward-based learning algorithm (e.g., a Reinforcement Learning algorithm) to the training data. In one embodiment, the reward-based learning algorithm incrementally learns a value function, Q(s, a), denoting the cumulative discounted or undiscounted long-range expected value when action a is taken in state s. The value function Q(s, a) induces a new policy by application of a value-maximization principle that stipulates selecting, among all admissible actions that could be taken in state s, the action with the greatest expected value. The value function Q(s, a) may be learned by a value function learning algorithm such as Temporal Difference Learning, Q-Learning or Sarsa. For example, in the Sarsa(0) algorithm, one applies to each observed state/action/reward tuple the following learning algorithm:
ΔQ(s_t, a_t) = α(t) [r_t + γ Q(s_t+1, a_t+1) − Q(s_t, a_t)]  (EQN. 1)
where s_t is the initial state at time t, a_t is the action taken at time t, r_t is the immediate reward at time t for taking the action a_t in the initial state s_t, s_t+1 is the next state at time t+1, a_t+1 is the next action taken at time t+1, γ is a constant representing a “discount parameter” having a value between zero and one that expresses the present value of an expected future reward, and α(t) is a “learning rate” parameter that decays to zero asymptotically to ensure convergence.
- In another embodiment, the reward-based learning algorithm learns a function π(s) that directly maps system state s into a selected action. The function π(s) may be learned, for example, by a direct policy method (e.g., a policy gradient method). In a further embodiment, the reward-based learning algorithm learns a non-deterministic function π(s, a) that denotes the probability of selecting action a in state s. In one embodiment, the reward-based learning algorithm is applied off-line, but in other embodiments may be applied on-line.
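Returning to EQN. 1, the following is a minimal tabular sketch of the Sarsa(0) update and of the value-maximizing action selection it induces, assuming hashable states and actions; the discount value and the decay schedule for α(t) are illustrative assumptions:

```python
# Tabular Sarsa(0) sketch implementing EQN. 1.
from collections import defaultdict

Q = defaultdict(float)   # Q[(s, a)] -> long-range expected value estimate
GAMMA = 0.9              # discount parameter, between zero and one (assumed)

def alpha(t):
    """Learning rate decaying asymptotically to zero (assumed schedule)."""
    return 1.0 / (1.0 + 0.01 * t)

def sarsa0_update(t, s_t, a_t, r_t, s_next, a_next):
    """Apply EQN. 1 to one observed (s, a, r, s', a') tuple."""
    delta = r_t + GAMMA * Q[(s_next, a_next)] - Q[(s_t, a_t)]
    Q[(s_t, a_t)] += alpha(t) * delta

def greedy_action(s, admissible_actions):
    """Policy induced by Q: select the value-maximizing admissible action."""
    return max(admissible_actions, key=lambda a: Q[(s, a)])
```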
- In another embodiment, the reward-based learning method comprises learning a state-transition model and an expected reward model, and thereupon using these models to solve for an optimal policy (e.g., by standard Dynamic Programming techniques such as Value Iteration or Policy Iteration).
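A sketch of this model-based alternative, assuming the transition model P[s][a] (a mapping from next states to probabilities) and the expected-reward model R[s][a] have already been estimated from the logged observations:

```python
# Value Iteration over learned transition and expected-reward models.
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    """Solve for optimal state values given learned models P and R."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a].items())
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

def greedy_policy(states, actions, P, R, V, gamma=0.9):
    """Optimal policy induced by the converged values."""
    return {s: max(actions, key=lambda a: R[s][a] + gamma *
                   sum(p * V[s2] for s2, p in P[s][a].items()))
            for s in states}
```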
- In block 412, the method 400 determines whether training may be stopped, or whether additional iterations applying the reward-based algorithm to the training data are necessary. In one embodiment, training is stopped if a measure of training error (e.g., Bellman error) has reached a sufficiently small value. In another embodiment, training is stopped if the measure of training error has converged to an asymptotic value, or if it is decreasing at a sufficiently slow rate. In a further embodiment, training is stopped if an upper bound on the number of training iterations has been reached. If the method 400 concludes in block 412 that an additional iteration is needed, the method 400 returns to block 410 and proceeds as described above to re-apply the reward-based algorithm to the training data.
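One way block 412 might combine these stopping tests is sketched below; all thresholds are illustrative assumptions:

```python
# Sketch of a combined stopping test for the training loop of block 412.
def should_stop(bellman_errors, max_iters=500, tol=1e-3, min_improvement=1e-5):
    if len(bellman_errors) >= max_iters:
        return True                      # upper bound on training iterations
    if bellman_errors and bellman_errors[-1] < tol:
        return True                      # training error sufficiently small
    if len(bellman_errors) >= 2 and \
       abs(bellman_errors[-2] - bellman_errors[-1]) < min_improvement:
        return True                      # error plateaued / decreasing too slowly
    return False
```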
- Alternatively, if the method 400 concludes in block 412 that an additional iteration applying the reward-based algorithm to the training data is not necessary, the method 400 proceeds to block 414 and determines whether additional training data needs to be observed. In one embodiment, additional training data needs to be observed if a measure of training error has not yet reached a sufficiently small value. In another embodiment, an overfitting criterion pertaining to the amount of training data required for a particular nonlinear function approximator representing a learned value function or learned policy is applied. If the method 400 concludes in block 414 that additional training data needs to be observed, the method 400 returns to block 408 and proceeds as described above in order to record additional observable data.
- Alternatively, if the method 400 concludes in block 414 that additional training data does not need to be observed, the method 400 proceeds to block 416 and extracts a new value function, Q(s, a), or, alternatively, a new policy π(s) or π(s, a), as output of the reward-based learning process.
- In block 418, the method 400 applies the new policy or new value function extracted in block 416 in order to make management decisions in one or more states of the computing system. In one embodiment, the new policy or new value function replaces the initial decision-making entity for all subsequent management decisions; however, in other embodiments, the initial decision-making entity is applied in at least certain specified states. The new policy is expected to be “better” than the initial decision-making entity in the sense that the long-term value of applying the new policy is at least as good as, if not better than, the long-term value of applying the initial decision-making entity in at least one state of the data processing system. The method 400 then returns to block 408 in order to assess application of the new policy or new value function in accordance with the blocks described above. In this manner, successive iterations of the method 400 are executed, using the newly derived policies or value functions (e.g., as extracted in block 416) in place of the initial decision-making entities (e.g., applied in block 406).
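Taken together, blocks 406 through 418 form a continuing improvement loop. A compact sketch of that control flow, with the observation and learning stages passed in as callables (the names and signature are placeholders, not an interface defined by the patent):

```python
# Sketch of the method 400 loop: manage with the current decision-making
# entity while logging observations (blocks 406-408), learn off-line from
# the log (blocks 410-416), then apply the derived policy (block 418) and
# repeat. `observe` and `learn` stand in for the stages described above.
def method_400(initial_entity, observe, learn, n_rounds=3):
    entity = initial_entity
    for _ in range(n_rounds):
        log = observe(entity)   # (s, a, r, s') tuples recorded under `entity`
        entity = learn(log)     # new policy/value function from reward-based learning
    return entity
```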
- The method 400 thereby enables the learning of high-quality management policies, without an explicit performance model or traffic model and with little or no built-in system-specific knowledge, by applying reward-based learning. Moreover, off-line training of the reward-based learning algorithm on application log data substantially avoids the poor performance issues typically associated with live on-line training, while scalability is enhanced by the use of nonlinear function approximators (e.g., multi-layer perceptrons), as described in further detail below with respect to FIG. 5.
- FIG. 5 is a schematic illustration of the basic operations and functionality of one embodiment of an application environment module 500 according to the present invention, wherein the application environment module 500 is any of the application environments 301-303 depicted in FIG. 3. In one embodiment, the application environment 500 comprises at least an autonomic manager element 502, an initial value function module 504, a system log data module 506, a reward-based learning (e.g., Reinforcement Learning) module 508 and a trained value function module 510. Interactions of the application environment 500 with its SLA 514, its client demand, its currently allocated resources, and with the resource arbiter element 512, are depicted as they were in FIG. 3.
- In one embodiment, the SLA comprises the reward mechanism as described with respect to FIG. 4. In one embodiment, the SLA specifies payments to be made by the client based on mean end-to-end response time averaged over the resource allocation time interval. Additionally, the client workload provided to the application environment 500 may be divided into a number of service classes (e.g., Gold, Silver and Bronze), and the SLA may specify a total payment, summed over the service classes, based on details of response time characteristics within each service class.
- The initial value function module 504 provides the basis for an initial policy (as described with respect to FIG. 4) to the autonomic manager 502 for application in the application environment 500. In one embodiment pertaining to open-loop traffic, the initial value function is based on a parallel M/M/1 queuing methodology, which estimates, in the current application state, how a hypothetical change in the number of assigned servers would change the anticipated mean response time (and thereby change the anticipated utility as defined by the SLA 514). In another embodiment pertaining to closed-loop traffic, a Mean Value Analysis is used in place of the parallel M/M/1 queuing model.
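For the open-loop case, a minimal sketch of such a queuing-model-based estimate: treating n assigned servers as n parallel M/M/1 queues, each server sees arrival rate λ/n, and the predicted mean response time is 1/(μ − λ/n); the anticipated utility is then read off the SLA. The service-rate parameter and the utility callback (e.g., the sla_payment sketch above, adapted to a single class) are assumptions.

```python
# Parallel M/M/1 sketch of the initial value function for open-loop traffic.
def predicted_response_time(arrival_rate, service_rate, n_servers):
    """Each of n servers sees arrival_rate/n; T = 1/(mu - lambda/n)."""
    per_server = arrival_rate / n_servers
    if per_server >= service_rate:
        return float("inf")   # unstable: demand exceeds capacity at this allocation
    return 1.0 / (service_rate - per_server)

def initial_value(arrival_rate, service_rate, n_servers, utility_of_resp_time):
    """Anticipated utility of a hypothetical allocation of n_servers."""
    t = predicted_response_time(arrival_rate, service_rate, n_servers)
    return utility_of_resp_time(t)

# Example: predicted response time for 4 vs. 8 servers at 100 req/s demand,
# with an assumed per-server service rate of 30 req/s.
for n in (4, 8):
    print(n, predicted_response_time(100.0, 30.0, n))
```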
- In accordance with application of this initial policy, the autonomic manager 502 reports observations (i.e., state/action/reward tuples) to the system log data module 506, which logs the observations as training data for the reward-based learning module 508. In one embodiment, the application environment state, s, at time t comprises the average demand (e.g., number of page requests per second) at time t, the mean response time at time t, the mean queue length at time t and the previous resource level assigned at time t−1.
- The system log data module 506 provides training data (logged observations) to the reward-based learning module 508, which applies a reward-based learning algorithm to the training data in order to learn a new value function Q(s, n) that estimates the long-term value of the allocation of a specified resource (e.g., n servers) to the application environment operating in its current state s. In one embodiment, the new value function is trained using the Sarsa(0) algorithm as described above with respect to EQN. 1. In one embodiment, the new value function Q(s, n) is represented by a standard multi-layer perceptron function approximator comprising one input unit per state variable in the state description at time t, one input unit to represent the resource level (e.g., number of servers) assigned at time t, a single hidden layer comprising twelve sigmoidal hidden units and a single linear output unit estimating the long-range value function Q(s, n). In one embodiment, an iteration of the reward-based learning process consists of training the multi-layer perceptron on the training data by repeatedly performing a series of steps until a maximum number of training steps has been reached. In one embodiment, a random time step, t, is selected, where 0 ≤ t ≤ T, such that the input to the multi-layer perceptron comprises one of the training observations (s_t, a_t) and the current output value estimate is Q(s_t, a_t). An output error signal ΔQ(s_t, a_t) is then computed in accordance with EQN. 1. This error signal is back-propagated using a back-propagation training algorithm to compute small additive positive or negative changes in the weight values of the multi-layer perceptron. These weight values are then changed by adding the computed small changes.
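The sketch below, in numpy, illustrates this training iteration under stated assumptions: five inputs (four state variables plus the resource level), twelve sigmoidal hidden units, a linear output, and hyperparameters and initialization chosen purely for illustration. Each training record is assumed to pack the network inputs for time t and for time t+1, so that the Sarsa(0) error of EQN. 1 can be formed.

```python
# Minimal numpy sketch of the described MLP value function and one
# back-propagation training step driven by the Sarsa(0) error signal.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID = 5, 12                       # 4 state variables + 1 resource-level input (assumed)
W1 = rng.normal(0.0, 0.1, (N_HID, N_IN))  # input-to-hidden weights
W2 = rng.normal(0.0, 0.1, N_HID)          # hidden-to-output weights (linear output)

def q_value(x):
    """Forward pass: sigmoidal hidden layer, single linear output Q(s, n)."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x)))
    return h, float(W2 @ h)

def train_step(observations, gamma=0.9, lr=0.01):
    """Pick a random time t, form the EQN. 1 error, and apply small
    additive weight changes via back-propagation."""
    global W1, W2
    t = rng.integers(len(observations))
    x_t, r_t, x_next = observations[t]    # x packs state variables + resource level
    h, q_t = q_value(x_t)                 # current output value estimate Q(s_t, n_t)
    _, q_next = q_value(x_next)           # Q at the next state/resource level
    err = r_t + gamma * q_next - q_t      # output error signal (EQN. 1)
    grad_hidden = W2 * h * (1.0 - h)      # error back-propagated through the sigmoids
    W2 = W2 + lr * err * h
    W1 = W1 + lr * err * np.outer(grad_hidden, x_t)
```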
learning module 508, the trained value function is extracted in the trainedvalue function module 510 and in turn conveyed to theautonomic manager 502. As described above, theautonomic manager 502 may utilize this new trained value function in place of the initial (queuing model-based) value function when reporting resource valuation estimates to theresource arbiter 512. In one embodiment, each of the application environments (e.g.,application environments FIG. 3 ) will utilize the same reward-based learning process to derive respective new value functions, and each application environment will replace their respective initial value functions with their respective new trained value functions in a substantially simultaneous manner. It is then expected that the decisions of theresource arbiter 512, which processes value functions received from each application environment in order to compute a globally optimal resource allocation, will be improved on average by applying the trained value functions in place of the initial value functions within each of the application environments. - Although the
- Although the application environment 500 is illustrated as including discrete modules for system log data and the new (trained) value function, those skilled in the art will appreciate that the autonomic manager 502 may provide system log data directly to the reward-based learning module 508, without the assistance of the system log data module 506. Similarly, the reward-based learning module 508 may report the new trained value function directly to the autonomic manager 502, bypassing the trained value function module 510.
- Referring back to FIG. 2, those skilled in the art will also appreciate that the methods of the present invention (e.g., as embodied in the system management module 205) can be represented by one or more software applications (or even a combination of software and hardware, e.g., using Application Specific Integrated Circuits (ASIC)), where the software is loaded from a storage medium (e.g., I/O devices 206) and operated by the processor 202 in the memory 204 of the general purpose computing device 200. Thus, in one embodiment, the system management module 205 implementing the invention described herein with reference to the preceding Figures can be stored on a computer readable medium or carrier (e.g., RAM, magnetic or optical drive or diskette, and the like).
- The functionalities of the arbiters and the application environments described with reference to FIGS. 3 and 5 may be performed by software modules of various types. For example, in one embodiment, the arbiters and/or application environments comprise autonomic elements. In another embodiment, the arbiters and/or application environments comprise autonomous agent software as may be constructed, for example, using the Agent Building and Learning Environment (ABLE). The arbiters and/or application environments may all run on a single computer, or they may run independently on different computers. Communication between the arbiters and the application environments may take place using standard interfaces and communication protocols. In the case of arbiters and application environments running on different computers, standard network interfaces and communication protocols may be employed, such as Web Services interfaces (e.g., those employed in the Open Grid Services Architecture (OGSA)).
- Thus, the present invention represents a significant advancement in the field of systems management. The present invention enables the learning of high-quality management policies, without an explicit performance model or traffic model and with little or no built-in system-specific knowledge, by applying reward-based learning. Moreover, off-line application of the reward-based algorithm on application log data substantially avoids the poor performance issues typically associated with live on-line training, while scalability is enhanced by the use of nonlinear function approximators (e.g., multi-layer perceptrons).
- While the foregoing is directed to the preferred embodiment of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
Claims (20)
1. A method for learning a policy for management of at least one component of a data processing system, the method comprising:
obtaining a decision-making entity for managing said at least one component;
obtaining a reward mechanism for generating numerical measures of value responsive to at least one action performed in at least one state of said at least one component;
applying said decision-making entity and said reward mechanism to said at least one component;
processing a result achieved through application of said decision-making entity and said reward mechanism in accordance with reward-based learning; and
deriving said policy in accordance with said reward-based learning processing.
2. The method of claim 1, wherein a performance measure associated with application of said reward mechanism and said policy to said at least one component is greater than a performance measure associated with application of said reward mechanism and said decision-making entity to said at least one component in at least one state of said at least one component.
3. The method of claim 1, further comprising:
applying said reward mechanism and said policy to said at least one component.
4. The method of claim 3, further comprising:
processing a result achieved through application of said reward mechanism and said policy in accordance with said reward-based learning; and
deriving a new policy in accordance with said reward-based learning processing.
5. The method of claim 1, wherein said decision-making entity comprises at least one of: a human administrator, a rule-based method or a system model-based method.
6. The method of claim 1, wherein said result comprises at least one observed state of said at least one component, at least one observed action responsive to said decision-making entity and at least one observed reward generated by said reward mechanism.
7. The method of claim 6, wherein said result further comprises at least one observed transition of said at least one component from an initial state to a subsequent state.
8. The method of claim 7, wherein said result further comprises at least one observed result of an internal calculation performed by said decision-making entity.
9. The method of claim 8, wherein said result of said internal calculation is one or more expected-value estimates.
10. The method of claim 6, wherein said result further comprises at least one exploratory off-policy action that differs from actions responsive to said decision-making entity.
11. The method of claim 1, wherein said processing comprises:
learning a state transition model;
learning an expected-reward model; and
deriving said policy in accordance with said state transition model and said expected-reward model.
12. The method of claim 1, wherein said reward-based learning comprises reinforcement learning.
13. The method of claim 12, wherein said reinforcement learning comprises at least one of: value-function learning, actor-critic learning or direct policy learning.
14. The method of claim 1, wherein said processing is performed off-line.
15. The method of claim 1, wherein said processing is performed on-line.
16. The method of claim 1, wherein said policy is applied to the allocation of one or more computing resources available to said at least one component, said one or more computing resources comprising at least one of: a physical computing resource, a virtual computing resource or power to a physical computing device.
17. A computer readable medium containing an executable program for learning a policy for management of at least one component of a data processing system, where the program performs the steps of:
obtaining a decision-making entity for managing said at least one component;
obtaining a reward mechanism for generating numerical measures of value responsive to at least one action performed in at least one state of said at least one component;
applying said decision-making entity and said reward mechanism to said at least one component;
processing a result achieved through application of said decision-making entity and said reward mechanism in accordance with reward-based learning; and
deriving said policy in accordance with said reward-based learning processing.
18. The computer readable medium of claim 17, wherein a performance measure associated with application of said reward mechanism and said policy to said at least one component is greater than a performance measure associated with application of said reward mechanism and said decision-making entity to said at least one component in at least one state of said at least one component.
19. The computer readable medium of claim 17, wherein said reward-based learning comprises reinforcement learning.
20. Apparatus for learning a policy for management of at least one component of a data processing system, the apparatus comprising:
means for obtaining a decision-making entity for managing said at least one component;
means for obtaining a reward mechanism for generating numerical measures of value responsive to at least one action performed in at least one state of said at least one component;
means for applying said decision-making entity and said reward mechanism to said at least one component;
means for processing a result achieved through application of said decision-making entity and said reward mechanism in accordance with reward-based learning; and
means for deriving said policy in accordance with said reward-based learning processing.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/337,311 US20070203871A1 (en) | 2006-01-23 | 2006-01-23 | Method and apparatus for reward-based learning of improved systems management policies |
US12/165,144 US8001063B2 (en) | 2006-01-23 | 2008-06-30 | Method and apparatus for reward-based learning of improved policies for management of a plurality of application environments supported by a data processing system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/337,311 US20070203871A1 (en) | 2006-01-23 | 2006-01-23 | Method and apparatus for reward-based learning of improved systems management policies |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/165,144 Continuation US8001063B2 (en) | 2006-01-23 | 2008-06-30 | Method and apparatus for reward-based learning of improved policies for management of a plurality of application environments supported by a data processing system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070203871A1 true US20070203871A1 (en) | 2007-08-30 |
Family
ID=38445235
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/337,311 Abandoned US20070203871A1 (en) | 2006-01-23 | 2006-01-23 | Method and apparatus for reward-based learning of improved systems management policies |
US12/165,144 Expired - Fee Related US8001063B2 (en) | 2006-01-23 | 2008-06-30 | Method and apparatus for reward-based learning of improved policies for management of a plurality of application environments supported by a data processing system |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/165,144 Expired - Fee Related US8001063B2 (en) | 2006-01-23 | 2008-06-30 | Method and apparatus for reward-based learning of improved policies for management of a plurality of application environments supported by a data processing system |
Country Status (1)
Country | Link |
---|---|
US (2) | US20070203871A1 (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110219068A1 (en) * | 2010-03-02 | 2011-09-08 | International Busiiness Machines Corporation | Flexible Delegation of Management Function For Self-Managing Resources |
US20110229864A1 (en) * | 2009-10-02 | 2011-09-22 | Coreculture Inc. | System and method for training |
US20130185039A1 (en) * | 2012-01-12 | 2013-07-18 | International Business Machines Corporation | Monte-carlo planning using contextual information |
US20170186125A1 (en) * | 2008-06-30 | 2017-06-29 | Autonomous Solutions, Inc. | Vehicle dispatching method and system |
US20170308489A1 (en) * | 2014-09-23 | 2017-10-26 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Speculative and iterative execution of delayed data flow graphs |
WO2017188968A1 (en) | 2016-04-29 | 2017-11-02 | Hewlett Packard Enterprise Development Lp | Storage device failure policies |
EP3279820A1 (en) * | 2016-08-01 | 2018-02-07 | Siemens Healthcare GmbH | Medical scanner teaches itself to optimize clinical protocols and image acquisition |
US20180101473A1 (en) * | 2014-10-13 | 2018-04-12 | Microsoft Technology Licensing, Llc | Application testing |
US20180164756A1 (en) * | 2016-12-14 | 2018-06-14 | Fanuc Corporation | Control system and machine learning device |
US10445653B1 (en) * | 2014-08-07 | 2019-10-15 | Deepmind Technologies Limited | Evaluating reinforcement learning policies |
CN111159063A (en) * | 2019-12-25 | 2020-05-15 | 大连理工大学 | Cache allocation method for multi-layer Sketch network measurement |
WO2020190460A1 (en) * | 2019-03-20 | 2020-09-24 | Sony Corporation | Reinforcement learning through a double actor critic algorithm |
US10839302B2 (en) | 2015-11-24 | 2020-11-17 | The Research Foundation For The State University Of New York | Approximate value iteration with complex returns by bounding |
US10885432B1 (en) * | 2015-12-16 | 2021-01-05 | Deepmind Technologies Limited | Selecting actions from large discrete action sets using reinforcement learning |
WO2021076222A1 (en) * | 2019-10-15 | 2021-04-22 | UiPath, Inc. | Reinforcement learning in robotic process automation |
CN112769594A (en) * | 2020-12-14 | 2021-05-07 | 北京邮电大学 | Intra-network service function deployment method based on multi-agent reinforcement learning |
US11037058B2 (en) * | 2018-08-27 | 2021-06-15 | Vmware, Inc. | Transferable training for automated reinforcement-learning-based application-managers |
US11153375B2 (en) * | 2019-09-30 | 2021-10-19 | Adobe Inc. | Using reinforcement learning to scale queue-based services |
US11157488B2 (en) * | 2017-12-13 | 2021-10-26 | Google Llc | Reinforcement learning techniques to improve searching and/or to conserve computational and network resources |
US11429892B2 (en) * | 2018-03-23 | 2022-08-30 | Adobe Inc. | Recommending sequences of content with bootstrapped reinforcement learning |
US11568236B2 (en) | 2018-01-25 | 2023-01-31 | The Research Foundation For The State University Of New York | Framework and methods of diverse exploration for fast and safe policy improvement |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8589423B2 (en) | 2011-01-18 | 2013-11-19 | Red 5 Studios, Inc. | Systems and methods for generating enhanced screenshots |
US8959042B1 (en) * | 2011-04-18 | 2015-02-17 | The Boeing Company | Methods and systems for estimating subject cost from surveillance |
US8756177B1 (en) * | 2011-04-18 | 2014-06-17 | The Boeing Company | Methods and systems for estimating subject intent from surveillance |
US8793313B2 (en) | 2011-09-08 | 2014-07-29 | Red 5 Studios, Inc. | Systems, methods and media for distributing peer-to-peer communications |
JP5879899B2 (en) * | 2011-10-12 | 2016-03-08 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
US8628424B1 (en) | 2012-06-28 | 2014-01-14 | Red 5 Studios, Inc. | Interactive spectator features for gaming environments |
US8632411B1 (en) * | 2012-06-28 | 2014-01-21 | Red 5 Studios, Inc. | Exchanging virtual rewards for computing resources |
US8834268B2 (en) | 2012-07-13 | 2014-09-16 | Red 5 Studios, Inc. | Peripheral device control and usage in a broadcaster mode for gaming environments |
US8795086B2 (en) | 2012-07-20 | 2014-08-05 | Red 5 Studios, Inc. | Referee mode within gaming environments |
US9418059B1 (en) * | 2013-02-28 | 2016-08-16 | The Boeing Company | Methods and systems for processing natural language for machine learning |
WO2017218699A1 (en) * | 2016-06-17 | 2017-12-21 | Graham Leslie Fyffe | System and methods for intrinsic reward reinforcement learning |
WO2018081833A1 (en) * | 2016-10-31 | 2018-05-03 | Talla, Inc. | State machine methods and apparatus executing natural language communications, and al agents monitoring status and triggering transitions |
WO2018146770A1 (en) * | 2017-02-09 | 2018-08-16 | 三菱電機株式会社 | Position control device and position control method |
US11042640B2 (en) * | 2018-08-27 | 2021-06-22 | Vmware, Inc. | Safe-operation-constrained reinforcement-learning-based application manager |
US10802864B2 (en) * | 2018-08-27 | 2020-10-13 | Vmware, Inc. | Modular reinforcement-learning-based application manager |
KR102309682B1 (en) * | 2019-01-22 | 2021-10-07 | (주)티비스톰 | Method and platform for providing ai entities being evolved through reinforcement machine learning |
US11577859B1 (en) | 2019-10-08 | 2023-02-14 | Rockwell Collins, Inc. | Fault resilient airborne network |
US11469797B2 (en) | 2020-04-03 | 2022-10-11 | Samsung Electronics Co., Ltd | Rank indicator (RI) and channel quality indicator (CQI) estimation using a multi-layer perceptron (MLP) |
US11860589B2 (en) | 2021-01-05 | 2024-01-02 | Honeywell International Inc. | Method and apparatus for tuning a regulatory controller |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU3477397A (en) * | 1996-06-04 | 1998-01-05 | Paul J. Werbos | 3-brain architecture for an intelligent decision and control system |
US7490058B2 (en) * | 2001-03-29 | 2009-02-10 | International Business Machines Corporation | Automated dynamic negotiation of electronic service contracts |
EP1476834A1 (en) * | 2002-02-07 | 2004-11-17 | Thinkdynamics Inc. | Method and system for managing resources in a data center |
US7873527B2 (en) * | 2003-05-14 | 2011-01-18 | International Business Machines Corporation | Insurance for service level agreements in e-utilities and other e-service environments |
US7437719B2 (en) * | 2003-09-30 | 2008-10-14 | Intel Corporation | Combinational approach for developing building blocks of DSP compiler |
US8346909B2 (en) * | 2004-01-22 | 2013-01-01 | International Business Machines Corporation | Method for supporting transaction and parallel application workloads across multiple domains based on service level agreements |
US20070006278A1 (en) * | 2005-06-29 | 2007-01-04 | Ioan Avram Mircea S | Automated dissemination of enterprise policy for runtime customization of resource arbitration |
- 2006-01-23: application US11/337,311 filed; published as US20070203871A1 (status: Abandoned)
- 2008-06-30: application US12/165,144 filed; published as US8001063B2 (status: Expired - Fee Related)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6230200B1 (en) * | 1997-09-08 | 2001-05-08 | Emc Corporation | Dynamic modeling for resource allocation in a file server |
US20050141554A1 (en) * | 2003-12-29 | 2005-06-30 | Intel Corporation | Method and system for dynamic resource allocation |
US20060179383A1 (en) * | 2005-01-21 | 2006-08-10 | Microsoft Corporation | Extending test sequences to accepting states |
US20060224535A1 (en) * | 2005-03-08 | 2006-10-05 | Microsoft Corporation | Action selection for reinforcement learning using influence diagrams |
Cited By (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220114690A1 (en) * | 2008-06-30 | 2022-04-14 | Autonomous Solutions, Inc. | Vehicle dispatching method and system |
US20170186125A1 (en) * | 2008-06-30 | 2017-06-29 | Autonomous Solutions, Inc. | Vehicle dispatching method and system |
US20170185087A1 (en) * | 2008-06-30 | 2017-06-29 | Autonomous Solutions, Inc. | Vehicle dispatching method and system |
US11100601B2 (en) * | 2008-06-30 | 2021-08-24 | Autonomous Solutions Inc. | Vehicle dispatching method and system |
US11049208B2 (en) * | 2008-06-30 | 2021-06-29 | Autonomous Solutions, Inc. | Vehicle dispatching method and system |
US20110229864A1 (en) * | 2009-10-02 | 2011-09-22 | Coreculture Inc. | System and method for training |
US8719400B2 (en) * | 2010-03-02 | 2014-05-06 | International Business Machines Corporation | Flexible delegation of management function for self-managing resources |
US9479389B2 (en) | 2010-03-02 | 2016-10-25 | International Business Machines Corporation | Flexible delegation of management function for self-managing resources |
US20110219068A1 (en) * | 2010-03-02 | 2011-09-08 | International Busiiness Machines Corporation | Flexible Delegation of Management Function For Self-Managing Resources |
US20130185039A1 (en) * | 2012-01-12 | 2013-07-18 | International Business Machines Corporation | Monte-carlo planning using contextual information |
US9047423B2 (en) * | 2012-01-12 | 2015-06-02 | International Business Machines Corporation | Monte-Carlo planning using contextual information |
US11429898B1 (en) | 2014-08-07 | 2022-08-30 | Deepmind Technologies Limited | Evaluating reinforcement learning policies |
US10445653B1 (en) * | 2014-08-07 | 2019-10-15 | Deepmind Technologies Limited | Evaluating reinforcement learning policies |
US10394729B2 (en) * | 2014-09-23 | 2019-08-27 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Speculative and iterative execution of delayed data flow graphs |
US20170308489A1 (en) * | 2014-09-23 | 2017-10-26 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Speculative and iterative execution of delayed data flow graphs |
US11182280B2 (en) * | 2014-10-13 | 2021-11-23 | Microsoft Technology Licensing, Llc | Application testing |
US20180101473A1 (en) * | 2014-10-13 | 2018-04-12 | Microsoft Technology Licensing, Llc | Application testing |
US10839302B2 (en) | 2015-11-24 | 2020-11-17 | The Research Foundation For The State University Of New York | Approximate value iteration with complex returns by bounding |
US11907837B1 (en) | 2015-12-16 | 2024-02-20 | Deepmind Technologies Limited | Selecting actions from large discrete action sets using reinforcement learning |
US10885432B1 (en) * | 2015-12-16 | 2021-01-05 | Deepmind Technologies Limited | Selecting actions from large discrete action sets using reinforcement learning |
EP3278224A4 (en) * | 2016-04-29 | 2019-01-30 | Hewlett-Packard Enterprise Development LP | Storage device failure policies |
CN107636617A (en) * | 2016-04-29 | 2018-01-26 | 慧与发展有限责任合伙企业 | Storage device failure strategy |
WO2017188968A1 (en) | 2016-04-29 | 2017-11-02 | Hewlett Packard Enterprise Development Lp | Storage device failure policies |
US11468359B2 (en) * | 2016-04-29 | 2022-10-11 | Hewlett Packard Enterprise Development Lp | Storage device failure policies |
EP3279820A1 (en) * | 2016-08-01 | 2018-02-07 | Siemens Healthcare GmbH | Medical scanner teaches itself to optimize clinical protocols and image acquisition |
CN107680657A (en) * | 2016-08-01 | 2018-02-09 | 西门子保健有限责任公司 | Medical scanners learn by oneself optimization clinical protocol and IMAQ |
US10049301B2 (en) | 2016-08-01 | 2018-08-14 | Siemens Healthcare Gmbh | Medical scanner teaches itself to optimize clinical protocols and image acquisition |
US10564611B2 (en) * | 2016-12-14 | 2020-02-18 | Fanuc Corporation | Control system and machine learning device |
CN108227482A (en) * | 2016-12-14 | 2018-06-29 | 发那科株式会社 | Control system and machine learning device |
US20180164756A1 (en) * | 2016-12-14 | 2018-06-14 | Fanuc Corporation | Control system and machine learning device |
US11157488B2 (en) * | 2017-12-13 | 2021-10-26 | Google Llc | Reinforcement learning techniques to improve searching and/or to conserve computational and network resources |
US11568236B2 (en) | 2018-01-25 | 2023-01-31 | The Research Foundation For The State University Of New York | Framework and methods of diverse exploration for fast and safe policy improvement |
US11429892B2 (en) * | 2018-03-23 | 2022-08-30 | Adobe Inc. | Recommending sequences of content with bootstrapped reinforcement learning |
US11037058B2 (en) * | 2018-08-27 | 2021-06-15 | Vmware, Inc. | Transferable training for automated reinforcement-learning-based application-managers |
WO2020190460A1 (en) * | 2019-03-20 | 2020-09-24 | Sony Corporation | Reinforcement learning through a double actor critic algorithm |
US11816591B2 (en) | 2019-03-20 | 2023-11-14 | Sony Group Corporation | Reinforcement learning through a double actor critic algorithm |
US20210377340A1 (en) * | 2019-09-30 | 2021-12-02 | Adobe Inc. | Using reinforcement learning to scale queue-based services |
US11153375B2 (en) * | 2019-09-30 | 2021-10-19 | Adobe Inc. | Using reinforcement learning to scale queue-based services |
US11700302B2 (en) * | 2019-09-30 | 2023-07-11 | Adobe Inc. | Using reinforcement learning to scale queue-based services |
WO2021076222A1 (en) * | 2019-10-15 | 2021-04-22 | UiPath, Inc. | Reinforcement learning in robotic process automation |
US11775860B2 (en) | 2019-10-15 | 2023-10-03 | UiPath, Inc. | Reinforcement learning in robotic process automation |
CN111159063A (en) * | 2019-12-25 | 2020-05-15 | 大连理工大学 | Cache allocation method for multi-layer Sketch network measurement |
CN112769594A (en) * | 2020-12-14 | 2021-05-07 | 北京邮电大学 | Intra-network service function deployment method based on multi-agent reinforcement learning |
Also Published As
Publication number | Publication date |
---|---|
US8001063B2 (en) | 2011-08-16 |
US20090012922A1 (en) | 2009-01-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8001063B2 (en) | Method and apparatus for reward-based learning of improved policies for management of a plurality of application environments supported by a data processing system | |
Toka et al. | Machine learning-based scaling management for kubernetes edge clusters | |
Moreno-Vozmediano et al. | Efficient resource provisioning for elastic cloud services based on machine learning techniques | |
US8060454B2 (en) | Method and apparatus for improved reward-based learning using nonlinear dimensionality reduction | |
Song et al. | Trusted grid computing with security binding and trust integration | |
US9083734B1 (en) | Integrated forensics platform for analyzing IT resources consumed to derive operational and architectural recommendations | |
Hellerstein et al. | A statistical approach to predictive detection | |
US20050172291A1 (en) | Method and apparatus for utility-based dynamic resource allocation in a distributed computing system | |
WO2017189084A1 (en) | Method and system for applying dynamic and adaptive testing techniques to a software system to improve selection of predictive models for personalizing user experiences in the software system | |
US20090099985A1 (en) | Method and apparatus for improved reward-based learning using adaptive distance metrics | |
WO2017119997A1 (en) | Method and system for adjusting analytics model characteristics to reduce uncertainty in determining users' preferences for user experience options, to support providing personalized user experiences to users with a software system | |
US11579933B2 (en) | Method for establishing system resource prediction and resource management model through multi-layer correlations | |
US20070118631A1 (en) | Method and apparatus for performance and policy analysis in distributed computing systems | |
Bahrpeyma et al. | An adaptive RL based approach for dynamic resource provisioning in Cloud virtualized data centers | |
KR20180027995A (en) | Method and apparatus for future prediction in Internet of thing | |
de Gyvés Avila et al. | Fuzzy logic based QoS optimization mechanism for service composition | |
Wu et al. | Intent-driven cloud resource design framework to meet cloud performance requirements and its application to a cloud-sensor system | |
Nawrocki et al. | Data-driven adaptive prediction of cloud resource usage | |
Gyeera et al. | Regression Analysis of Predictions and Forecasts of Cloud Data Center KPIs Using the Boosted Decision Tree Algorithm | |
Mounzer et al. | Dynamic control and mitigation of interdependent IT security risks | |
US10515381B2 (en) | Spending allocation in multi-channel digital marketing | |
Martinez-Julia et al. | Explained intelligent management decisions in virtual networks and network slices | |
Jaswal et al. | AFTM-agent based fault tolerance manager in cloud environment. | |
Li et al. | An automated VNF manager based on parameterized action MDP and reinforcement learning | |
Feng et al. | A large-scale holistic measurement of crowdsourced edge cloud platform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TESAURO, GERALD J.;DAS, RAJARSHI;JONG, NICHOLAS K.;AND OTHERS;REEL/FRAME:017317/0258
Effective date: 20060120
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |