# Reinforcement learning

Reinforcement learning (RL) is an area of machine learning concerned with how software agents ought to take actions in an environment in order to maximize some notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning.

Reinforcement learning differs from supervised learning in not needing labelled input/output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected. Instead, the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).[1]

The environment is typically stated in the form of a Markov decision process (MDP), because many reinforcement learning algorithms for this context utilize dynamic programming techniques.[2] The main difference between the classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the MDP and they target large MDPs where exact methods become infeasible.

## Introduction

Figure: the typical framing of a reinforcement learning (RL) scenario. An agent takes actions in an environment; the outcome is interpreted into a reward and a representation of the state, which are fed back into the agent.


Reinforcement learning, due to its generality, is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In the operations research and control literature, reinforcement learning is called approximate dynamic programming, or neuro-dynamic programming. The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment. In economics and game theory, reinforcement learning may be used to explain how equilibrium may arise under bounded rationality.

Basic reinforcement learning is modeled as a Markov decision process (MDP), consisting of the following components (a minimal code sketch follows the list):

• a set of environment and agent states, S;
• a set of actions, A, of the agent;
• $\displaystyle{ P_a(s,s')=\Pr(s_{t+1}=s'\mid s_t=s, a_t=a) }$ is the probability of transition (at time $\displaystyle{ t }$) from state $\displaystyle{ s }$ to state $\displaystyle{ s' }$ under action $\displaystyle{ a }$.
• $\displaystyle{ R_a(s,s') }$ is the immediate reward after transition from $\displaystyle{ s }$ to $\displaystyle{ s' }$ with action $\displaystyle{ a }$.
• rules that describe what the agent observes

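For concreteness, these components can be written down directly as data. The following minimal Python sketch (not part of the original text) encodes a made-up two-state, two-action MDP with transition probabilities $\displaystyle{ P_a(s,s') }$ and rewards $\displaystyle{ R_a(s,s') }$, and samples one transition; all names and numbers are illustrative only.

```python
import random

# States S, actions A, transition probabilities P_a(s, s') and rewards R_a(s, s')
# for a made-up two-state, two-action MDP (all numbers are illustrative only).
states = ["s0", "s1"]
actions = ["left", "right"]

# P[(a, s)] maps a next state s' to Pr(s_{t+1} = s' | s_t = s, a_t = a).
P = {
    ("left", "s0"):  {"s0": 0.9, "s1": 0.1},
    ("right", "s0"): {"s0": 0.2, "s1": 0.8},
    ("left", "s1"):  {"s0": 0.7, "s1": 0.3},
    ("right", "s1"): {"s0": 0.0, "s1": 1.0},
}

# R[(a, s, s')] is the immediate reward for the transition s -> s' under action a.
R = {
    ("left", "s0", "s0"): 0.0, ("left", "s0", "s1"): 1.0,
    ("right", "s0", "s0"): 0.0, ("right", "s0", "s1"): 2.0,
    ("left", "s1", "s0"): -1.0, ("left", "s1", "s1"): 0.0,
    ("right", "s1", "s0"): 0.0, ("right", "s1", "s1"): 0.5,
}

def step(s, a):
    """Sample s_{t+1} from P_a(s, .) and return (next state, immediate reward)."""
    next_states = list(P[(a, s)].keys())
    probs = list(P[(a, s)].values())
    s_next = random.choices(next_states, weights=probs)[0]
    return s_next, R[(a, s, s_next)]

print(step("s0", "right"))  # e.g. ('s1', 2.0)
```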

Rules are often stochastic. The observation typically involves the scalar, immediate reward associated with the last transition. In many works, the agent is assumed to observe the current environmental state (full observability). If not, the agent has partial observability. Sometimes the set of actions available to the agent is restricted (for example, a balance cannot be reduced below zero: if the agent's current balance is 3 and a transition would reduce it by 4, that transition is not allowed).

A reinforcement learning agent interacts with its environment in discrete time steps. At each time t, the agent receives an observation $\displaystyle{ o_t }$, which typically includes the reward $\displaystyle{ r_t }$. It then chooses an action $\displaystyle{ a_t }$ from the set of available actions, which is subsequently sent to the environment. The environment moves to a new state $\displaystyle{ s_{t+1} }$ and the reward $\displaystyle{ r_{t+1} }$ associated with the transition $\displaystyle{ (s_t,a_t,s_{t+1}) }$ is determined. The goal of a reinforcement learning agent is to collect as much reward as possible. The agent can (possibly randomly) choose any action as a function of the history.

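The interaction loop just described can be sketched as follows; the toy environment and the uniformly random policy are hypothetical stand-ins, used only to show the observe, act, reward cycle.

```python
import random

def run_episode(env_step, initial_state, policy, horizon=10):
    """Agent-environment loop: at each step t the agent observes the state,
    picks an action, and the environment returns (next state, reward)."""
    s, total_reward = initial_state, 0.0
    for t in range(horizon):
        a = policy(s)                # choose a_t from the available actions
        s, r = env_step(s, a)        # environment produces s_{t+1} and r_{t+1}
        total_reward += r            # the agent tries to maximize this sum
    return total_reward

# Stub environment and uniformly random policy, purely for illustration.
def toy_step(s, a):
    return (s + 1) % 3, 1.0 if a == "good" else 0.0

print(run_episode(toy_step, 0, lambda s: random.choice(["good", "bad"])))
```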

When the agent's performance is compared to that of an agent that acts optimally, the difference in performance gives rise to the notion of regret. In order to act near optimally, the agent must reason about the long term consequences of its actions (i.e., maximize future income), although the immediate reward associated with this might be negative.

Thus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-term reward trade-off. It has been applied successfully to various problems, including robot control, elevator scheduling, telecommunications, backgammon, checkers and Go (AlphaGo).

Two elements make reinforcement learning powerful: the use of samples to optimize performance and the use of function approximation to deal with large environments. Thanks to these two key components, reinforcement learning can be used in large environments in the following situations:

• A model of the environment is known, but an analytic solution is not available;
• Only a simulation model of the environment is given (the subject of simulation-based optimization);[3]
• The only way to collect information about the environment is to interact with it.

The first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered to be a genuine learning problem. However, reinforcement learning converts both planning problems to machine learning problems.

## Exploration

The exploration vs. exploitation trade-off has been most thoroughly studied through the multi-armed bandit problem and for finite state space MDPs in Burnetas and Katehakis (1997).[5]

Reinforcement learning requires clever exploration mechanisms; randomly selecting actions, without reference to an estimated probability distribution, shows poor performance. The case of (small) finite Markov decision processes is relatively well understood. However, due to the lack of algorithms that scale well with the number of states (or scale to problems with infinite state spaces), simple exploration methods are the most practical.

One such method is $\displaystyle{ \epsilon }$-greedy, where $\displaystyle{ 0 \lt \epsilon \lt 1 }$ is a parameter controlling the amount of exploration vs. exploitation. With probability $\displaystyle{ 1-\epsilon }$, exploitation is chosen, and the agent chooses the action that it believes has the best long-term effect (ties between actions are broken uniformly at random). Alternatively, with probability $\displaystyle{ \epsilon }$, exploration is chosen, and the action is chosen uniformly at random. $\displaystyle{ \epsilon }$ is usually a fixed parameter but can be adjusted either according to a schedule (making the agent explore progressively less), or adaptively based on heuristics.[7]

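As a concrete illustration, a minimal $\displaystyle{ \epsilon }$-greedy action selector might look like the following sketch; the action names and value estimates are made up.

```python
import random

def epsilon_greedy(q_values, actions, epsilon=0.1):
    """With probability epsilon explore (uniform random action);
    otherwise exploit the action with the highest estimated value,
    breaking ties uniformly at random."""
    if random.random() < epsilon:
        return random.choice(actions)               # exploration
    best = max(q_values[a] for a in actions)
    best_actions = [a for a in actions if q_values[a] == best]
    return random.choice(best_actions)              # exploitation, ties broken randomly

q = {"left": 0.4, "right": 0.9, "stay": 0.9}
print(epsilon_greedy(q, ["left", "right", "stay"], epsilon=0.2))
```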

## Algorithms for control learning

Even if the issue of exploration is disregarded and even if the state was observable (assumed hereafter), the problem remains to use past experience to find out which actions lead to higher cumulative rewards.

### Definition and criterion of optimality

#### Policy

The agent's action selection is modeled as a map called policy:

$\displaystyle{ \pi: A \times S \rightarrow [0,1] }$
$\displaystyle{ \pi(a,s) = \Pr(a_t = a\mid s_t =s) }$

The policy map gives the probability of taking action $\displaystyle{ a }$ when in state $\displaystyle{ s }$.[8]:61 There are also non-probabilistic policies.

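A stochastic policy of this form can be represented as a simple table of probabilities $\displaystyle{ \pi(a,s) }$; the sketch below (with made-up numbers) samples an action from such a table.

```python
import random

# pi[s][a] = Pr(a_t = a | s_t = s); each row sums to 1 (illustrative numbers).
pi = {
    "s0": {"left": 0.3, "right": 0.7},
    "s1": {"left": 1.0, "right": 0.0},   # a deterministic row is also a valid policy
}

def sample_action(pi, s):
    actions, probs = zip(*pi[s].items())
    return random.choices(actions, weights=probs)[0]

print(sample_action(pi, "s0"))
```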

#### State-value function

Value function $\displaystyle{ V_\pi(s) }$ is defined as the expected return starting with state $\displaystyle{ s }$, i.e. $\displaystyle{ s_0 = s }$, and successively following policy $\displaystyle{ \pi }$. Hence, roughly speaking, the value function estimates "how good" it is to be in a given state.[8]:60

$\displaystyle{ V_\pi(s) = \operatorname E[R] = \operatorname E\left[\sum_{t=0}^\infty \gamma^t r_t\mid s_0 = s\right], }$

where the random variable $\displaystyle{ R }$ denotes the return, and is defined as the sum of future discounted rewards

$\displaystyle{ R=\sum_{t=0}^\infty \gamma^t r_t, }$

where $\displaystyle{ r_t }$ is the reward at step $\displaystyle{ t }$, $\displaystyle{ \gamma \in [0,1) }$ is the discount-rate.

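As a small worked example, the discounted return can be computed directly from a finite reward sequence (the rewards below are made up, and a finite horizon stands in for the infinite sum):

```python
def discounted_return(rewards, gamma=0.9):
    """R = sum_t gamma^t * r_t for a finite reward sequence."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# Three steps of reward 1, 0, 2 with gamma = 0.9: 1 + 0.9*0 + 0.81*2 = 2.62
print(discounted_return([1.0, 0.0, 2.0], gamma=0.9))
```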

The algorithm must find a policy with maximum expected return. From the theory of MDPs it is known that, without loss of generality, the search can be restricted to the set of so-called stationary policies. A policy is stationary if the action-distribution returned by it depends only on the last state visited (from the agent's observation history). The search can be further restricted to deterministic stationary policies. A deterministic stationary policy deterministically selects actions based on the current state. Since any such policy can be identified with a mapping from the set of states to the set of actions, these policies can be identified with such mappings with no loss of generality.

### Brute force

The brute force approach entails two steps (a small sketch follows the list):

• For each possible policy, sample returns while following it
• Choose the policy with the largest expected return
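
A sketch of these two steps, assuming a small set of candidate policies and a stand-in simulator that returns sampled returns for a policy (both the candidates and the simulator are hypothetical):

```python
import random

def estimate_return(policy, rollouts=100):
    """Stand-in simulator: sample returns while following the given policy.
    Here the 'environment' is just a toy function of the policy parameter."""
    return sum(random.gauss(policy["bias"], 1.0) for _ in range(rollouts)) / rollouts

# Step 1: sample returns for each candidate policy; step 2: keep the best one.
candidates = [{"name": "pi_a", "bias": 0.2}, {"name": "pi_b", "bias": 1.0}]
best = max(candidates, key=estimate_return)
print(best["name"])   # the policy with the largest estimated expected return
```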

One problem with this is that the number of policies can be large, or even infinite. Another is that variance of the returns may be large, which requires many samples to accurately estimate the return of each policy.

These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this are value function estimation and direct policy search.

### Value function

Value function approaches attempt to find a policy that maximizes the return by maintaining a set of estimates of expected returns for some policy (usually either the "current" [on-policy] or the optimal [off-policy] one).

These methods rely on the theory of MDPs, where optimality is defined in a sense that is stronger than the above one: A policy is called optimal if it achieves the best expected return from any initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found amongst stationary policies.

To define optimality in a formal manner, define the value of a policy $\displaystyle{ \pi }$ by

$\displaystyle{ V^{\pi} (s) = E[R\mid s,\pi], }$

where $\displaystyle{ R }$ stands for the return associated with following $\displaystyle{ \pi }$ from the initial state $\displaystyle{ s }$. Defining $\displaystyle{ V^*(s) }$ as the maximum possible value of $\displaystyle{ V^\pi(s) }$, where $\displaystyle{ \pi }$ is allowed to change,

$\displaystyle{ V^*(s) = \max_\pi V^\pi(s). }$

A policy that achieves these optimal values in each state is called optimal. Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return $\displaystyle{ \rho^\pi }$, since $\displaystyle{ \rho^\pi = E[ V^\pi(S) ] }$, where $\displaystyle{ S }$ is a state randomly sampled from the initial-state distribution $\displaystyle{ \mu }$.

Although state-values suffice to define optimality, it is useful to define action-values. Given a state $\displaystyle{ s }$, an action $\displaystyle{ a }$ and a policy $\displaystyle{ \pi }$, the action-value of the pair $\displaystyle{ (s,a) }$ under $\displaystyle{ \pi }$ is defined by

$\displaystyle{ Q^\pi(s,a) = \operatorname E[R\mid s,a,\pi],\, }$

where $\displaystyle{ R }$ now stands for the random return associated with first taking action $\displaystyle{ a }$ in state $\displaystyle{ s }$ and following $\displaystyle{ \pi }$, thereafter.

The theory of MDPs states that if $\displaystyle{ \pi^* }$ is an optimal policy, we act optimally (take the optimal action) by choosing the action from $\displaystyle{ Q^{\pi^*}(s,\cdot) }$ with the highest value at each state, $\displaystyle{ s }$. The action-value function of such an optimal policy ($\displaystyle{ Q^{\pi^*} }$) is called the optimal action-value function and is commonly denoted by $\displaystyle{ Q^* }$. In summary, the knowledge of the optimal action-value function alone suffices to know how to act optimally.

Assuming full knowledge of the MDP, the two basic approaches to compute the optimal action-value function are value iteration and policy iteration. Both algorithms compute a sequence of functions $\displaystyle{ Q_k }$ ($\displaystyle{ k=0,1,2,\ldots }$) that converge to $\displaystyle{ Q^* }$. Computing these functions involves computing expectations over the whole state-space, which is impractical for all but the smallest (finite) MDPs. In reinforcement learning methods, expectations are approximated by averaging over samples and using function approximation techniques to cope with the need to represent value functions over large state-action spaces.

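As an illustration of value iteration, the sketch below repeatedly applies the Bellman optimality backup $\displaystyle{ Q_{k+1}(s,a) = \sum_{s'} P_a(s,s')\left[R_a(s,s') + \gamma \max_{a'} Q_k(s',a')\right] }$ to a tiny, made-up finite MDP; the model and numbers are illustrative only.

```python
# Q_{k+1}(s, a) = sum_{s'} P_a(s, s') * (R_a(s, s') + gamma * max_{a'} Q_k(s', a'))
states, actions, gamma = ["s0", "s1"], ["stay", "go"], 0.9

# Made-up deterministic transition model and rewards, for illustration only.
P = {("stay", "s0"): {"s0": 1.0}, ("go", "s0"): {"s1": 1.0},
     ("stay", "s1"): {"s1": 1.0}, ("go", "s1"): {"s0": 1.0}}
R = {("stay", "s0", "s0"): 0.0, ("go", "s0", "s1"): 1.0,
     ("stay", "s1", "s1"): 2.0, ("go", "s1", "s0"): 0.0}

Q = {(s, a): 0.0 for s in states for a in actions}
for k in range(200):                      # iterate until (approximately) converged to Q*
    Q = {(s, a): sum(p * (R[(a, s, s2)] + gamma * max(Q[(s2, a2)] for a2 in actions))
                     for s2, p in P[(a, s)].items())
         for s in states for a in actions}

print({sa: round(v, 2) for sa, v in Q.items()})
```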

#### Monte Carlo methods

Monte Carlo methods can be used in an algorithm that mimics policy iteration. Policy iteration consists of two steps: policy evaluation and policy improvement.

Monte Carlo is used in the policy evaluation step. In this step, given a stationary, deterministic policy $\displaystyle{ \pi }$, the goal is to compute the function values $\displaystyle{ Q^\pi(s,a) }$ (or a good approximation to them) for all state-action pairs $\displaystyle{ (s,a) }$. Assume (for simplicity) that the MDP is finite, that sufficient memory is available to accommodate the action-values, and that the problem is episodic, with each episode followed by a new one that starts from some random initial state. Then, the estimate of the value of a given state-action pair $\displaystyle{ (s,a) }$ can be computed by averaging the sampled returns that originated from $\displaystyle{ (s,a) }$ over time. Given sufficient time, this procedure can thus construct a precise estimate $\displaystyle{ Q }$ of the action-value function $\displaystyle{ Q^\pi }$. This finishes the description of the policy evaluation step.

In the policy improvement step, the next policy is obtained by computing a greedy policy with respect to $\displaystyle{ Q }$: Given a state $\displaystyle{ s }$, this new policy returns an action that maximizes $\displaystyle{ Q(s,\cdot) }$. In practice lazy evaluation can defer the computation of the maximizing actions to when they are needed.

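A minimal sketch of these two steps follows: every-visit Monte Carlo evaluation averages sampled returns per state-action pair, and the improvement step picks a greedy action in each state. The episodes below are made up and stand in for interaction with an episodic MDP under $\displaystyle{ \pi }$.

```python
from collections import defaultdict

def mc_policy_evaluation(episodes, gamma=0.9):
    """Estimate Q^pi(s, a) by averaging sampled returns that originate from (s, a).
    Each episode is a list of (state, action, reward) triples."""
    returns_sum, returns_count = defaultdict(float), defaultdict(int)
    for episode in episodes:
        G = 0.0
        for s, a, r in reversed(episode):        # accumulate the return backwards
            G = r + gamma * G
            returns_sum[(s, a)] += G
            returns_count[(s, a)] += 1
    return {sa: returns_sum[sa] / returns_count[sa] for sa in returns_sum}

def greedy_policy(Q, states, actions):
    """Policy improvement: in each state pick an action maximizing Q(s, .)."""
    return {s: max(actions, key=lambda a: Q.get((s, a), float("-inf"))) for s in states}

# Two made-up episodes of (state, action, reward) triples, for illustration only.
episodes = [[("s0", "go", 1.0), ("s1", "stay", 2.0)],
            [("s0", "stay", 0.0), ("s0", "go", 1.0), ("s1", "stay", 2.0)]]
Q = mc_policy_evaluation(episodes)
print(greedy_policy(Q, ["s0", "s1"], ["stay", "go"]))
```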

Problems with this procedure include:

• The procedure may spend too much time evaluating a suboptimal policy.
• It uses samples inefficiently in that a long trajectory improves the estimate only of the single state-action pair that started the trajectory.
• When the returns along the trajectories have high variance, convergence is slow.
• It works in episodic problems only.
• It works in small, finite MDPs only.

#### Temporal difference methods

The first problem is corrected by allowing the procedure to change the policy (at some or all states) before the values settle. This too may be problematic as it might prevent convergence. Most current algorithms do this, giving rise to the class of generalized policy iteration algorithms. Many actor critic methods belong to this category.

The second issue can be corrected by allowing trajectories to contribute to any state-action pair in them. This may also help to some extent with the third problem, although a better solution when returns have high variance is Sutton's temporal difference (TD) methods, which are based on the recursive Bellman equation.[9] The computation in TD methods can be incremental (when after each transition the memory is changed and the transition is thrown away), or batch (when the transitions are batched and the estimates are computed once based on the batch). Batch methods, such as the least-squares temporal difference method,[10] may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches. Methods based on temporal differences also overcome the fourth issue.

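As an example of an incremental temporal-difference method, the standard tabular TD(0) rule for state values, $\displaystyle{ V(s) \leftarrow V(s) + \alpha\left[r + \gamma V(s') - V(s)\right] }$, can be applied to a stream of transitions, discarding each transition after the update; the transitions below are made up.

```python
from collections import defaultdict

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One incremental TD(0) backup: move V(s) toward the bootstrapped target
    r + gamma * V(s'), then the transition can be thrown away."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

V = defaultdict(float)
transitions = [("s0", 1.0, "s1"), ("s1", 0.0, "s0"), ("s0", 1.0, "s1")]
for s, r, s_next in transitions:
    td0_update(V, s, r, s_next)
print(dict(V))
```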

In order to address the fifth issue, function approximation methods are used. Linear function approximation starts with a mapping $\displaystyle{ \phi }$ that assigns a finite-dimensional vector to each state-action pair. Then, the action values of a state-action pair $\displaystyle{ (s,a) }$ are obtained by linearly combining the components of $\displaystyle{ \phi(s,a) }$ with some weights $\displaystyle{ \theta }$:

$\displaystyle{ Q(s,a) = \sum_{i=1}^d \theta_i \phi_i(s,a). }$

The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored.

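A sketch of the linear parameterization $\displaystyle{ Q(s,a) = \sum_i \theta_i \phi_i(s,a) }$ follows; the feature vector below is a made-up stand-in, and learning would adjust $\displaystyle{ \theta }$ rather than a table of action-values.

```python
def q_value(theta, phi):
    """Linear action-value: Q(s, a) = sum_i theta_i * phi_i(s, a)."""
    return sum(t * f for t, f in zip(theta, phi))

# Hypothetical 3-dimensional feature vector phi(s, a) and weight vector theta.
phi_sa = [1.0, 0.5, -2.0]
theta = [0.2, 0.4, 0.1]
print(q_value(theta, phi_sa))   # 0.2*1.0 + 0.4*0.5 + 0.1*(-2.0) = 0.2
```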

Value iteration can also be used as a starting point, giving rise to the Q-learning algorithm and its many variants.[12]
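
The core of tabular Q-learning is the standard update $\displaystyle{ Q(s,a) \leftarrow Q(s,a) + \alpha\left[r + \gamma \max_{a'} Q(s',a') - Q(s,a)\right] }$; the sketch below applies it to a single made-up transition.

```python
from collections import defaultdict

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One Q-learning backup toward the off-policy target
    r + gamma * max_{a'} Q(s', a')."""
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

Q = defaultdict(float)
q_learning_update(Q, "s0", "go", 1.0, "s1", actions=["stay", "go"])
print(Q[("s0", "go")])   # 0.1 * (1.0 + 0.9 * 0 - 0) = 0.1
```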

The problem with using action-values is that they may need highly precise estimates of the competing action values that can be hard to obtain when the returns are noisy, though this problem is mitigated to some extent by temporal difference methods. Using the so-called compatible function approximation method compromises generality and efficiency. Another problem specific to TD comes from their reliance on the recursive Bellman equation. Most TD methods have a so-called $\displaystyle{ \lambda }$ parameter $\displaystyle{ (0\le \lambda\le 1) }$ that can continuously interpolate between Monte Carlo methods that do not rely on the Bellman equations and the basic TD methods that rely entirely on the Bellman equations. This can be effective in palliating this issue.

### Direct policy search

An alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of stochastic optimization. The two approaches available are gradient-based and gradient-free methods.

Gradient-based methods (policy gradient methods) start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector $\displaystyle{ \theta }$, let $\displaystyle{ \pi_\theta }$ denote the policy associated to $\displaystyle{ \theta }$. Defining the performance function by

$\displaystyle{ \rho(\theta) = \rho^{\pi_\theta}, }$

under mild conditions this function will be differentiable as a function of the parameter vector $\displaystyle{ \theta }$. If the gradient of $\displaystyle{ \rho }$ was known, one could use gradient ascent. Since an analytic expression for the gradient is not available, only a noisy estimate is available. Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams' REINFORCE method[14] (which is known as the likelihood ratio method in the simulation-based optimization literature).[15] Policy search methods have been used in the robotics context.[16] Many policy search methods may get stuck in local optima (as they are based on local search).
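
As an illustration of a likelihood-ratio (REINFORCE-style) gradient estimate, the sketch below uses a softmax policy over two actions in a one-step problem and the estimator $\displaystyle{ \nabla_\theta \rho \approx \frac{1}{N}\sum \nabla_\theta \log \pi_\theta(a)\, r }$; the toy reward function and all numbers are made up.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_gradient(theta, reward_fn, num_samples=1000):
    """Likelihood-ratio estimate of the policy gradient for a one-step problem:
    grad_i ~ E[ (1{a=i} - pi_theta(i)) * r ] with actions sampled from pi_theta."""
    probs = softmax(theta)
    grad = [0.0] * len(theta)
    for _ in range(num_samples):
        a = random.choices(range(len(theta)), weights=probs)[0]
        r = reward_fn(a)
        for i in range(len(theta)):
            grad[i] += ((1.0 if i == a else 0.0) - probs[i]) * r
    return [g / num_samples for g in grad]

# Toy reward: action 1 is better on average; gradient ascent should raise theta[1].
reward = lambda a: 1.0 if a == 1 else 0.0
theta = [0.0, 0.0]
g = reinforce_gradient(theta, reward)
theta = [t + 0.5 * gi for t, gi in zip(theta, g)]   # one noisy gradient-ascent step
print(theta)
```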

A large class of methods avoids relying on gradient information. These include simulated annealing, cross-entropy search or methods of evolutionary computation. Many gradient-free methods can achieve (in theory and in the limit) a global optimum.

Policy search methods may converge slowly given noisy data. For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large. Value-function based methods that rely on temporal differences might help in this case. In recent years, actor–critic methods have been proposed and performed well on various problems.[20]

## Theory

Both the asymptotic and finite-sample behavior of most algorithms is well understood. Algorithms with provably good online performance (addressing the exploration issue) are known.

Efficient exploration of MDPs is given in Burnetas and Katehakis (1997).[21] Finite-time performance bounds have also appeared for many algorithms, but these bounds are expected to be rather loose and thus more work is needed to better understand the relative advantages and limitations.

For incremental algorithms, asymptotic convergence issues have been settled. Temporal-difference-based algorithms converge under a wider set of conditions than was previously possible (for example, when used with arbitrary, smooth function approximation).

## Research

Research topics include

• adaptive methods that work with fewer (or no) parameters under a large number of conditions
• addressing the exploration problem in large MDPs
• large-scale empirical evaluations
• learning and acting under partial information (e.g., using predictive state representation)
• modular and hierarchical reinforcement learning[23]
• improving existing value-function and policy search methods
• algorithms that work well with large (or continuous) action spaces
• transfer learning[24]
• lifelong learning
• efficient sample-based planning (e.g., based on Monte Carlo tree search).
• bug detection in software projects[25]
• Intrinsic motivation which differentiates information-seeking, curiosity-type behaviours from task-dependent goal-directed behaviours (typically) by introducing a reward function based on maximising novel information[26][27][28]
• Multiagent or distributed reinforcement learning is a topic of interest. Applications are expanding.[29]
• Actor-critic reinforcement learning
• Reinforcement learning algorithms such as TD learning are under investigation as a model for dopamine-based learning in the brain. In this model, the dopaminergic projections from the substantia nigra to the basal ganglia function as the prediction error. Reinforcement learning has been used as a part of the model for human skill learning, especially in relation to the interaction between implicit and explicit learning in skill acquisition (the first publication on this application was in 1995–1996).[30]

## Comparison of reinforcement learning algorithms

| Algorithm | Description | Model | Policy | Action Space | State Space | Operator |
|---|---|---|---|---|---|---|
| Monte Carlo | Every visit to Monte Carlo | Model-Free | Either | Discrete | Discrete | Sample-means |
| Q-learning | State–action–reward–state | Model-Free | Off-policy | Discrete | Discrete | Q-value |
| SARSA | State–action–reward–state–action | Model-Free | On-policy | Discrete | Discrete | Q-value |
| Q-learning - Lambda | State–action–reward–state with eligibility traces | Model-Free | Off-policy | Discrete | Discrete | Q-value |
| SARSA - Lambda | State–action–reward–state–action with eligibility traces | Model-Free | On-policy | Discrete | Discrete | Q-value |
| DQN | Deep Q Network | Model-Free | Off-policy | Discrete | Continuous | Q-value |
| DDPG | Deep Deterministic Policy Gradient | Model-Free | Off-policy | Continuous | Continuous | Q-value |
| A3C | Asynchronous Advantage Actor-Critic Algorithm | Model-Free | On-policy | Continuous | Continuous | Advantage |
| NAF | Q-Learning with Normalized Advantage Functions | Model-Free | Off-policy | Continuous | Continuous | Advantage |
| TRPO | Trust Region Policy Optimization | Model-Free | On-policy | Continuous | Continuous | Advantage |
| PPO | Proximal Policy Optimization | Model-Free | On-policy | Continuous | Continuous | Advantage |
| TD3 | Twin Delayed Deep Deterministic Policy Gradient | Model-Free | Off-policy | Continuous | Continuous | Q-value |
| SAC | Soft Actor-Critic | Model-Free | Off-policy | Continuous | Continuous | Advantage |

### Deep reinforcement learning

This approach extends reinforcement learning by using a deep neural network and without explicitly designing the state space.[33] The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning or end-to-end reinforcement learning.[34]

### Inverse reinforcement learning

In inverse reinforcement learning (IRL), no reward function is given. Instead, the reward function is inferred given an observed behavior from an expert. The idea is to mimic observed behavior, which is often optimal or close to optimal.[35]

### Apprenticeship learning

In apprenticeship learning, an expert demonstrates the target behavior. The system tries to recover the policy via observation.

Q-learning

SARSA

## References

1. Kaelbling, Leslie P.; Littman, Michael L.; Moore, Andrew W. (1996). "Reinforcement Learning: A Survey". Journal of Artificial Intelligence Research. 4: 237–285. arXiv:cs/9605103. doi:10.1613/jair.301. Archived from the original on 2001-11-20.
2. van Otterlo, M.; Wiering, M. (2012). Reinforcement learning and markov decision processes. Adaptation, Learning, and Optimization. 12. pp. 3–42. doi:10.1007/978-3-642-27645-3_1. ISBN 978-3-642-27644-6.
3. Gosavi, Abhijit (2003). Simulation-based Optimization: Parametric Optimization Techniques and Reinforcement. Operations Research/Computer Science Interfaces Series. Springer. ISBN 978-1-4020-7454-7.
4. Gosavi, Abhijit (2003). Simulation-based Optimization: Parametric Optimization Techniques and Reinforcement. Operations Research/Computer Science Interfaces Series. Springer. ISBN 978-1-4020-7454-7.
5. Burnetas, Apostolos N.; Katehakis, Michael N. (1997), "Optimal adaptive policies for Markov Decision Processes", Mathematics of Operations Research, 22: 222–255, doi:10.1287/moor.22.1.222
6. Burnetas, Apostolos N.; Katehakis, Michael N. (1997), "Optimal adaptive policies for Markov Decision Processes", Mathematics of Operations Research, 22: 222–255, doi:10.1287/moor.22.1.222
7. Tokic, Michel; Palm, Günther (2011), "Value-Difference Based Exploration: Adaptive Control Between Epsilon-Greedy and Softmax" (PDF), KI 2011: Advances in Artificial Intelligence, Lecture Notes in Computer Science, 7006, Springer, pp. 335–346, ISBN 978-3-642-24455-1
8. Sutton, Richard S. (1984). Temporal Credit Assignment in Reinforcement Learning (PhD thesis). University of Massachusetts, Amherst, MA.
9. Bradtke, Steven J.; Barto, Andrew G. (1996). "Learning to predict by the method of temporal differences". Machine Learning. 22: 33–57. CiteSeerX 10.1.1.143.857. doi:10.1023/A:1018056104778.
10. Bradtke, Steven J.; Barto, Andrew G. (1996). "Learning to predict by the method of temporal differences". Machine Learning. 22: 33–57. CiteSeerX 10.1.1.143.857. doi:10.1023/A:1018056104778.
11. Watkins, Christopher J.C.H. (1989). Learning from Delayed Rewards (PDF) (PhD thesis). King’s College, Cambridge, UK.
12. Watkins, Christopher J.C.H. (1989). Learning from Delayed Rewards (PDF) (PhD thesis). King’s College, Cambridge, UK.
13. Williams, Ronald J. (1987). "A class of gradient-estimating algorithms for reinforcement learning in neural networks". Proceedings of the IEEE First International Conference on Neural Networks. CiteSeerX 10.1.1.129.8871.
14. Peters, Jan; Vijayakumar, Sethu; Schaal, Stefan (2003). "Reinforcement Learning for Humanoid Robotics" (PDF). IEEE-RAS International Conference on Humanoid Robots.
15. Deisenroth, Marc Peter; Neumann, Gerhard; Peters, Jan (2013). A Survey on Policy Search for Robotics. Foundations and Trends in Robotics. 2. NOW Publishers. pp. 1–142. doi:10.1561/2300000021.
16. Williams, Ronald J. (1987). "A class of gradient-estimating algorithms for reinforcement learning in neural networks". Proceedings of the IEEE First International Conference on Neural Networks. CiteSeerX 10.1.1.129.8871.
17. Peters, Jan; Vijayakumar, Sethu; Schaal, Stefan (2003). "Reinforcement Learning for Humanoid Robotics" (PDF). IEEE-RAS International Conference on Humanoid Robots.
18. Deisenroth, Marc Peter; Neumann, Gerhard; Peters, Jan (2013). A Survey on Policy Search for Robotics. Foundations and Trends in Robotics. 2. NOW Publishers. pp. 1–142. doi:10.1561/2300000021.
19. Juliani, Arthur (2016-12-17). "Simple Reinforcement Learning with Tensorflow Part 8: Asynchronous Actor-Critic Agents (A3C)". Medium. Retrieved 2018-02-22.
20. Burnetas, Apostolos N.; Katehakis, Michael N. (1997), "Optimal adaptive policies for Markov Decision Processes", Mathematics of Operations Research, 22: 222–255, doi:10.1287/moor.22.1.222
21. Burnetas, Apostolos N.; Katehakis, Michael N. (1997), "Optimal adaptive policies for Markov Decision Processes", Mathematics of Operations Research, 22: 222–255, doi:10.1287/moor.22.1.222
22. Kulkarni, Tejas D.; Narasimhan, Karthik R.; Saeedi, Ardavan; Tenenbaum, Joshua B. (2016). "Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation". Proceedings of the 30th International Conference on Neural Information Processing Systems. NIPS'16. USA: Curran Associates Inc.: 3682–3690. arXiv:1604.06057. Bibcode:2016arXiv160406057K. ISBN 978-1-5108-3881-9.
23. George Karimpanal, Thommen; Bouffanais, Roland (2019). "Self-organizing maps for storage and transfer of knowledge in reinforcement learning". Adaptive Behavior (in English). 27 (2): 111–126. arXiv:1811.08318. doi:10.1177/1059712318818568. ISSN 1059-7123.
24. "On the Use of Reinforcement Learning for Testing Game Mechanics : ACM - Computers in Entertainment". cie.acm.org (in English). Retrieved 2018-11-27.
25. Kaplan, F. and Oudeyer, P. (2004). Maximizing learning progress: an internal reward system for development. Embodied artificial intelligence, pages 629–629.
26. Klyubin, A., Polani, D., and Nehaniv, C. (2008). Keep your options open: an information-based driving principle for sensorimotor systems. PloS ONE, 3(12):e4018. https://dx.doi.org/10.1371%2Fjournal.pone.0004018
27. Barto, A. G. (2013). “Intrinsic motivation and reinforcement learning,” in Intrinsically Motivated Learning in Natural and Artificial Systems (Berlin; Heidelberg: Springer), 17–47
28. "Reinforcement Learning / Successes of Reinforcement Learning". umichrl.pbworks.com. Retrieved 2017-08-06.
29. [1] Archived at the Internet Archive; archive date 2017-04-26.
30. George Karimpanal, Thommen; Bouffanais, Roland (2019). "Self-organizing maps for storage and transfer of knowledge in reinforcement learning". Adaptive Behavior (in English). 27 (2): 111–126. arXiv:1811.08318. doi:10.1177/1059712318818568. ISSN 1059-7123.
31. "On the Use of Reinforcement Learning for Testing Game Mechanics : ACM - Computers in Entertainment". cie.acm.org (in English). Retrieved 2018-11-27.
32. Francois-Lavet, Vincent; et al. (2018). "An Introduction to Deep Reinforcement Learning". Foundations and Trends in Machine Learning. 11 (3–4): 219–354. arXiv:1811.12560. Bibcode:2018arXiv181112560F. doi:10.1561/2200000071.
33. Mnih, Volodymyr; et al. (2015). "Human-level control through deep reinforcement learning". Nature. 518 (7540): 529–533. Bibcode:2015Natur.518..529M. doi:10.1038/nature14236. PMID 25719670.
34. Ng, A. Y.; Russell, S. J. (2000). "Algorithms for Inverse Reinforcement Learning". Proceeding ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning. pp. 663–670. ISBN 1-55860-707-2.
35. Ng, A. Y.; Russell, S. J. (2000). "Algorithms for Inverse Reinforcement Learning". Proceeding ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning. pp. 663–670. ISBN 1-55860-707-2.