RL (Chapter 1): The Reinforcement Learning Problem

This article is a set of intensive study notes, drawing mainly on Chapter 1 of Sutton and Barto's Reinforcement Learning: An Introduction, together with the OpenAI Gym library.

Gym library

At present, the most common tool for reinforcement learning programming practice is the Gym library released by OpenAI.


A notable feature of the Gym library is visualization: the interaction between a reinforcement learning algorithm and its environment can be rendered as an animation.
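As a minimal sketch of this workflow, the loop below runs a random agent for one episode in one of Gym's built-in environments. It assumes the classic Gym API, in which env.step returns a 4-tuple; newer versions of the library use a slightly different interface.

```python
import gym

# Create a built-in environment and run one episode with random actions.
env = gym.make("CartPole-v1")
obs = env.reset()

total_reward = 0.0
done = False
while not done:
    env.render()                        # visualize the interaction as animation
    action = env.action_space.sample()  # random agent: no learning yet
    obs, reward, done, info = env.step(action)
    total_reward += reward

print("episode return:", total_reward)
env.close()
```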

Reinforcement Learning (RL)

Characteristics of RL:

  • Reinforcement learning problems involve learning what to do—how to map situations to actions—so as to maximize a numerical reward signal.
  • Moreover, the learner is not told which actions to take, but instead must discover which actions yield the most reward by trying them out (trial and error).

One of the challenges that arise in reinforcement learning is the trade-off between exploration and exploitation. The agent has to exploit what it already knows in order to obtain reward, but it also has to explore in order to make better action selections in the future.
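A simple and widely used way to balance the two is ε-greedy action selection: exploit the best-known action most of the time, and explore a random one with small probability ε. A minimal sketch, assuming Q is some table of estimated action values (the table and action names below are made-up placeholders):

```python
import random

def epsilon_greedy(Q, actions, epsilon=0.1):
    """With probability epsilon pick a random action (explore);
    otherwise pick the action with the highest estimated value (exploit)."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[a])

# Q and the action names are made-up placeholders for illustration.
Q = {"left": 0.4, "right": 0.7}
print(epsilon_greedy(Q, ["left", "right"]))
```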

  • In the most interesting and challenging cases, actions may affect not only the immediate reward but also the next situation and, through that, all subsequent rewards (delayed reward).
  • It explicitly considers the whole problem of a goal-directed agent interacting with an uncertain environment.

A full specification of reinforcement learning problems in terms of optimal control of Markov decision processes must wait until Chapter 3, but the basic idea is simply to capture the most important aspects of the real problem (sensation, action, and goal / state, action, and reward) facing a learning agent interacting with its environment to achieve a goal. The formulation is intended to include these three aspects in their simplest possible forms without trivializing any of them. (The Markov decision process presents, in the most concise yet essential way, the sensation, actions, and goals an agent needs when interacting with its environment.)
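In code, this state-action-reward formulation is just a loop over time steps. The sketch below is schematic; `env_step` and `policy` are hypothetical placeholders for a concrete environment and decision rule:

```python
def run_episode(env_step, policy, initial_state, max_steps=1000):
    """Schematic agent-environment loop: observe a state, act, receive a reward.

    env_step(state, action) -> (reward, next_state, done) and
    policy(state) -> action are hypothetical placeholders.
    """
    state = initial_state
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(state)
        reward, state, done = env_step(state, action)
        total_reward += reward
        if done:
            break
    return total_reward
```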

Elements of Reinforcement Learning

  • Agent & Environment

  • Policy
    Roughly speaking, a policy is a mapping from perceived states of the environment to actions to be taken when in those states. In general, policies may be stochastic.
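For instance, a stochastic policy over a finite state space can be represented as a table mapping each state to a probability distribution over actions. The states, actions, and probabilities below are made-up placeholders:

```python
import random

# A stochastic policy as a table: state -> {action: probability}.
policy = {
    "s0": {"left": 0.8, "right": 0.2},
    "s1": {"left": 0.1, "right": 0.9},
}

def sample_action(policy, state):
    """Sample an action according to the policy's distribution at `state`."""
    actions, probs = zip(*policy[state].items())
    return random.choices(actions, weights=probs, k=1)[0]

print(sample_action(policy, "s0"))
```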

  • Reward signal
    A reward signal defines the goal in a reinforcement learning problem. On each time step, the environment sends the reinforcement learning agent a single number called the reward.
    In general, reward signals may be stochastic functions of the state of the environment and the actions taken.
    The agent’s sole objective is to maximize the total reward it receives over the long run.

Reinforcement learning is based on the "reward hypothesis": all problem-solving goals can be described as maximizing cumulative reward.
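Here "cumulative reward" is usually formalized as the return: the (often discounted) sum of rewards from a time step onward. A minimal sketch, assuming a discount factor gamma between 0 and 1:

```python
def discounted_return(rewards, gamma=0.99):
    """Compute G = r_1 + gamma*r_2 + gamma^2*r_3 + ... for a list of rewards."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Example: three steps of reward
print(discounted_return([1.0, 0.0, 2.0], gamma=0.9))  # 1.0 + 0.9*0.0 + 0.81*2.0 = 2.62
```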

  • Value function
    Whereas the reward signal indicates what is good in an immediate sense, a value function specifies what is good in the long run. Roughly speaking, the value of a state is the total amount of reward an agent can expect to accumulate over the future, starting from that state.

Action choices are made based on value judgments. We seek actions that bring about states of highest value, not highest reward, because these actions obtain the greatest amount of reward for us over the long run.

Unfortunately, it is much harder to determine values than it is to determine rewards. In fact, the most important component of almost all reinforcement learning algorithms we consider is a method for efficiently estimating values.
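As one illustration of such an estimation method, the temporal-difference TD(0) update nudges the value of a state toward the observed reward plus the estimated value of the next state. The step size alpha, discount gamma, and the placeholder states are assumptions made for this sketch:

```python
from collections import defaultdict

def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.99):
    """TD(0): move V(state) toward the bootstrapped target r + gamma*V(next_state)."""
    target = reward + gamma * V[next_state]
    V[state] += alpha * (target - V[state])

# V maps states to estimated values, defaulting to 0.0 for unseen states.
V = defaultdict(float)
td0_update(V, "s0", 1.0, "s1")  # "s0"/"s1" are made-up placeholder states
print(V["s0"])
```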

Notice that methods like policy gradient methods do not appeal to value functions. They estimate the directions in which the parameters should be adjusted in order to most rapidly improve a policy's performance. In fact, some of these methods take advantage of value function estimates to improve their gradient estimates.
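As a concrete instance, the classic REINFORCE update adjusts the policy parameters θ in the direction that makes actions followed by high returns more probable, where α is a step size, G_t is the return from time t, and π(a | s; θ) is the parameterized policy:

$$\theta \leftarrow \theta + \alpha \, G_t \, \nabla_{\theta} \log \pi(A_t \mid S_t;\, \theta)$$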

  • Model of the environment
    This is something that mimics the behavior of the environment, or more generally, that allows inferences to be made about how the environment will behave. Models are used for planning, by which we mean any way of deciding on a course of action by considering possible future situations before they are actually experienced.

Methods for solving reinforcement learning problems that use models and planning are called model-based methods, as opposed to simpler model-free methods, which are explicitly trial-and-error learners.
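A minimal sketch of the model-based idea: a learned model as a table mapping (state, action) to (reward, next state), used for one-step lookahead planning. All states, actions, and numbers here are made-up placeholders:

```python
# A tabular model: (state, action) -> (reward, next_state).
model = {
    ("s0", "left"):  (0.0, "s0"),
    ("s0", "right"): (1.0, "s1"),
}

def plan_one_step(model, V, state, actions, gamma=0.99):
    """One-step lookahead: simulate each action with the model and pick the
    one whose predicted reward plus discounted next-state value is largest."""
    def backup(a):
        reward, next_state = model[(state, a)]
        return reward + gamma * V.get(next_state, 0.0)
    return max(actions, key=backup)

V = {"s1": 5.0}  # placeholder value estimates
print(plan_one_step(model, V, "s0", ["left", "right"]))  # -> "right"
```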
