Reinforcement Learning in Practice: Policy Gradient - CartPole Game Showcase

Abstract: The agent learns by interacting with the environment: it takes actions based on the environment's state (or the observation it receives), and uses the reward fed back by the environment to steer itself toward better actions.

This article is shared from the Huawei Cloud Community post "Reinforcement Learning from Basic to Advanced - Case and Practice [5.1]: Policy Gradient - CartPole Game Show", author: Ting.

  • Reinforcement learning (RL) is an area of machine learning, distinct from supervised and unsupervised learning, that studies how to act in an environment so as to maximize expected cumulative reward.
  • Basic loop: the agent learns in the environment, takes actions based on the environment's state (or its observation), and uses the reward fed back by the environment to guide it toward better actions.

For example, in this project's CartPole mini-game, the agent controls the cart under the pole and has two actions: push the cart left or push it right, with the goal of keeping the pole balanced.
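As a quick orientation (a minimal sketch assuming the classic Gym API that the rest of this post uses), you can inspect the environment's spaces directly:

import gym

# Create the CartPole environment used in this project.
env = gym.make('CartPole-v0')

# The observation is a 4-dimensional vector:
# [cart position, cart velocity, pole angle, pole angular velocity].
print(env.observation_space.shape)   # (4,)

# The discrete action space has two actions: push the cart left or right.
print(env.action_space.n)            # 2

obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())  # take one random step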

1. Introduction to Policy Gradient

  • In reinforcement learning there are two broad families of methods: value-based and policy-based.
    • Typical value-based algorithms are Q-learning and SARSA, which first optimize the Q function and then derive the optimal policy from it.
    • A typical policy-based algorithm is Policy Gradient, which optimizes the policy function directly.
  • When a neural network is used to fit the policy function, the policy gradient must be computed in order to optimize the policy network.
    • The optimization objective is the expected return of the policy π(a|s): the sum over all trajectories of the return R weighted by each trajectory's probability p. When N is large enough, it can be approximated by the average over N sampled episodes.
    • Differentiating this objective with respect to the parameters θ gives the policy gradient, written out below.
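A standard way to write these two statements (the REINFORCE formulation; the exact notation may differ from the original course materials) is:

$$ J(\theta) \;=\; \mathbb{E}_{\tau \sim \pi_\theta}\!\big[R(\tau)\big] \;=\; \sum_{\tau} p_\theta(\tau)\, R(\tau) \;\approx\; \frac{1}{N}\sum_{n=1}^{N} R\big(\tau^{(n)}\big) $$

$$ \nabla_\theta J(\theta) \;\approx\; \frac{1}{N}\sum_{n=1}^{N}\sum_{t=1}^{T_n} \nabla_\theta \log \pi_\theta\!\big(a_t^{(n)} \mid s_t^{(n)}\big)\, G_t^{(n)} $$

In the code below, the whole-trajectory return R(τ) is replaced by the per-step return-to-go G_t, computed by calc_reward_to_go.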
## Install dependencies
!pip install pygame
!pip install gym
!pip install atari_py
!pip install parl
import gym
import os
import random
import collections
import paddle
import paddle.nn as nn
import numpy as np
import paddle.nn.functional as F

2. Model

The model here can be built from different neural network components, depending on your needs.

PolicyGradient defines the forward network; you are free to customize the network structure.

class PolicyGradient(nn.Layer):
    def __init__(self, act_dim):
        super(PolicyGradient, self).__init__()
        self.act_dim = act_dim
        hid1_size = act_dim * 10
        # CartPole observations are 4-dimensional, hence in_features=4
        self.linear1 = nn.Linear(in_features=4, out_features=hid1_size)
        self.linear2 = nn.Linear(in_features=hid1_size, out_features=act_dim)

    def forward(self, obs):
        out = self.linear1(obs)
        out = paddle.tanh(out)
        out = self.linear2(out)
        out = F.softmax(out)  # action probabilities
        return out
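As a quick sanity check (a hypothetical snippet, not part of the original notebook), the network maps a 4-dimensional CartPole observation to a probability distribution over the two actions:

# Hypothetical sanity check: one dummy 4-dimensional observation in, two action probabilities out.
model = PolicyGradient(act_dim=2)
fake_obs = paddle.to_tensor(np.random.rand(1, 4).astype('float32'))
probs = model(fake_obs)
print(probs.numpy())  # shape (1, 2); the two entries sum to 1 because of the softmax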

3. The agent's learning functions

This includes two parts: model exploration (sampling actions) and model training.

The Agent is responsible for the interaction between the algorithm and the environment. The data generated during this interaction is passed to the Algorithm to update the Model; data preprocessing is also usually defined here.

def sample(obs, MODEL):
    global ACTION_DIM
    obs = np.expand_dims(obs, axis=0)
    obs = paddle.to_tensor(obs, dtype='float32')
    act = MODEL(obs)
    act_prob = np.squeeze(act, axis=0)
    # sample an action according to the policy's probabilities
    act = np.random.choice(range(ACTION_DIM), p=act_prob.numpy())
    return act

def learn(obs, action, reward, MODEL):
    obs = np.array(obs).astype('float32')
    obs = paddle.to_tensor(obs)
    act_prob = MODEL(obs)
    action = paddle.to_tensor(action.astype('int32'))
    # negative log-probability of the action actually taken at each step
    log_prob = paddle.sum(-1.0 * paddle.log(act_prob) * F.one_hot(action, act_prob.shape[1]), axis=1)
    reward = paddle.to_tensor(reward.astype('float32'))
    cost = log_prob * reward
    cost = paddle.sum(cost)
    opt = paddle.optimizer.Adam(learning_rate=LEARNING_RATE,
                                parameters=MODEL.parameters())  # optimizer (dynamic graph)
    cost.backward()
    opt.step()
    opt.clear_grad()
    return cost.numpy()
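The loss that learn minimizes is the usual REINFORCE surrogate: each step's negative log-probability of the action actually taken, weighted by that step's return G_t,

$$ L(\theta) \;=\; \sum_{t} \big(-\log \pi_\theta(a_t \mid s_t)\big)\, G_t $$

so gradient descent on L(θ) is gradient ascent on the expected return. Note also that the Adam optimizer is re-created on every call to learn, which resets its moment estimates after each episode; the code still trains (as the log below shows), but constructing the optimizer once outside learn would be the more common layout.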

4. Episode rollout and evaluation

def run_train(env, MODEL):
    MODEL.train()
    obs_list, action_list, total_reward = [], [], []
    obs = env.reset()
    while True:
        # sample an action and step the game
        obs_list.append(obs)
        action = sample(obs, MODEL)  # sample an action from the policy
        action_list.append(action)
        obs, reward, isOver, info = env.step(action)
        total_reward.append(reward)
        # episode finished
        if isOver:
            break
    return obs_list, action_list, total_reward

def evaluate(model, env, render=False):
    model.eval()
    eval_reward = []
    for i in range(5):
        obs = env.reset()
        episode_reward = 0
        while True:
            obs = np.expand_dims(obs, axis=0)
            obs = paddle.to_tensor(obs, dtype='float32')
            action = model(obs)
            action = np.argmax(action.numpy())  # greedy action at evaluation time
            obs, reward, done, _ = env.step(action)
            episode_reward += reward
            if render:
                env.render()
            if done:
                break
        eval_reward.append(episode_reward)
    return np.mean(eval_reward)

5. Training and validation functions

Set the hyperparameters:

LEARNING_RATE = 0.001  # learning rate
OBS_DIM = None
ACTION_DIM = None

# Compute the return G_t for every step from an episode's list of per-step rewards
def calc_reward_to_go(reward_list, gamma=1.0):
    for i in range(len(reward_list) - 2, -1, -1):
        # G_t = r_t + gamma * r_{t+1} + ... = r_t + gamma * G_{t+1}
        reward_list[i] += gamma * reward_list[i + 1]  # G_t
    return np.array(reward_list)
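# Worked example (hypothetical input): with gamma=1.0,
#   calc_reward_to_go([1.0, 1.0, 1.0]) returns array([3., 2., 1.])
# because G_2 = 1, G_1 = 1 + G_2 = 2, and G_0 = 1 + G_1 = 3.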
def main():
    global OBS_DIM
    global ACTION_DIM

    train_step_list = []
    train_reward_list = []
    evaluate_step_list = []
    evaluate_reward_list = []

    # initialize the game environment
    env = gym.make('CartPole-v0')

    # observation shape and action dimension
    action_dim = env.action_space.n
    obs_dim = env.observation_space.shape[0]
    OBS_DIM = obs_dim
    ACTION_DIM = action_dim
    max_score = -int(1e4)

    # create the policy network (TARGET_MODEL is built the same way but is not used in this example)
    MODEL = PolicyGradient(ACTION_DIM)
    TARGET_MODEL = PolicyGradient(ACTION_DIM)

    # start training
    print("start training...")

    # train for 1000 episodes; the test episodes are not counted toward this total
    for i in range(1000):
        obs_list, action_list, reward_list = run_train(env, MODEL)
        if i % 10 == 0:
            print("Episode {}, Reward Sum {}.".format(i, sum(reward_list)))
        batch_obs = np.array(obs_list)
        batch_action = np.array(action_list)
        batch_reward = calc_reward_to_go(reward_list)
        cost = learn(batch_obs, batch_action, batch_reward, MODEL)
        if (i + 1) % 100 == 0:
            # render=True shows the rendering; it must be run locally, AI Studio cannot display it
            total_reward = evaluate(MODEL, env, render=False)
            print("Test reward: {}".format(total_reward))

if __name__ == '__main__':
    main()
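The notebook does not persist the trained policy. If you want to keep it, a minimal sketch using Paddle's standard state-dict API (added, for example, at the end of main; the file name is arbitrary) would be:

# Hypothetical addition at the end of main(): save and later reload the trained policy.
paddle.save(MODEL.state_dict(), 'pg_cartpole.pdparams')

restored = PolicyGradient(ACTION_DIM)
restored.set_state_dict(paddle.load('pg_cartpole.pdparams'))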

 

W0630 11:26:18.969960 322 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 11.2
W0630 11:26:18.974581 322 gpu_resources.cc:91] device: 0, cuDNN Version: 8.2.
start training...
Episode 0, Reward Sum 37.0.
Episode 10, Reward Sum 27.0.
Episode 20, Reward Sum 32.0.
Episode 30, Reward Sum 20.0.
Episode 40, Reward Sum 18.0.
Episode 50, Reward Sum 38.0.
Episode 60, Reward Sum 52.0.
Episode 70, Reward Sum 19.0.
Episode 80, Reward Sum 27.0.
Episode 90, Reward Sum 13.0.
Test reward: 42.8
Episode 100, Reward Sum 28.0.
Episode 110, Reward Sum 44.0.
Episode 120, Reward Sum 30.0.
Episode 130, Reward Sum 28.0.
Episode 140, Reward Sum 27.0.
Episode 150, Reward Sum 47.0.
Episode 160, Reward Sum 55.0.
Episode 170, Reward Sum 26.0.
Episode 180, Reward Sum 47.0.
Episode 190, Reward Sum 17.0.
Test reward: 42.8
Episode 200, Reward Sum 23.0.
Episode 210, Reward Sum 19.0.
Episode 220, Reward Sum 15.0.
Episode 230, Reward Sum 59.0.
Episode 240, Reward Sum 59.0.
Episode 250, Reward Sum 32.0.
Episode 260, Reward Sum 58.0.
Episode 270, Reward Sum 18.0.
Episode 280, Reward Sum 24.0.
Episode 290, Reward Sum 64.0.
Test reward: 116.8
Episode 300, Reward Sum 54.0.
Episode 310, Reward Sum 28.0.
Episode 320, Reward Sum 44.0.
Episode 330, Reward Sum 18.0.
Episode 340, Reward Sum 89.0.
Episode 350, Reward Sum 26.0.
Episode 360, Reward Sum 57.0.
Episode 370, Reward Sum 54.0.
Episode 380, Reward Sum 105.0.
Episode 390, Reward Sum 56.0.
Test reward: 94.0
Episode 400, Reward Sum 70.0.
Episode 410, Reward Sum 35.0.
Episode 420, Reward Sum 45.0.
Episode 430, Reward Sum 117.0.
Episode 440, Reward Sum 50.0.
Episode 450, Reward Sum 35.0.
Episode 460, Reward Sum 41.0.
Episode 470, Reward Sum 43.0.
Episode 480, Reward Sum 75.0.
Episode 490, Reward Sum 37.0.
Test reward: 57.6
Episode 500, Reward Sum 40.0.
Episode 510, Reward Sum 85.0.
Episode 520, Reward Sum 86.0.
Episode 530, Reward Sum 30.0.
Episode 540, Reward Sum 68.0.
Episode 550, Reward Sum 25.0.
Episode 560, Reward Sum 82.0.
Episode 570, Reward Sum 54.0.
Episode 580, Reward Sum 53.0.
Episode 590, Reward Sum 58.0.
Test reward: 147.2
Episode 600, Reward Sum 24.0.
Episode 610, Reward Sum 78.0.
Episode 620, Reward Sum 62.0.
Episode 630, Reward Sum 58.0.
Episode 640, Reward Sum 50.0.
Episode 650, Reward Sum 67.0.
Episode 660, Reward Sum 68.0.
Episode 670, Reward Sum 51.0.
Episode 680, Reward Sum 36.0.
Episode 690, Reward Sum 69.0.
Test reward: 84.2
Episode 700, Reward Sum 34.0.
Episode 710, Reward Sum 59.0.
Episode 720, Reward Sum 56.0.
Episode 730, Reward Sum 72.0.
Episode 740, Reward Sum 28.0.
Episode 750, Reward Sum 35.0.
Episode 760, Reward Sum 54.0.
Episode 770, Reward Sum 61.0.
Episode 780, Reward Sum 32.0.
Episode 790, Reward Sum 147.0.
Test reward: 123.0
Episode 800, Reward Sum 129.0.
Episode 810, Reward Sum 65.0.
Episode 820, Reward Sum 73.0.
Episode 830, Reward Sum 54.0.
Episode 840, Reward Sum 60.0.
Episode 850, Reward Sum 71.0.
Episode 860, Reward Sum 54.0.
Episode 870, Reward Sum 74.0.
Episode 880, Reward Sum 34.0.
Episode 890, Reward Sum 55.0.
Test reward: 104.8
Episode 900, Reward Sum 41.0.
Episode 910, Reward Sum 111.0.
Episode 920, Reward Sum 33.0.
Episode 930, Reward Sum 49.0.
Episode 940, Reward Sum 62.0.
Episode 950, Reward Sum 114.0.
Episode 960, Reward Sum 52.0.
Episode 970, Reward Sum 64.0.
Episode 980, Reward Sum 94.0.
Episode 990, Reward Sum 90.0.
Test reward: 72.2

The linked project can be run after forking it.

 

Click to follow and be the first to learn about Huawei Cloud's latest technologies~
