When converting a Tensor to an ndarray in TensorFlow, calling run or eval repeatedly inside a loop makes the code run slower and slower!

Question

  I have a requirement like this: I have a trained encoder model whose output is of type Tensor, and I want to convert it to type ndarray. After searching around, I found that sess.run() can evaluate a Tensor and return an ndarray, so I called sess.run() in my code and the data type conversion succeeded. However, the conversion is called inside a loop, i.e. on every iteration, and that is where the problem appeared: each iteration takes longer than the one before, which makes the subsequent training slower and slower. The first call to sess.run() took 0.17 s, and by the 100th call it was already 0.27 s; that is only 100 iterations, and with 10,000 training iterations there is no telling how long it would take, so this problem had to be solved!
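For reference, the basic conversion in the TF 1.x API looks like this; a minimal sketch of my own, not taken from the original code:

import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # a node in the graph (a Tensor)
with tf.Session() as sess:
    arr = sess.run(x)   # evaluates the Tensor and returns a NumPy ndarray
    print(type(arr))    # <class 'numpy.ndarray'>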

Problem cause

  If TensorFlow graph nodes keep being created inside a loop that then runs them, TensorFlow will run slower and slower. The specific issue is explained in the code comments; you can ignore the uncommented lines. The problem code is as follows:

import gym
from gym.spaces import Box
import numpy as np
from tensorflow import keras
import tensorflow as tf
import time

class MyWrapper(gym.ObservationWrapper):
    def __init__(self, env, encoder, latent_dim = 2):
        super().__init__(env)
        self._observation_space = Box(-np.inf, np.inf, shape=(7 + latent_dim,), dtype=np.float32)
        self.observation_space = self._observation_space
        self.encoder = encoder # the encoder model I trained beforehand
        tf.InteractiveSession()
        self.sess = tf.get_default_session()
        self.sess.run(tf.global_variables_initializer())

    def observation(self, obs):
        obs = np.reshape(obs, (1, -1))
        latent_z_tensor = self.encoder(obs)[2] # the problem is here: this line keeps creating new graph nodes on every call, so it gets slower and slower
        
        t = time.time() # time the run
        latent_z_arr = self.sess.run(latent_z_tensor) # each run rebuilds the graph above one more time
        print(time.time() - t) # time the run

        obs = np.reshape(obs, (-1,))
        latent_z_arr = np.reshape(latent_z_arr, (-1,))

        obs = obs.tolist()
        obs.extend(latent_z_arr.tolist())
        obs = np.array(obs)
        return obs
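One way to confirm this diagnosis (a minimal standalone sketch of my own, not part of the original post) is to count the operations in the default graph across iterations; if the count keeps growing, new nodes are being added on every pass:

import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()
x = np.ones((1, 7), dtype=np.float32)
for i in range(3):
    y = tf.reduce_sum(tf.constant(x) * 2.0)  # creates fresh graph nodes on every iteration
    sess.run(y)
    print(i, len(tf.get_default_graph().get_operations()))  # the op count keeps growing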

Solution

At initialization time, build the graph structure once, representing the observation obs with a tf.placeholder; at run time you only need to feed the actual data in. A concrete example of this scheme follows (you can focus only on the commented lines):

import gym
from gym.spaces import Box
import numpy as np
from tensorflow import keras
import tensorflow as tf
import time

class MyWrapper(gym.ObservationWrapper):
    def __init__(self, env, encoder, latent_dim = 2):
        super().__init__(env)
        self._observation_space = Box(-np.inf, np.inf, shape=(7 + latent_dim,), dtype=np.float32)
        self.observation_space = self._observation_space
        self.encoder = encoder
        tf.InteractiveSession()
        self.sess = tf.get_default_session()
        self.obs = tf.placeholder(dtype=tf.float32, shape=(1, 7)) # the key is these two lines: build the graph once at initialization, with a placeholder standing in for obs; at run time we only have to feed obs in
        self.latent_z_tensor = self.encoder(self.obs)[2] # build the graph at initialization
        self.sess.run(tf.global_variables_initializer())

    def observation(self, obs):
        obs = np.reshape(obs, (1, -1))
        t = time.time() # time the run
        latent_z_arr = self.sess.run(self.latent_z_tensor, feed_dict={self.obs: obs}) # here we only feed data; the graph is not rebuilt
        print(time.time() - t) # time the run

        obs = np.reshape(obs, (-1,))
        latent_z_arr = np.reshape(latent_z_arr, (-1,))

        obs = obs.tolist()
        obs.extend(latent_z_arr.tolist())
        obs = np.array(obs)
        return obs

With this change, the Tensor-to-ndarray conversion still works, and the code no longer runs slower and slower: each call now takes roughly constant time!
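As an extra safeguard (my own suggestion, not from the original post), you can finalize the graph once it is fully built; any later attempt to add nodes then raises a RuntimeError, so accidental graph growth like the above is caught immediately instead of degrading performance silently:

import tensorflow as tf

sess = tf.Session()
x = tf.placeholder(tf.float32, shape=(1, 7))
y = x * 2.0
sess.graph.finalize()  # lock the graph: adding ops from now on raises RuntimeError
# z = x + 1.0          # uncommenting this line would fail, exposing the bug early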

Source: blog.csdn.net/m0_59019651/article/details/125133422