TensorFlow 2.0 Tutorial 30: Using tf.function and AutoGraph to Improve Code Performance

  In TensorFlow 2.0, eager execution is enabled by default. This gives users an intuitive and flexible interface (running one-off operations is easier and faster), but it can come at the expense of performance and deployability.

  To get peak performance and to make your model deployable anywhere, use tf.function to build graphs from your programs. Thanks to AutoGraph, a surprising amount of Python code can be compiled into efficient graphs with tf.function, but there are still some pitfalls to watch out for.

  Today we'll introduce tf.function and AutoGraph in TensorFlow 2.0.

  The following helper code is used to demonstrate the kinds of errors you may encounter.

  import contextlib
  import tensorflow as tf

  # Build a helper containing a context manager so it can be used in a with statement
  @contextlib.contextmanager
  def assert_raises(error_class):
      try:
          yield
      except error_class as e:
          print('Caught expected exception \n {}: {}'.format(error_class, e))
      except Exception as e:
          print('Got unexpected exception \n {}: {}'.format(type(e), e))
      else:
          raise Exception('Expected {} to be raised but no error was raised!'.format(
              error_class))

  tf.function

  A tf.function you define is just like a core TensorFlow operation: you can execute it eagerly, you can use it in a graph, and it has a gradient.

  # A tf.function behaves like a TensorFlow operation
  @tf.function
  def add(a, b):
      return a + b

  add(tf.ones([2, 2]), tf.ones([2, 2]))

  <tf.Tensor: ..., shape=(2, 2), dtype=float32, numpy=
  array([[2., 2.],
         [2., 2.]], dtype=float32)>

  # tf.function operations can have gradients computed through them
  @tf.function
  def add(a, b):
      return a + b

  v = tf.Variable(2.0)
  with tf.GradientTape() as tape:
      res = add(v, 1.0)
  tape.gradient(res, v)

  # tf.functions can call other tf.functions
  @tf.function
  def dense_layer(x, w, b):
      return add(tf.matmul(x, w), b)

  dense_layer(tf.ones([3, 2]), tf.ones([2, 2]), tf.ones([2]))

  <tf.Tensor: ..., shape=(3, 2), dtype=float32, numpy=
  array([[3., 3.],
         [3., 3.],
         [3., 3.]], dtype=float32)>

  Tracing and polymorphism

  Python's dynamic typing means you can call a function with arguments of many different types, and Python will do something different in each case.

  TensorFlow graphs, on the other hand, require static dtypes and shapes. tf.function bridges this gap by retracing the function when necessary to generate the correct graph. Most of the subtleties of tf.function stem from this retracing behavior.

  We can call the function with arguments of different types to see what happens.

  # Functions are polymorphic
  @tf.function
  def double(a):
      print('tracing variable:', a)
      return a + a

  print('result:', double(tf.constant(1)))
  print()
  print('result:', double(tf.constant(1.1)))
  print()
  print('result:', double(tf.constant('c')))
  print()

  tracing variable: Tensor("a:0", shape=(), dtype=int32)
  result: tf.Tensor(2, shape=(), dtype=int32)

  tracing variable: Tensor("a:0", shape=(), dtype=float32)
  result: tf.Tensor(2.2, shape=(), dtype=float32)

  tracing variable: Tensor("a:0", shape=(), dtype=string)
  result: tf.Tensor(b'cc', shape=(), dtype=string)

  To control the tracing behavior, you can use the following techniques:

  Create a new tf.function: separate tf.function objects are guaranteed not to share traces (see the short sketch after this list).

  Use get_concrete_function to get a specific trace.

  Specify input_signature when calling tf.function to ensure that only one graph is built.
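  Here is a minimal sketch of that first point (not from the original tutorial): wrapping the same Python function in two separate tf.function objects yields two independent trace caches.

  # Sketch: f1 and f2 do not share traces, so each one traces the function
  # again even for identical arguments.
  def square(a):
      print('tracing square with', a)
      return a * a

  f1 = tf.function(square)
  f2 = tf.function(square)
  f1(tf.constant(2))  # traces
  f2(tf.constant(2))  # traces again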

  print('obtaining a concrete trace')
  double_strings = double.get_concrete_function(tf.TensorSpec(shape=None, dtype=tf.string))
  print('executing the traced function')
  print(double_strings(tf.constant("a")))
  print(double_strings(a=tf.constant("b")))
  print('using the trace with an incompatible type raises an error')
  with assert_raises(tf.errors.InvalidArgumentError):
      double_strings(tf.constant(1))

  obtaining a concrete trace
  tracing variable: Tensor("a:0", dtype=string)
  executing the traced function
  tf.Tensor(b'aa', shape=(), dtype=string)
  tf.Tensor(b'bb', shape=(), dtype=string)
  using the trace with an incompatible type raises an error
  Caught expected exception
   <class 'tensorflow.python.framework.errors_impl.InvalidArgumentError'>: cannot compute __inference_double_98 as input #0(zero-based) was expected to be a string tensor but is a int32 tensor [Op:__inference_double_98]

  @tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))
  def next_collatz(x):
      print("Tracing with", x)
      return tf.where(tf.equal(x % 2, 0), x // 2, 3 * x + 1)

  print(next_collatz(tf.constant([1, 2])))
  # Only one-dimensional vectors match the input signature
  with assert_raises(ValueError):
      next_collatz(tf.constant([[1, 2], [3, 4]]))

  Tracing with Tensor("x:0", shape=(None,), dtype=int32)
  tf.Tensor([4 1], shape=(2,), dtype=int32)
  Caught expected exception
   <class 'ValueError'>: Python inputs incompatible with input_signature: inputs ((<tf.Tensor: ..., shape=(2, 2), dtype=int32, numpy=
  array([[1, 2],
         [3, 4]], dtype=int32)>,)), input_signature ((TensorSpec(shape=(None,), dtype=tf.int32, name=None),))

  When does retracing happen?

  A polymorphic tf.function keeps a cache of concrete functions generated by tracing. The cache keys are effectively tuples of keys generated from the function args and kwargs. The key generated for a tf.Tensor argument is its shape and dtype. The key generated for a Python primitive is its value. For all other Python types, the keys are based on the object id(), so methods are traced independently for each instance of a class. In the future, TensorFlow may add more sophisticated caching for Python objects that can safely be converted to tensors.
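  To illustrate the id()-based keys, here is a small sketch (not from the original tutorial; the class and names are made up): a decorated method is traced once per instance it is called on, because `self` is keyed by object identity.

  # Sketch: the method is retraced for each instance, since `self` is a
  # plain Python object and is keyed by id() in the trace cache.
  class Scaler:
      def __init__(self, factor):
          self.factor = factor

      @tf.function
      def scale(self, x):
          print('tracing scale for', self)
          return self.factor * x

  a, b = Scaler(2.0), Scaler(2.0)
  a.scale(tf.constant(1.0))  # traces for instance a
  b.scale(tf.constant(1.0))  # traces again for instance b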

  Python arguments or Tensor arguments?

  Often, Python arguments are used to control hyperparameters and graph construction - for example, num_layers=10, training=True, or nonlinearity='relu'. So if a Python argument changes, it makes sense that the graph has to be retraced.

  However, a Python argument may not be controlling graph construction at all. In these cases, a change in the Python value can trigger needless retracing. Take, for example, this training loop, which AutoGraph will dynamically unroll. Despite the multiple traces, the generated graphs are actually identical, so this is a bit inefficient.

  def train_one_step():
      pass

  @tf.function
  def train(num_steps):
      print("tracing: num_steps = {}".format(num_steps))
      for _ in tf.range(num_steps):
          train_one_step()

  train(num_steps=10)
  train(num_steps=20)

  tracing: num_steps = 10
  tracing: num_steps = 20

  # Pass a tensor instead: calls with the same dtype do not trigger a new trace
  train(num_steps=tf.constant(10))
  train(num_steps=tf.constant(20))

  tracing: num_steps = Tensor("num_steps:0", shape=(), dtype=int32)

  # With tensors, a different dtype triggers a new trace (int32 was already
  # traced above, so only the float call traces here)
  train(num_steps=tf.constant(10, dtype=tf.int32))
  train(num_steps=tf.constant(20.6))

  tracing: num_steps = Tensor("num_steps:0", shape=(), dtype=float32)

  Side effects in tf.function

  In general, Python side effects (such as printing or mutating objects) only happen during tracing. So how can you reliably trigger side effects from a tf.function?

  The general rule of thumb is to use Python side effects only to debug your traces. For everything else, TensorFlow ops such as tf.Variable.assign, tf.print, and tf.summary are the best way to ensure that your code is traced and executed by the TensorFlow runtime on each call. In general, a functional style gives the best results.
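  As a hedged illustration of the tf.summary case (the log directory below is made up), summary ops written inside the function execute on every call, just like tf.print:

  # Sketch: tf.summary ops run on each call of the tf.function, unlike a
  # Python-side print(). The log directory is hypothetical.
  writer = tf.summary.create_file_writer('/tmp/tf_function_demo')

  @tf.function
  def log_loss(loss, step):
      with writer.as_default():
          tf.summary.scalar('loss', loss, step=step)

  log_loss(tf.constant(0.5), tf.constant(1, dtype=tf.int64))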

  Inside a tf.function, print() only runs during tracing, so to get debug output on every call (a side effect) you need tf.print().

  @tf.function
  def f(x):
      print("tracing:", x)
      tf.print('executing:', x)

  f(1)
  f(1)
  f(2)

  tracing: 1
  executing: 1
  executing: 1
  tracing: 2
  executing: 2

  If you want to execute Python code during each call of a tf.function, tf.py_function is an escape hatch. The drawbacks of tf.py_function are that it is not portable or particularly performant, and it does not work well in distributed (multi-GPU, TPU) settings. Also, since tf.py_function has to be wired into the graph, it casts all inputs/outputs to tensors.

  external_list = []

  def side_effect(x):
      print('Python side effect')
      external_list.append(x)

  @tf.function
  def f(x):
      tf.py_function(side_effect, inp=[x], Tout=[])

  f(1)
  f(1)
  f(1)
  print(external_list)

  WARNING: Logging before flag parsing goes to stderr.

  W0609 06:41:05.048375 139792217777920 backprop.py:842] The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.int32

  W0609 06:41:05.053524 139792217777920 backprop.py:842] The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.int32

  W0609 06:41:05.056409 139792226170624 backprop.py:842] The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.int32

  Python side effect

  Python side effect

  Python side effect

  [<tf.Tensor: ...>, <tf.Tensor: ...>, <tf.Tensor: ...>]

  Beware of Python state

  Many Python features, such as generators and iterators, rely on the Python runtime to keep track of state. In general, while these constructs work as expected in eager mode, many surprising things can happen inside a tf.function because of the tracing behavior.

  external_var = tf.Variable(0)

  @tf.function
  def buggy_consume_next(iterator):
      external_var.assign_add(next(iterator))
      tf.print('external_var:', external_var)

  iterator = iter([0, 1, 2, 3])
  buggy_consume_next(iterator)
  # Iteration does not advance as expected: every call keeps outputting the first value
  buggy_consume_next(iterator)
  buggy_consume_next(iterator)

  external_var: 0

  external_var: 0

  external_var: 0

  If the iterator is created and consumed entirely inside the tf.function, it should work correctly. However, the entire iterator may end up being traced, which can produce a huge graph. If you are training on a large in-memory dataset represented as a Python list, this can generate a very large graph, and tf.function is unlikely to give you a speedup.
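  For instance, a minimal sketch (not from the original tutorial) where the iterator lives entirely inside the function, so it is consumed at trace time and the loop is simply unrolled into the graph:

  # Sketch: the iterator is created and exhausted inside the tf.function,
  # so it is consumed during tracing and the loop is unrolled into the graph.
  @tf.function
  def consume_inside():
      total = tf.constant(0)
      for v in iter([1, 2, 3]):
          total += v
      return total

  print(consume_inside())  # tf.Tensor(6, shape=(), dtype=int32)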

  If you want to iterate over Python data, the safest way is to wrap it in a tf.data.Dataset and use the for x in y idiom. AutoGraph has special support for safely converting the loop when y is a tf.data.Dataset.

  def measure_graph_size(f, *args):
      g = f.get_concrete_function(*args).graph
      print("{}({}) contains {} nodes in its graph".format(
          f.__name__, ', '.join(map(str, args)), len(g.as_graph_def().node)))

  @tf.function
  def train(dataset):
      loss = tf.constant(0)
      for x, y in dataset:
          loss += tf.abs(y - x)  # Some dummy computation.
      return loss

  small_data = [(1, 1)] * 2
  big_data = [(1, 1)] * 10
  measure_graph_size(train, small_data)
  measure_graph_size(train, big_data)

  measure_graph_size(train, tf.data.Dataset.from_generator(
      lambda: small_data, (tf.int32, tf.int32)))
  measure_graph_size(train, tf.data.Dataset.from_generator(
      lambda: big_data, (tf.int32, tf.int32)))

  train([(1, 1), (1, 1)]) contains 8 nodes in its graph
  train([(1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1)]) contains 32 nodes in its graph
  train(<FlatMapDataset shapes: (<unknown>, <unknown>), types: (tf.int32, tf.int32)>) contains 4 nodes in its graph
  train(<FlatMapDataset shapes: (<unknown>, <unknown>), types: (tf.int32, tf.int32)>) contains 4 nodes in its graph

  When wrapping Python/NumPy data in a Dataset, be mindful of tf.data.Dataset.from_generator versus tf.data.Dataset.from_tensors. The former keeps the data in Python and fetches it through tf.py_function, which can have performance implications, whereas the latter bundles a copy of the data as one large tf.constant() node in the graph, which can have memory implications.
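  As a hedged sketch (the NumPy array below is made up), the two wrapping styles look like this:

  import numpy as np

  # Sketch: from_generator keeps the data on the Python side and pulls it in
  # through tf.py_function at run time; from_tensors embeds a copy of the
  # whole array in the graph as a single constant.
  data = np.arange(10, dtype=np.int32)
  gen_ds = tf.data.Dataset.from_generator(lambda: data, tf.int32)
  const_ds = tf.data.Dataset.from_tensors(data)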

  Reading data from files via TFRecordDataset/CsvDataset/etc. is the most efficient way to consume data, since TensorFlow itself can manage the asynchronous loading and prefetching of the data without involving Python.
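  A minimal sketch of such a file-based pipeline (the file names here are hypothetical):

  # Sketch: a file-based input pipeline; loading and prefetching are handled
  # asynchronously by TensorFlow. The file names are made up.
  files = ['train-0.tfrecord', 'train-1.tfrecord']
  dataset = (tf.data.TFRecordDataset(files)
             .batch(32)
             .prefetch(tf.data.experimental.AUTOTUNE))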

  Automatic control dependencies

  A very appealing property of functions as a programming model, compared with a general dataflow graph, is that functions can give the runtime more information about the intended behavior of the code.

  For example, when writing code that has multiple reads and writes to the same variables, a dataflow graph may not naturally encode the originally intended order of operations. In tf.function, we resolve ambiguities in execution order by referring to the order of statements in the original Python code. This way, the ordering of stateful operations in a tf.function replicates the semantics of eager mode.

  This means there is no need to add manual control dependencies; tf.function is smart enough to add the minimal set of necessary and sufficient control dependencies for your code to run correctly.

  # Operations execute automatically in the order they are written
  a = tf.Variable(1.0)
  b = tf.Variable(2.0)

  @tf.function
  def f(x, y):
      a.assign(y * b)
      b.assign_add(x * a)
      return a + b

  f(1.0, 2.0)

  Variables

  We can use the same idea of leveraging the intended execution order of the code to make variable creation and use very easy in tf.function. There is one very important caveat, though: with variables it is possible to write code that behaves differently in eager mode and graph mode.

  Specifically, this happens when you create a new variable on each call. Because of tracing semantics, tf.function will reuse the same variable on every call, but eager mode creates a new variable on every call. To guard against this mistake, tf.function raises an error if it detects such dangerous variable-creation behavior.

  @tf.function
  def f(x):
      # tf.function would reuse the same variable on repeated calls, while
      # eager mode would create a new variable on every call
      v = tf.Variable(1.0)
      v.assign_add(x)
      return v

  with assert_raises(ValueError):
      f(1.0)

  Caught expected exception

   <class 'ValueError'>: in converted code:

  :4 f *

  v = tf.Variable(1.0)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:262 __call__

  return cls._variable_v2_call(*args, **kwargs)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:256 _variable_v2_call

  shape=shape)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:60 getter

  return captured_getter(captured_previous, **kwargs)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py:364 invalid_creator_scope

  "tf.function-decorated function tried to create "

  ValueError: tf.function-decorated function tried to create variables on non-first call.

  The approach that does not raise the error:

  v = tf.Variable(1.0)  # create the variable outside the tf.function

  @tf.function
  def f(x):
      return v.assign_add(x)

  print(f(1.0))  # 2.0
  print(f(2.0))  # 4.0

  tf.Tensor(2.0, shape=(), dtype=float32)
  tf.Tensor(4.0, shape=(), dtype=float32)

  You can also create variables inside a tf.function, as long as you can guarantee that they are created only the first time the function executes.

  class C: pass
  obj = C(); obj.v = None

  @tf.function
  def g(x):
      if obj.v is None:
          obj.v = tf.Variable(1.0)
      return obj.v.assign_add(x)

  print(g(1.0))  # 2.0
  print(g(2.0))  # 4.0

  tf.Tensor(2.0, shape=(), dtype=float32)
  tf.Tensor(4.0, shape=(), dtype=float32)

  Variable initializers can depend on function arguments and on the values of other variables. We can work out the right initialization order using the same method we use to generate control dependencies.

  state = []

  @tf.function
  def fn(x):
      if not state:
          state.append(tf.Variable(2.0 * x))
          state.append(tf.Variable(state[0] * 3.0))
      return state[0] * x * state[1]

  print(fn(tf.constant(1.0)))
  print(fn(tf.constant(3.0)))

  tf.Tensor(12.0, shape=(), dtype=float32)

  tf.Tensor(36.0, shape=(), dtype=float32)

  Use AutoGraph

  AutoGraph is fully integrated with tf.function, and it will rewrite conditionals and loops that depend on tensors so that they run dynamically in the graph.

  tf.cond and tf.while_loop continue to work with tf.function, but code with control flow is usually easier to write and understand in imperative style.

  # A simple loop
  @tf.function
  def f(x):
      # Write the while loop directly in Python
      while tf.reduce_sum(x) > 1:
          tf.print(x)
          x = tf.tanh(x)
      return x

  f(tf.random.uniform([5]))

  [0.829342961 0.858322263 0.900950909 0.851897 0.530384183]

  [0.680123031 0.695392191 0.716760576 0.692059278 0.485674709]

  [0.591599405 0.601434886 0.614898741 0.599303305 0.450776756]

  [0.53104496 0.538069844 0.547566235 0.536553681 0.422537297]

  [0.486179501 0.491525501 0.498693913 0.490374774 0.399065822]

  [0.451178908 0.455426365 0.461089343 0.454513818 0.379149348]

  [0.422867566 0.426349223 0.430971652 0.425602287 0.361968517]

  [0.399343461 0.402265817 0.406133026 0.401639521 0.346946776]

  [0.379387051 0.381885976 0.385184318 0.381350905 0.333665]

  [0.362175018 0.36434418 0.367201209 0.363880038 0.321810097]

  [0.347128421 0.349034756 0.351541221 0.348627061 0.311142713]

  [0.333826423 0.335519224 0.337741673 0.335157365 0.30147627]

  [0.321954757 0.323471278 0.325459719 0.323147237 0.292663]

  [0.311273336 0.312642276 0.314435244 0.312349856 0.284584]

  [0.301595032 0.302838922 0.304466605 0.302573323 0.277142316]

  [0.292771578 0.293908447 0.295394808 0.293665737 0.270258158]

  [0.284683794 0.285728157 0.287092626 0.285505235 0.263865024]

  [0.277234435 0.278198302 0.279456645 0.277992576 0.257907033]

  [0.270343572 0.271236718 0.272402078 0.271046132 0.25233686]

  [0.263944477 0.264775217 0.265858531 0.264597982 0.247114092]

  [0.257981181 0.258756459 0.259766966 0.258591145 0.242203966]

  [0.252406299 0.253132015 0.254077554 0.252977312 0.237576365]

  [0.24717927 0.247860536 0.248747766 0.247715324 0.233205199]

  [0.242265314 0.242906466 0.24374117 0.242769822 0.229067564]

  [0.237634286 0.238239139 0.239026278 0.238110229 0.225143358]

  [0.233259991 0.233831868 0.234575793 0.233709976 0.221414775]

  [0.229119495 0.229661271 0.230365857 0.229545817 0.217866093]

  [0.225192651 0.22570689 0.22637549 0.225597292 0.214483246]

  [0.221461684 0.221950635 0.222586185 0.221846417 0.211253688]

  [0.217910782 0.218376443 0.218981609 0.218277216 0.208166167]

  [0.214525893 0.214970052 0.215547174 0.214875415 0.205210552]

  [0.211294428 0.211718708 0.212269917 0.211628318 0.202377662]

  [0.208205134 0.208611 0.209138155 0.20852454 0.199659243]

  [0.205247864 0.205636591 0.206141427 0.2055538 0.197047815]

  [0.20241344 0.202786222 0.203270242 0.202706844 0.194536477]

  <tf.Tensor: ..., shape=(5,), dtype=float32, numpy=
  array([0.19969359, 0.2000515 , 0.2005161 , 0.19997531, 0.192119  ],
        dtype=float32)>

  print(f)

  You can inspect the code that AutoGraph generates, though it feels a bit like reading assembly language.

  def f(x):
      while tf.reduce_sum(x) > 1:
          tf.print(x)
          x = tf.tanh(x)
      return x

  print(tf.autograph.to_code(f))

  def tf__f(x):
      do_return = False
      retval_ = ag__.UndefinedReturnValue()

      def loop_test(x_1):
          return ag__.converted_call('reduce_sum', tf, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), (x_1,), None) > 1

      def loop_body(x_1):
          ag__.converted_call('print', tf, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), (x_1,), None)
          x_1 = ag__.converted_call('tanh', tf, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), (x_1,), None)
          return x_1,
      x, = ag__.while_stmt(loop_test, loop_body, (x,))
      do_return = True
      retval_ = x
      cond = ag__.is_undefined_return(retval_)

      def get_state():
          return ()

      def set_state(_):
          pass

      def if_true():
          retval_ = None
          return retval_

      def if_false():
          return retval_
      retval_ = ag__.if_stmt(cond, if_true, if_false, get_state, set_state)
      return retval_

  AutoGraph: conditionals

  AutoGraph converts if statements into the equivalent tf.cond calls.

  This substitution is made only if the condition is a Tensor; otherwise the conditional is resolved during tracing.

  # Test helper
  def test_tf_cond(f, *args):
      # Get the graph
      g = f.get_concrete_function(*args).graph
      if any(node.name == 'cond' for node in g.as_graph_def().node):
          print("{}({}) uses tf.cond.".format(
              f.__name__, ', '.join(map(str, args))))
      else:
          print("{}({}) executes normally.".format(
              f.__name__, ', '.join(map(str, args))))

  tf.cond is used only when the condition is a tensor.

  @tf.function
  def hyperparam_cond(x, training=True):
      if training:
          x = tf.nn.dropout(x, rate=0.5)
      return x

  @tf.function
  def maybe_tensor_cond(x):
      if x < 0:
          x = -x
      return x

  test_tf_cond(hyperparam_cond, tf.ones([1], dtype=tf.float32))
  test_tf_cond(maybe_tensor_cond, tf.constant(-1))  # the condition is a tensor
  test_tf_cond(maybe_tensor_cond, -1)

  hyperparam_cond(tf.Tensor([1.], shape=(1,), dtype=float32)) executes normally.
  maybe_tensor_cond(tf.Tensor(-1, shape=(), dtype=int32)) uses tf.cond.
  maybe_tensor_cond(-1) executes normally.

  tf.cond has a number of subtleties. It works by tracing both sides of the conditional and then choosing the appropriate branch at run time, depending on the condition; tracing both sides can lead to unexpected execution of Python code. It also requires that if one branch creates a tensor used downstream, the other branch must create that tensor as well.

  @tf.function
  def f():
      x = tf.constant(0)
      if tf.constant(True):
          x = x + 1
          tf.print('executing, x:', x)
          print("Tracing `then` branch")
      else:
          x = x - 1
          tf.print('executing, x:', x)  # not executed
          print("Tracing `else` branch")  # this branch is not executed, but it is still traced
      return x

  f()

  Tracing `then` branch
  Tracing `else` branch
  executing, x: 1

  Both branches must define x:

  @tf.function
  def f():
      if tf.constant(True):
          x = tf.ones([3, 3])
      return x

  # Both branches must define x, otherwise an exception is thrown
  with assert_raises(ValueError):
      f()

  Caught expected exception

   <class 'ValueError'>: in converted code:

  :3 f *

  if tf.constant(True):

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/operators/control_flow.py:439 if_stmt

  return tf_if_stmt(cond, body, orelse, get_state, set_state)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/operators/control_flow.py:456 tf_if_stmt

  outputs, final_state = control_flow_ops.cond(cond, body, orelse)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py:507 new_func

  return func(*args, **kwargs)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/control_flow_ops.py:1147 cond

  return cond_v2.cond_v2(pred, true_fn, false_fn, name)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/cond_v2.py:86 cond_v2

  op_return_value=pred)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py:716 func_graph_from_py_func

  func_outputs = python_func(*func_args, **func_kwargs)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/operators/control_flow.py:486 wrapper

  outputs = func()

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/operators/control_flow.py:512 wrapper

  tuple(s.symbol_name for s in undefined)))

  ValueError: The following symbols must also be initialized in the else branch: ('x',). Alternatively, you may initialize them before the if statement.

  AutoGraph and loops

  AutoGraph has a few simple rules for converting loops.

  for: convert the loop if the iterable is a tensor

  while: convert the loop if the while condition depends on a tensor

  If a loop is converted, it is dynamically unrolled with tf.while_loop, or, in the special case of for x in tf.data.Dataset, transformed into tf.data.Dataset.reduce.

  If a loop is not converted, it is statically unrolled.

  # Test helper
  def test_dynamically_unrolled(f, *args):
      g = f.get_concrete_function(*args).graph
      if any(node.name == 'while' for node in g.as_graph_def().node):
          print("{}({}) uses tf.while_loop.".format(
              f.__name__, ', '.join(map(str, args))))
      elif any(node.name == 'ReduceDataset' for node in g.as_graph_def().node):
          print("{}({}) uses tf.data.Dataset.reduce.".format(
              f.__name__, ', '.join(map(str, args))))
      else:
          print("{}({}) gets unrolled.".format(
              f.__name__, ', '.join(map(str, args))))

  @tf.function
  def for_in_range():
      x = 0
      for i in range(5):
          x += i
      return x

  @tf.function
  def for_in_tfrange():
      x = tf.constant(0, dtype=tf.int32)
      for i in tf.range(5):  # the iterable is a tensor
          x += i
      return x

  @tf.function
  def for_in_tfdataset():
      x = tf.constant(0, dtype=tf.int64)
      for i in tf.data.Dataset.range(5):
          x += i
      return x

  test_dynamically_unrolled(for_in_range)

  test_dynamically_unrolled(for_in_tfrange)

  test_dynamically_unrolled(for_in_tfdataset)

  for_in_range() gets unrolled.

  for_in_tfrange() uses tf.while_loop.

  for_in_tfdataset() uses tf.data.Dataset.reduce.

  @tf.function
  def while_py_cond():
      x = 5
      while x > 0:
          x -= 1
      return x

  @tf.function
  def while_tf_cond():
      x = tf.constant(5)
      while x > 0:  # here x is a tensor
          x -= 1
      return x

  test_dynamically_unrolled(while_py_cond)

  test_dynamically_unrolled(while_tf_cond)

  while_py_cond() gets unrolled.

  while_tf_cond() uses tf.while_loop.

  If you have a break or an early return clause that depends on a tensor, the top-level condition or iterable should also be a tensor.

  @tf.function
  def buggy_while_py_true_tf_break(x):
      while True:
          if tf.equal(x, 0):
              break
          x -= 1
      return x

  @tf.function
  def while_tf_true_tf_break(x):
      while tf.constant(True):  # there is a tensor-dependent break, so the top-level condition must be a tensor
          if tf.equal(x, 0):
              break
          x -= 1
      return x

  with assert_raises(TypeError):
      test_dynamically_unrolled(buggy_while_py_true_tf_break, 5)
  test_dynamically_unrolled(while_tf_true_tf_break, 5)

  Caught expected exception

   <class 'TypeError'>: in converted code:

  :3 buggy_while_py_true_tf_break *

  while True:

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/operators/control_flow.py:313 while_stmt

  return _py_while_stmt(test, body, init_state, opts)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/operators/control_flow.py:401 _py_while_stmt

  while test(*state):

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:698 __bool__

  raise TypeError("Using a `tf.Tensor` as a Python `bool` is not allowed. "

  TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.

  while_tf_true_tf_break(5) uses tf.while_loop.

  @tf.function
  def buggy_py_for_tf_break():
      x = 0
      for i in range(5):
          if tf.equal(i, 3):
              break
          x += i
      return x

  @tf.function
  def tf_for_tf_break():
      x = 0
      for i in tf.range(5):  # there is a tensor-dependent break, so the top-level iterable must be a tensor
          if tf.equal(i, 3):
              break
          x += i
      return x

  with assert_raises(TypeError):

  test_dynamically_unrolled(buggy_py_for_tf_break)

  test_dynamically_unrolled(tf_for_tf_break)

  Caught expected exception

   <class 'TypeError'>: in converted code:

  :4 buggy_py_for_tf_break *

  for i in range(5):

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/operators/control_flow.py:110 for_stmt

  return _py_for_stmt(iter_, extra_test, body, init_state)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/operators/control_flow.py:117 _py_for_stmt

  if extra_test is not None and not extra_test(*state):

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:698 __bool__

  raise TypeError("Using a `tf.Tensor` as a Python `bool` is not allowed. "

  TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.

  tf_for_tf_break() uses tf.while_loop.

  To accumulate results from a dynamically unrolled loop, you need to use tf.TensorArray.

  # Implement a dynamic RNN
  batch_size = 32
  seq_len = 3
  feature_size = 4

  # One RNN step: add the inputs to the state
  def rnn_step(inputs, state):
      return inputs + state

  @tf.function
  def dynamic_rnn(rnn_step, input_data, initial_state):
      # [batch, time, features] -> [time, batch, features]
      input_data = tf.transpose(input_data, [1, 0, 2])  # at each time step, feed the whole batch
      max_seq_len = input_data.shape[0]

      # To carry state across loop iterations, we must use tf.TensorArray
      states = tf.TensorArray(tf.float32, size=max_seq_len)
      state = initial_state
      # Iterate over time steps
      for i in tf.range(max_seq_len):
          state = rnn_step(input_data[i], state)
          states = states.write(i, state)
      # Move batch_size back to the front
      return tf.transpose(states.stack(), [1, 0, 2])

  dynamic_rnn(rnn_step,
              tf.random.uniform([batch_size, seq_len, feature_size]),
              tf.zeros([batch_size, feature_size]))

  <tf.Tensor: ..., shape=(32, 3, 4), dtype=float32, numpy=
  array([[[0.42647886, 0.73600817, 0.10211909, 0.89989746],
          [0.772506  , 1.6853498 , 0.48793948, 1.4499462 ],
          [1.1096102 , 2.3388233 , 0.5920907 , 1.588302  ]],
         ...
         [[0.15579033, 0.4594922 , 0.17970431, 0.19183934],
          [0.19597077, 0.5362154 , 0.19988954, 0.38290274],
          [0.7524748 , 1.0519221 , 0.76595306, 0.5257962 ]]], dtype=float32)>

  Like tf.cond, tf.while_loop also comes with a number of subtleties. Since a loop can execute zero times, every tensor used downstream of the while_loop must be initialized above the loop. And the shape/dtype of every loop variable must stay consistent across iterations.

  @tf.function
  def buggy_loop_var_uninitialized():
      for i in tf.range(3):
          x = i  # x must be initialized before the loop
      return x

  @tf.function
  def f():
      x = tf.constant(0)
      for i in tf.range(3):
          x = i
      return x

  with assert_raises(ValueError):

  buggy_loop_var_uninitialized()

  f()

  Caught expected exception

   <class 'ValueError'>: in converted code:

  :3 buggy_loop_var_uninitialized *

  for i in tf.range(3):

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/operators/control_flow.py:95 for_stmt

  return _known_len_tf_for_stmt(iter_, extra_test, body, init_state)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/operators/control_flow.py:125 _known_len_tf_for_stmt

  _disallow_undefs_into_loop(*init_state)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/operators/control_flow.py:50 _disallow_undefs_into_loop

  tuple(s.symbol_name for s in undefined)))

  ValueError: TensorFlow requires that the following symbols must be defined before the loop: ('x',)

  The type of a loop variable cannot change:

  @tf.function
  def buggy_loop_type_changes():
      x = tf.constant(0, dtype=tf.float32)
      for i in tf.range(3):  # yields tensors of type tf.int32...
          x = i
      return x

  with assert_raises(tf.errors.InvalidArgumentError):

  buggy_loop_type_changes()

  Caught expected exception

   <class 'tensorflow.python.framework.errors_impl.InvalidArgumentError'>: Input 1 of node while/merge/_10 was passed int32 from while/next_iteration/_28:0 incompatible with expected float. [Op:__inference_buggy_loop_type_changes_2119]

  The shape of a loop variable cannot change:

  @tf.function
  def buggy_concat():
      x = tf.ones([0, 10])
      for i in tf.range(5):
          x = tf.concat([x, tf.ones([1, 10])], axis=0)  # the shape of the loop variable changes here
      return x

  with assert_raises(ValueError):
      buggy_concat()

  @tf.function
  def concat_with_padding():
      x = tf.zeros([5, 10])
      for i in tf.range(5):
          x = tf.concat([x[:i], tf.ones([1, 10]), tf.zeros([4-i, 10])], axis=0)
          x.set_shape([5, 10])
      return x

  concat_with_padding()

  Caught expected exception

   <class 'ValueError'>: in converted code:

  :4 buggy_concat *

  for i in tf.range(5):

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/operators/control_flow.py:95 for_stmt

  return _known_len_tf_for_stmt(iter_, extra_test, body, init_state)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/operators/control_flow.py:156 _known_len_tf_for_stmt

  opts=dict(maximum_iterations=n))

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/operators/control_flow.py:327 _tf_while_stmt

  retval = control_flow_ops.while_loop(test, body, init_state, **opts)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/control_flow_ops.py:2646 while_loop

  return_same_structure=return_same_structure)

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/while_v2.py:213 while_loop

  len_orig_loop_vars], expand_composites=True))

  /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/while_v2.py:869 _check_shapes_compat

  "specify a less-specific shape." % (input_t.name, shape, t.shape))

  ValueError: Input tensor 'ones:0' enters the loop with shape (0, 10), but has shape (1, 10) after one iteration. To allow the shape to vary across iterations, use the `shape_invariants` argument of tf.while_loop to specify a less-specific shape.

  <tf.Tensor: ..., shape=(5, 10), dtype=float32, numpy=
  array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]], dtype=float32)>
