TensorFlow has published a new release candidate, 2.2.0-rc1. Key features and improvements include the following:
- The scalar type for string tensors has been switched from std::string to tensorflow::tstring.
- A new Profiler for TF 2 covering CPU, GPU, and TPU. It provides performance analysis for both the device and the host, including the input pipeline and TF Ops.
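The profiler can also be driven programmatically. A minimal sketch (the log directory and the workload being profiled are illustrative):

```python
import tempfile
import tensorflow as tf

logdir = tempfile.mkdtemp()  # illustrative log directory for the trace

# Capture a trace of everything executed between start() and stop();
# the result can be inspected in TensorBoard's Profile tab.
tf.profiler.experimental.start(logdir)
x = tf.random.normal([64, 64])
for _ in range(5):
    x = tf.matmul(x, x)  # stand-in workload to profile
tf.profiler.experimental.stop()
```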
- C++ functions are now exported to Python with pybind11 instead of SWIG, as part of the effort to deprecate SWIG.
tf.distribute:
- Added support for global synchronization of batch normalization via the newly added tf.keras.layers.experimental.SyncBatchNormalization layer. This layer synchronizes BatchNormalization statistics across all replicas participating in synchronous training.
- Improved the performance of multi-GPU distributed training with tf.distribute.experimental.MultiWorkerMirroredStrategy.
- Updated NVIDIA NCCL to 2.5.7-1 for better performance and performance tuning.
- Added support for reducing gradients in float16.
- Added experimental support for compressed gradient all-reduce, allowing gradient aggregation to overlap with backward-pass computation.
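A minimal sketch of the new synchronized batch-normalization layer under a distribution strategy (MirroredStrategy and the toy model are illustrative; on a single device the cross-replica synchronization is effectively a no-op):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # illustrative strategy choice
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, input_shape=(8,)),
        # Batch statistics are synchronized across all replicas
        # taking part in synchronous training.
        tf.keras.layers.experimental.SyncBatchNormalization(),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")
```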
tf.keras:
Major improvements to Model.fit:
- Custom training logic can now be used with Model.fit by overriding Model.train_step.
- This makes it easy to write new training loops without having to worry about all the features Model.fit handles for you (distribution strategies, callbacks, data formats, looping logic, etc.).
- The SavedModel format now supports all Keras built-in layers (including metrics, preprocessing layers, and stateful RNN layers).
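The train_step override can be sketched as follows (a toy model and data; the compiled_loss/compiled_metrics attributes are the TF 2.2-era API and were reorganized in later Keras releases):

```python
import tensorflow as tf

class CustomModel(tf.keras.Model):
    # Override one step of training; Model.fit still drives the outer
    # loop, callbacks, distribution, and data handling.
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}

inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="sgd", loss="mse", metrics=["mae"])

x = tf.random.normal([32, 4])
y = tf.random.normal([32, 1])
history = model.fit(x, y, epochs=1, verbose=0)
```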
tf.lite:
- The experimental new TFLite converter is now enabled by default.
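Conversion itself is unchanged from the caller's point of view; the new converter is simply used under the hood. A minimal sketch with a toy Keras model:

```python
import tensorflow as tf

# Toy model purely for illustration.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

# The converter (now backed by the new implementation by default)
# produces a serialized TFLite flatbuffer as bytes.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
```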
XLA:
- XLA can now build and run on Windows. All prebuilt packages ship with XLA available.
- XLA can be enabled for a tf.function with "compile or throw exception" semantics on CPU and GPU.
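A sketch of opting a tf.function into XLA, using the experimental_compile flag as it was named in TF 2.2 (it was later renamed jit_compile); the function itself is illustrative:

```python
import tensorflow as tf

# With experimental_compile=True, the function is either compiled by
# XLA or an error is raised -- no silent fallback to the regular
# executor ("compile or throw" semantics).
@tf.function(experimental_compile=True)
def dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([4, 3])
w = tf.random.normal([3, 2])
b = tf.zeros([2])
y = dense_relu(x, w, b)
```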
The new version also includes a large number of bug fixes; further details can be found in the release notes:
https://github.com/tensorflow/tensorflow/releases/tag/v2.2.0-rc1