PyTorch 1.4 released: support for Java bindings and distributed model parallel training

The PyTorch team released PyTorch 1.4 last week. According to the release notes, this version includes more than 1,500 commits and brings improvements to JIT, ONNX, distributed training, and eager-mode front-end performance, along with refinements to the experimental mobile and quantization features. 1.4 also adds new experimental features, including RPC-based distributed model parallel training and Java language bindings.

In addition, PyTorch 1.4 is the last release to support Python 2 and the last to support C++11. The team therefore recommends starting the migration to Python 3 and building with C++14 now, to ease the later transition from 1.4 to 1.5.

Update Highlights

Build-level customization support for PyTorch Mobile

Following the experimental introduction of PyTorch Mobile in 1.3, version 1.4 adds more mobile support, including fine-grain level customization of the build scripts. This lets mobile developers optimize library size by including only the operators their models actually use, which significantly reduces the on-device footprint. Early results show that a custom PyTorch Mobile build for MobileNetV2 is 40-50% smaller than the prebuilt library.

The sample code below selectively compiles only the operators required by MobileNetV2:

# Dump list of operators used by MobileNetV2:
import torch, yaml
model = torch.jit.load('MobileNetV2.pt')
ops = torch.jit.export_opnames(model)
with open('MobileNetV2.yaml', 'w') as output:
    yaml.dump(ops, output)
# Build PyTorch Android library customized for MobileNetV2:
SELECTED_OP_LIST=MobileNetV2.yaml scripts/build_pytorch_android.sh arm64-v8a

# Build PyTorch iOS library customized for MobileNetV2:
SELECTED_OP_LIST=MobileNetV2.yaml BUILD_PYTORCH_MOBILE=1 IOS_ARCH=arm64 scripts/build_ios.sh

Distributed model parallel training (experimental)

With the scale of models such as RoBERTa continuing to grow into the billions of parameters, model parallel training has become increasingly important for helping researchers push the limits. This release provides a distributed RPC framework to support distributed model parallel training. The framework allows running functions remotely and referencing remote objects without copying the underlying data, and it provides autograd and optimizer APIs that transparently run backward passes and update parameters across RPC boundaries. A minimal sketch of the RPC primitives follows.
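The sketch below is an illustration, not code from the release notes: it assumes two local processes named "worker0" and "worker1", an add_tensors helper, and a torch.multiprocessing launch, with MASTER_ADDR/MASTER_PORT pointing at a local rendezvous address.

# Minimal sketch of the torch.distributed.rpc primitives. The worker names,
# the add_tensors helper, and the local launch are illustrative assumptions.
import os
import torch
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp

def add_tensors(a, b):
    # Runs on whichever worker the RPC is sent to.
    return a + b

def run(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"  # rendezvous address (assumed local run)
    os.environ["MASTER_PORT"] = "29500"
    rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size)
    if rank == 0:
        # Synchronously run a function on worker1 and fetch the result.
        result = rpc.rpc_sync("worker1", add_tensors,
                              args=(torch.ones(2), torch.ones(2)))
        # rpc.remote returns an RRef: a handle to a value that stays on worker1
        # until explicitly fetched, so no data is copied up front.
        rref = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1))
        print(result, rref.to_here())
    rpc.shutdown()  # blocks until all outstanding RPCs have completed

if __name__ == "__main__":
    mp.spawn(run, args=(2,), nprocs=2)

The distributed autograd and distributed optimizer APIs mentioned above build on these same primitives to propagate gradients and parameter updates across the RPC boundaries.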

Java bindings (experimental)

In addition to Python and C++ support, this release also adds experimental Java bindings. Built on the PyTorch Mobile interface developed for Android, the new Java bindings make it possible to call TorchScript models from any Java program. Note, however, that in this release the Java bindings are available only on Linux and only for model inference; the development team says support will be expanded in subsequent releases.

The following code snippet shows how to use PyTorch from Java:

// Requires the org.pytorch Java package (Module, Tensor, IValue) and java.util.Arrays.
Module mod = Module.load("demo-model.pt1");
Tensor data =
    Tensor.fromBlob(
        new int[] {1, 2, 3, 4, 5, 6}, // data
        new long[] {2, 3} // shape
        );
IValue result = mod.forward(IValue.from(data), IValue.from(3.0));
Tensor output = result.toTensor();
System.out.println("shape: " + Arrays.toString(output.shape()));
System.out.println("data: " + Arrays.toString(output.getDataAsFloatArray()));

Download: https://github.com/pytorch/pytorch/releases/tag/v1.4.0

Source: www.oschina.net/news/113007/pytorch-1-4-released