PyTorch framework learning (2): training with the GPU

Using the GPU for training, plus the common Tensor forms and the Hub module

Both the data and the model need to be moved onto CUDA for computation.

In terms of code, the differences from CPU-only training are (as shown in the sketch below):

- specify a device, for example cuda:0
- move the model onto that device (into CUDA) with .to(device)
- move the data onto that device (into CUDA) with .to(device)
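A minimal sketch of those three steps, assuming a toy nn.Linear model and random input data (both hypothetical, only there to show where .to(device) goes):

```python
import torch
import torch.nn as nn

# Step 1: pick CUDA if available, otherwise fall back to the CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Step 2: move the model's parameters onto the device
model = nn.Linear(10, 2).to(device)

# Step 3: move the data onto the same device before the forward pass
x = torch.randn(32, 10).to(device)
y = torch.randint(0, 2, (32,)).to(device)

out = model(x)                              # runs on the GPU when one is available
loss = nn.functional.cross_entropy(out, y)
```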


What are the common forms of Tensor?

| Form | Remarks |
| --- | --- |
| Scalar (value) | dim is 0: a single value |
| Vector | dim is 1: in deep learning this usually refers to a feature vector, such as a word vector or the features along one dimension |
| Matrix | dim is 2: each row is a vector. You can think of each row as one sample and each column as one feature. Matrices support multiplication and inner products |
| n-dimensional tensor | dim is 3 or more: a higher-dimensional tensor |

A short code sketch of these forms follows the table.
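A small sketch with made-up values, only to illustrate the dims listed above:

```python
import torch

scalar = torch.tensor(3.14)             # dim 0: a single value
vector = torch.tensor([1.0, 2.0, 3.0])  # dim 1: e.g. a feature / word vector
matrix = torch.tensor([[1.0, 2.0],
                       [3.0, 4.0]])     # dim 2: rows as samples, columns as features
tensor3d = torch.randn(2, 3, 4)         # dim 3: a higher-dimensional tensor

print(scalar.dim(), vector.dim(), matrix.dim(), tensor3d.dim())  # 0 1 2 3

# Matrices support multiplication and inner products
print(matrix.matmul(matrix))            # matrix multiplication
print(vector.dot(vector))               # inner product of a vector with itself
```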

Powerful Hub Module

Hub exists to make it easier for developers to reuse existing code: it is essentially a model zoo, accessed through torch.hub.load(). For basic learning you generally do not need it; a small example is shown below anyway.
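A minimal sketch following the official torch.hub example repo pytorch/vision (network access is required, and the exact repo tag and pretrained flag may differ across torchvision versions):

```python
import torch

# Load a pretrained ResNet-18 from the pytorch/vision hub repo
model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)
model.eval()

# List the entry points a hub repo exposes
print(torch.hub.list('pytorch/vision:v0.10.0'))
```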


Original: blog.csdn.net/vibration_xu/article/details/125961338