PyTorch Study Handbook [Part 1]

Copyright notice: this is the blogger's original article and may not be reproduced without permission. https://blog.csdn.net/qq_27825451/article/details/89348947

 

1. Some checks on Tensors

torch.is_tensor(obj)

torch.is_storage(obj)

torch.set_default_dtype(d). The default dtype is torch.float32.

torch.get_default_dtype() → torch.dtype 

torch.set_default_tensor_type(t)

torch.numel(input) → int. Returns the total number of elements in the tensor.
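A minimal sketch of these checks in action:

```python
import torch

t = torch.zeros(2, 3)
print(torch.is_tensor(t))         # True: t is a tensor
print(torch.is_storage(t))        # False: t is a tensor, not a storage
print(torch.get_default_dtype())  # torch.float32 unless changed
print(torch.numel(t))             # 6 elements in a 2x3 tensor
```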

torch.set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, profile=None)

Sets options for printing tensors.

Parameters:
  • precision – Number of digits of precision for floating point output (default = 4).
  • threshold – Total number of array elements which trigger summarization rather than full repr (default = 1000).
  • edgeitems – Number of array items in summary at beginning and end of each dimension (default = 3).
  • linewidth – The number of characters per line for the purpose of inserting line breaks (default = 80). Thresholded matrices will ignore this parameter.
  • profile – Sane defaults for pretty printing. Can override with any of the above options. (any one of default, short, full)
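A small sketch of how these options interact. A low threshold forces the summarized "..." form, and profile restores the defaults in one call:

```python
import torch

# A small threshold forces summarization of long tensors
torch.set_printoptions(precision=6, threshold=5, edgeitems=1)
print(torch.arange(10.))  # printed in summarized "..." form

# The "profile" argument restores sane defaults in one call
torch.set_printoptions(profile="default")
```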

torch.set_flush_denormal(mode) → bool

Parameters: mode (bool) – Controls whether to enable flush denormal mode or not

2. Methods for creating tensors

Note: random tensor creation is covered under Random sampling below.

torch.tensor(data, dtype=None, device=None, requires_grad=False) → Tensor

torch.sparse_coo_tensor(indices, values, size=None, dtype=None, device=None, requires_grad=False) → Tensor

torch.as_tensor(data, dtype=None, device=None) → Tensor

torch.from_numpy(ndarray) → Tensor
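A minimal sketch of the difference between these constructors: torch.tensor always copies the data, while torch.from_numpy (and torch.as_tensor, when possible) shares memory with the source array:

```python
import numpy as np
import torch

a = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)  # copies data
nd = np.array([1, 2, 3])
b = torch.from_numpy(nd)  # shares memory with the numpy array
c = torch.as_tensor(nd)   # avoids a copy when possible

nd[0] = 10
print(b[0])  # tensor(10): b reflects the change because memory is shared
```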

torch.zeros(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.zeros_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

torch.ones(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.ones_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

torch.arange(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.range(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.linspace(start, end, steps=100, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.logspace(start, end, steps=100, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.eye(n, m=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.empty(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.empty_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

torch.full(size, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.full_like(input, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
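A quick tour of the factory functions above. Note that torch.arange excludes the endpoint while torch.linspace includes it (the older torch.range, which also includes the endpoint, is deprecated in favor of torch.arange):

```python
import torch

print(torch.zeros(2, 3))                    # 2x3 of zeros
print(torch.ones_like(torch.empty(2, 2)))   # ones with the shape of the input
print(torch.arange(0, 10, 2))               # tensor([0, 2, 4, 6, 8]); end is exclusive
print(torch.linspace(0, 1, steps=5))        # 5 evenly spaced points, end inclusive
print(torch.eye(3))                         # 3x3 identity
print(torch.full((2, 2), 7.))               # 2x2 filled with 7
```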

3. Indexing, Slicing, Joining, Mutating

torch.cat(tensors, dim=0, out=None) → Tensor

torch.chunk(tensor, chunks, dim=0) → List of Tensors. Splits a tensor into a number of pieces along a given dimension; chunks is an int, the number of pieces.

torch.gather(input, dim, index, out=None) → Tensor. Gathers values along an axis specified by dim.
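gather is the least obvious of these, so here is a minimal sketch. For dim=1 it computes out[i][j] = input[i][index[i][j]]:

```python
import torch

src = torch.tensor([[1, 2], [3, 4]])
idx = torch.tensor([[0, 0], [1, 0]])
# For dim=1: out[i][j] = src[i][idx[i][j]]
out = torch.gather(src, 1, idx)
print(out)  # tensor([[1, 1], [4, 3]])

# chunk, by contrast, just cuts a tensor into equal pieces
parts = torch.chunk(torch.arange(6), 3)  # 3 pieces of length 2
print(parts)
```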

torch.index_select(input, dim, index, out=None) → Tensor. Similar in effect to standard slicing.

torch.masked_select(input, mask, out=None) → Tensor

torch.narrow(input, dimension, start, length) → Tensor

torch.nonzero(input, out=None) → LongTensor. Returns the position indices of all non-zero elements (note: indices, not values).

torch.reshape(input, shape) → Tensor

torch.split(tensor, split_size_or_sections, dim=0)

torch.squeeze(input, dim=None, out=None) → Tensor. Removes dimensions of size 1 (i.e. dimensions holding a single element), hence "squeeze".

torch.stack(seq, dim=0, out=None) → Tensor

torch.t(input) → Tensor

torch.take(input, indices) → Tensor

torch.transpose(input, dim0, dim1) → Tensor

torch.unbind(tensor, dim=0) → seq

torch.unsqueeze(input, dim, out=None) → Tensor

torch.where(condition, x, y) → Tensor
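A minimal sketch tying several of these together: boolean selection, index recovery, conditional replacement, and shape surgery with squeeze/unsqueeze:

```python
import torch

x = torch.tensor([[0.1, -0.2], [0.3, -0.4]])
print(torch.masked_select(x, x > 0))               # the positive values
print(torch.nonzero(x > 0))                        # indices of positive entries
print(torch.where(x > 0, x, torch.zeros_like(x)))  # negatives replaced by 0

y = torch.zeros(1, 3, 1)
print(torch.squeeze(y).shape)                      # size-1 dims removed -> [3]
print(torch.unsqueeze(torch.arange(3), 0).shape)   # new leading dim -> [1, 3]
```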

4. Random sampling

torch.manual_seed(seed)
torch.initial_seed()

torch.get_rng_state()

torch.set_rng_state(new_state)

torch.default_generator = <torch._C.Generator object>

torch.bernoulli(input, *, generator=None, out=None) → Tensor

torch.multinomial(input, num_samples, replacement=False, out=None) → LongTensor

torch.normal()

torch.normal(mean, std, out=None) → Tensor

torch.rand(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.rand_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

torch.randint(low=0, high, size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.randint_like(input, low=0, high, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.randn(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.randn_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

torch.randperm(n, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False) → LongTensor
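A minimal sketch of the sampling functions, with manual_seed used to make the draws reproducible:

```python
import torch

torch.manual_seed(0)            # make results reproducible
a = torch.rand(2, 3)            # uniform on [0, 1)
b = torch.randn(2, 3)           # standard normal
c = torch.randint(0, 10, (5,))  # integers in [0, 10)
p = torch.randperm(5)           # random permutation of 0..4

torch.manual_seed(0)
print(torch.equal(a, torch.rand(2, 3)))  # True: same seed, same numbers
```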

In-place random sampling

There are a few more in-place random sampling functions defined on Tensors as well, such as Tensor.normal_(), Tensor.uniform_(), Tensor.random_(), and Tensor.bernoulli_(); refer to their documentation for details.

5. Serialization

torch.save(obj, f, pickle_module=pickle, pickle_protocol=2)

Saves an object (e.g. a tensor) to disk.

Parameters:
  • obj – saved object
  • f – a file-like object (has to implement write and flush) or a string containing a file name
  • pickle_module – module used for pickling metadata and objects
  • pickle_protocol – can be specified to override the default protocol

torch.load(f, map_location=None, pickle_module=pickle)
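A minimal round-trip sketch. map_location is most useful for loading a checkpoint saved on GPU onto a CPU-only machine (e.g. map_location="cpu"):

```python
import os
import tempfile
import torch

t = torch.arange(4.)
path = os.path.join(tempfile.mkdtemp(), "t.pt")

torch.save(t, path)        # serialize to disk
loaded = torch.load(path)  # deserialize; map_location can remap devices
print(torch.equal(t, loaded))  # True
```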

6. Parallelism

torch.get_num_threads() → int

Gets the number of OpenMP threads used for parallelizing CPU operations

torch.set_num_threads(int)

Sets the number of OpenMP threads used for parallelizing CPU operations
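A small sketch; capping the thread count can help when several processes share one machine:

```python
import torch

n = torch.get_num_threads()  # current OpenMP thread count
torch.set_num_threads(2)     # cap CPU parallelism, e.g. on a shared machine
print(torch.get_num_threads())
torch.set_num_threads(n)     # restore the original setting
```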

7. Locally disabling gradient computation

torch.no_grad()

torch.enable_grad()

torch.set_grad_enabled(mode)

8. Math operations

8.1 Pointwise Ops

torch.abs(input, out=None) → Tensor

torch.acos(input, out=None) → Tensor

torch.add()

torch.add(input, value, out=None)

torch.add(input, value=1, other, out=None)

torch.addcdiv(tensor, value=1, tensor1, tensor2, out=None) → Tensor. Divides tensor1 by tensor2, multiplies the result by value, then adds it to tensor.

torch.addcmul(tensor, value=1, tensor1, tensor2, out=None) → Tensor. Multiplies tensor1 by tensor2, multiplies the result by value, then adds it to tensor.
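A minimal sketch of the fused add-divide and add-multiply ops (value passed as a keyword):

```python
import torch

t = torch.zeros(3)
t1 = torch.tensor([1., 2., 3.])
t2 = torch.tensor([2., 2., 2.])

# addcdiv: t + value * (t1 / t2)
print(torch.addcdiv(t, t1, t2, value=0.5))  # tensor([0.2500, 0.5000, 0.7500])
# addcmul: t + value * (t1 * t2)
print(torch.addcmul(t, t1, t2, value=0.5))  # tensor([1., 2., 3.])
```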

torch.asin(input, out=None) → Tensor

torch.atan(input, out=None) → Tensor

torch.atan2(input1, input2, out=None) → Tensor

torch.ceil(input, out=None) → Tensor

torch.clamp(input, min, max, out=None) → Tensor. Replaces every element below min with min and every element above max with max.

torch.clamp(input, *, min, out=None) → Tensor

torch.clamp(input, *, max, out=None) → Tensor
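A minimal clamp sketch; either bound can be given on its own:

```python
import torch

x = torch.tensor([-2., -0.5, 0.5, 2.])
print(torch.clamp(x, min=-1., max=1.))  # tensor([-1.0000, -0.5000, 0.5000, 1.0000])
print(torch.clamp(x, min=0.))           # only a lower bound: negatives become 0
```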

torch.cos(input, out=None) → Tensor

torch.cosh(input, out=None) → Tensor

torch.div()

torch.div(input, value, out=None) → Tensor

torch.div(input, other, out=None) → Tensor

torch.digamma(input, out=None) → Tensor

torch.erf(tensor, out=None) → Tensor

torch.erfc(input, out=None) → Tensor

torch.erfinv(input, out=None) → Tensor

torch.exp(input, out=None) → Tensor

torch.expm1(input, out=None) → Tensor

torch.floor(input, out=None) → Tensor

torch.fmod(input, divisor, out=None) → Tensor

torch.frac(input, out=None) → Tensor

torch.lerp(start, end, weight, out=None)

torch.log(input, out=None) → Tensor

torch.log10(input, out=None) → Tensor

torch.log1p(input, out=None) → Tensor

torch.log2(input, out=None) → Tensor

torch.mul()

torch.mul(input, value, out=None)

torch.mul(input, other, out=None)

torch.mvlgamma(input, p) → Tensor

torch.neg(input, out=None) → Tensor

torch.pow()

torch.pow(input, exponent, out=None) → Tensor

torch.pow(base, input, out=None) → Tensor

torch.reciprocal(input, out=None) → Tensor

torch.remainder(input, divisor, out=None) → Tensor. See also torch.fmod().

torch.rsqrt(input, out=None) → Tensor

torch.sigmoid(input, out=None) → Tensor

torch.sign(input, out=None) → Tensor

torch.sin(input, out=None) → Tensor

torch.sinh(input, out=None) → Tensor

torch.sqrt(input, out=None) → Tensor

torch.tan(input, out=None) → Tensor

torch.tanh(input, out=None) → Tensor

torch.trunc(input, out=None) → Tensor
