PyTorch Lesson 3: The torch.Tensor Package in Detail

Copyright notice: this is an original article by Wang Xiaocao; please contact the author before reposting. https://blog.csdn.net/sinat_33761963/article/details/84594388

Weibo: https://weibo.com/wangxiaocaoai/profile?rightmod=1&wvr=6&mod=personinfo
WeChat official account: search for "AI躁动街"


Key points of this lesson:

1 Tensor types
2 Creating tensors
3 Indexing and slicing tensors
4 Tensor operations
4.1 The difference between operation names with and without an underscore
4.2 The full list of torch.Tensor operations
4.3 A feel for torch.Tensor operations through examples

torch.Tensor is a multi-dimensional matrix containing elements of a single data type.

1 Tensor types

Torch defines seven CPU tensor types and eight GPU tensor types, shown in the table below:

| Data type | CPU tensor | GPU tensor |
| --- | --- | --- |
| 32-bit floating point | torch.FloatTensor | torch.cuda.FloatTensor |
| 64-bit floating point | torch.DoubleTensor | torch.cuda.DoubleTensor |
| 16-bit floating point | N/A | torch.cuda.HalfTensor |
| 8-bit integer (unsigned) | torch.ByteTensor | torch.cuda.ByteTensor |
| 8-bit integer (signed) | torch.CharTensor | torch.cuda.CharTensor |
| 16-bit integer (signed) | torch.ShortTensor | torch.cuda.ShortTensor |
| 32-bit integer (signed) | torch.IntTensor | torch.cuda.IntTensor |
| 64-bit integer (signed) | torch.LongTensor | torch.cuda.LongTensor |

torch.Tensor is shorthand for the default tensor type, torch.FloatTensor.
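
A quick way to see the default type in action (a minimal sketch; torch.set_default_tensor_type is the standard switch, and the DoubleTensor choice below is just for illustration):

# An unqualified torch.Tensor produces the default type
import torch
a = torch.Tensor([1, 2, 3])
print(a.type())  # torch.FloatTensor

# The default can be changed, e.g. to double precision
torch.set_default_tensor_type('torch.DoubleTensor')
b = torch.Tensor([1, 2, 3])
print(b.type())  # torch.DoubleTensor
torch.set_default_tensor_type('torch.FloatTensor')  # restore the default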

2 Creating tensors

Creating tensors with functions from the torch package is covered in detail in section 2 of "PyTorch Lesson 1: package-torch (1) — First Look at Tensors".

Here we look at creating tensors with torch.Tensor, which supports the following forms of construction:
class torch.Tensor
class torch.Tensor(*sizes)
class torch.Tensor(size)
class torch.Tensor(sequence)
class torch.Tensor(ndarray)
class torch.Tensor(tensor)
class torch.Tensor(storage)

# 先导入torch包
import torch

2.1 Creating with no arguments

If no arguments are given, an empty tensor is returned:

class torch.Tensor

a = torch.Tensor()
print(a)
tensor([])

2.2 Creating with a specified size

class torch.Tensor(*sizes)

class torch.Tensor(size)

# Create a 3x4 tensor of type Int
a = torch.IntTensor(3, 4)
print(a)

# You can also fill the values with 0
a = torch.IntTensor(3, 4).zero_()
print(a)
tensor([[ 0.0000e+00, -8.0531e+08,  1.3174e+09,  5.3687e+08],
        [-3.9761e+08,  3.2644e+04, -3.9720e+08,  3.2644e+04],
        [ 3.0850e+08,  1.0000e+00,  0.0000e+00,  1.9661e+05]], dtype=torch.int32)
tensor([[ 0,  0,  0,  0],
        [ 0,  0,  0,  0],
        [ 0,  0,  0,  0]], dtype=torch.int32)

2.3 Creating from a Python list sequence

If a Python sequence is given, a tensor is created from a copy of the sequence.

class torch.Tensor(sequence)

a = torch.Tensor([[1,2,3], [4,5,6]])
print(a)
tensor([[ 1.,  2.,  3.],
        [ 4.,  5.,  6.]])

2.4 Creating from NumPy

class torch.Tensor(ndarray)

import numpy as np
a_np = np.arange(1,10)
print(a_np)
a = torch.Tensor(a_np)
print(a)
[1 2 3 4 5 6 7 8 9]
tensor([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.])

2.5 Creating from an existing tensor

If a torch.Tensor or torch.Storage is given, a new tensor with the same data is returned.

class torch.Tensor(tensor)

class torch.Tensor(storage)

a = torch.Tensor([1,2,3])
b = torch.Tensor(a)
print(b)
tensor([ 1.,  2.,  3.])

Every tensor has a corresponding torch.Storage that holds its data. The tensor class provides a multi-dimensional, strided view of a storage and defines numeric operations on it.
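
For instance, storage() exposes that flat view directly (a small check using a 2x2 tensor):

a = torch.Tensor([[1, 2], [3, 4]])
print(a.storage())  # the four elements laid out flat in one storage
 1.0
 2.0
 3.0
 4.0
[torch.FloatStorage of size 4]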

3 Indexing and slicing tensors

Indexing and slicing tensors is covered in detail in section 3 of "PyTorch Lesson 1: package-torch (1) — First Look at Tensors". The point here is that ordinary Python indexing and slicing can be used to read and modify the contents of a tensor.

# Create a tensor
a  = torch.Tensor([[1,2,3], [4,5,6]])
print(a)

# Read the contents of the tensor via indexing
print(a[1][2])

# Modify the contents of the tensor
a[1][2] = 10
print(a)
tensor([[ 1.,  2.,  3.],
        [ 4.,  5.,  6.]])
tensor(6.)
tensor([[  1.,   2.,   3.],
        [  4.,   5.,  10.]])

4 Tensor operations

"PyTorch Lesson 2: package-torch (2) — Math Operations" covers the tensor operations in the torch package in detail.

This chapter covers the tensor operations in the torch.Tensor package. For the most part the two offer the same functionality and can be cross-referenced; operations that the torch package lacks are illustrated in detail in section 4.3.

Note: operations that mutate a tensor are marked with a trailing underscore. For example, torch.FloatTensor.abs_() computes the absolute value in place and returns the modified tensor, while torch.FloatTensor.abs() computes the result in a new tensor.

4.1 With and without the underscore

A simple example to get a feel for the difference:

# Create a tensor
a = torch.tensor([-1,-2,3])
print(a)

# Modify the original tensor in place
a.abs_()
print(a)
tensor([-1, -2,  3])
tensor([ 1,  2,  3])
# Create a tensor
a = torch.tensor([-1,-2,3])
print(a)

# Does not change a
a.abs()  
print(a)

b = a.abs()
print(b)
tensor([-1, -2,  3])
tensor([-1, -2,  3])
tensor([ 1,  2,  3])

4.2 All torch.Tensor operations

Note: do not confuse the torch.Tensor operations below with the same-named operations in the torch package; torch.Tensor operations are methods called directly on the tensor itself.

For example, .byte() and .char() are invoked directly on a tensor:

a = torch.Tensor([1,2,3]).byte()
print(a.type())

a = torch.Tensor([1,2,3]).char()
print(a.type())

The same-named operations are functionally identical, though, so the torch package documentation also applies. The full list of torch.Tensor operations:
abs() → Tensor
abs_() → Tensor
acos() → Tensor
acos_() → Tensor
add(value)
add_(value)
addbmm(beta=1, mat, alpha=1, batch1, batch2) → Tensor
addbmm_(beta=1, mat, alpha=1, batch1, batch2) → Tensor
addcdiv(value=1, tensor1, tensor2) → Tensor
addcdiv_(value=1, tensor1, tensor2) → Tensor
addcmul(value=1, tensor1, tensor2) → Tensor
addcmul_(value=1, tensor1, tensor2) → Tensor
addmm(beta=1, mat, alpha=1, mat1, mat2) → Tensor
addmm_(beta=1, mat, alpha=1, mat1, mat2) → Tensor
addmv(beta=1, tensor, alpha=1, mat, vec) → Tensor
addmv_(beta=1, tensor, alpha=1, mat, vec) → Tensor
addr(beta=1, alpha=1, vec1, vec2) → Tensor
addr_(beta=1, alpha=1, vec1, vec2) → Tensor
apply_(callable) → Tensor
asin() → Tensor
asin_() → Tensor
atan() → Tensor
atan2() → Tensor
atan2_() → Tensor
atan_() → Tensor
baddbmm(beta=1, alpha=1, batch1, batch2) → Tensor
baddbmm_(beta=1, alpha=1, batch1, batch2) → Tensor
bernoulli() → Tensor
bernoulli_() → Tensor
bmm(batch2) → Tensor
byte() → Tensor
cauchy_(median=0, sigma=1, *, generator=None) → Tensor
ceil() → Tensor
ceil_() → Tensor
char()
chunk(n_chunks, dim=0) → Tensor
clamp(min, max) → Tensor
clamp_(min, max) → Tensor
clone() → Tensor
contiguous() → Tensor
copy_(src, async=False) → Tensor
cos() → Tensor
cos_() → Tensor
cosh() → Tensor
cosh_() → Tensor
cpu() → Tensor
cross(other, dim=-1) → Tensor
cuda(device=None, async=False)
cumprod(dim) → Tensor
cumsum(dim) → Tensor
data_ptr() → int
diag(diagonal=0) → Tensor
dim() → int
dist(other, p=2) → Tensor
div(value)
div_(value)
dot(tensor2) → float
double()
eig(eigenvectors=False) -> (Tensor, Tensor)
element_size() → int
eq(other) → Tensor
eq_(other) → Tensor
equal(other) → bool
exp() → Tensor
exp_() → Tensor
expand(*sizes)
expand_as(tensor)
exponential_(lambd=1, *, generator=None) → Tensor
fill_(value) → Tensor
float()
floor() → Tensor
floor_() → Tensor
fmod(divisor) → Tensor
fmod_(divisor) → Tensor
frac() → Tensor
frac_() → Tensor
gather(dim, index) → Tensor
ge(other) → Tensor
ge_(other) → Tensor
gels(A) → Tensor
geometric_(p, *, generator=None) → Tensor
geqrf() -> (Tensor, Tensor)
ger(vec2) → Tensor
gesv(A) → Tensor, Tensor
gt(other) → Tensor
gt_(other) → Tensor
half()
histc(bins=100, min=0, max=0) → Tensor
index(m) → Tensor
index_add_(dim, index, tensor) → Tensor
index_copy_(dim, index, tensor) → Tensor
index_fill_(dim, index, val) → Tensor
index_select(dim, index) → Tensor
int()
inverse() → Tensor
is_contiguous() → bool
is_cuda
is_pinned()
is_set_to(tensor) → bool
is_signed()
kthvalue(k, dim=None) -> (Tensor, LongTensor)
le(other) → Tensor
le_(other) → Tensor
lerp(start, end, weight)
lerp_(start, end, weight) → Tensor
log() → Tensor
log1p() → Tensor
log1p_() → Tensor
log_() → Tensor
log_normal_(mean=1, std=2, *, generator=None) → Tensor
long()
lt(other) → Tensor
lt_(other) → Tensor
map_(tensor, callable)
masked_copy_(mask, source)
masked_fill_(mask, value)
masked_select(mask) → Tensor
max(dim=None) -> float or (Tensor, Tensor)
mean(dim=None) -> float or (Tensor, Tensor)
median(dim=-1, values=None, indices=None) -> (Tensor, LongTensor)
min(dim=None) -> float or (Tensor, Tensor)
mm(mat2) → Tensor
mode(dim=-1, value=None, indices=None) -> (Tensor, LongTensor)
mul(value) → Tensor
mul_(value)
multinomial(num_samples, replacement=False, *, generator=None) → Tensor
mv(vec) → Tensor
narrow(dimension, start, length) → Tensor
ndimension() → int
ne(other) → Tensor
ne_(other) → Tensor
neg() → Tensor
neg_() → Tensor
nelement() → int
new(*args, **kwargs)
nonzero() → LongTensor
norm(p=2) → float
normal_(mean=0, std=1, *, generator=None) → Tensor
numel() → int
numpy() → ndarray
orgqr(input2) → Tensor
ormqr(input2, input3, left=True, transpose=False) → Tensor
permute(dims)
pin_memory()
potrf(upper=True) → Tensor
potri(upper=True) → Tensor
potrs(input2, upper=True) → Tensor
pow(exponent)
pow_()
prod() → float
pstrf(upper=True, tol=-1) -> (Tensor, IntTensor)
qr() -> (Tensor, Tensor)
random_(from=0, to=None, *, generator=None)
reciprocal() → Tensor
reciprocal_() → Tensor
remainder(divisor) → Tensor
remainder_(divisor) → Tensor
renorm(p, dim, maxnorm) → Tensor
renorm_(p, dim, maxnorm) → Tensor
repeat(*sizes)
resize_(*sizes)
resize_as_(tensor)
round() → Tensor
round_() → Tensor
rsqrt() → Tensor
rsqrt_() → Tensor
scatter_(dim, index, src) → Tensor
select(dim, index) → Tensor or number
set_(source=None, storage_offset=0, size=None, stride=None)
share_memory_()
short()
sigmoid() → Tensor
sigmoid_() → Tensor
sign() → Tensor
sign_() → Tensor
sin() → Tensor
sin_() → Tensor
sinh() → Tensor
sinh_() → Tensor
size() → torch.Size
sort(dim=None, descending=False) -> (Tensor, LongTensor)
split(split_size, dim=0)
sqrt() → Tensor
sqrt_() → Tensor
squeeze(dim=None) → Tensor
squeeze_(dim=None) → Tensor
std() → float
storage() → torch.Storage
storage_offset() → int
storage_type()
stride() → tuple
sub(value, other) → Tensor
sub_(x) → Tensor
sum(dim=None) → Tensor
svd(some=True) -> (Tensor, Tensor, Tensor)
symeig(eigenvectors=False, upper=True) -> (Tensor, Tensor)
t() → Tensor
t_() → Tensor
tan() → Tensor
tan_() → Tensor
tanh() → Tensor
tanh_() → Tensor
tolist()
topk(k, dim=None, largest=True, sorted=True) -> (Tensor, LongTensor)
trace() → float
transpose(dim0, dim1) → Tensor
transpose_(dim0, dim1) → Tensor
tril(k=0) → Tensor
tril_(k=0) → Tensor
triu(k=0) → Tensor
triu_(k=0) → Tensor
trtrs(A, upper=True, transpose=False, unitriangular=False) -> (Tensor, Tensor)
trunc() → Tensor
trunc_() → Tensor
type(new_type=None, async=False)
type_as(tensor)
unfold(dim, size, step) → Tensor
uniform_(from=0, to=1) → Tensor
unsqueeze(dim)
unsqueeze_(dim) → Tensor
var()
view(*args) → Tensor
view_as(tensor)
zero_()

4.3 Selected operations by example

4.3.1 Casting and converting tensor types

# 1. Cast a tensor to a given type
print(torch.Tensor([1,2,3]).byte().type())
print(torch.Tensor([1,2,3]).char().type())
print(torch.Tensor([1,2,3]).long().type())
print(torch.Tensor([1,2,3]).float().type())
print(torch.Tensor([1,2,3]).short().type())
torch.ByteTensor
torch.CharTensor
torch.LongTensor
torch.FloatTensor
torch.ShortTensor
# 2. type(new_type=None, async=False): casts the object to the specified type.
a = torch.Tensor([1,2,3]).byte()
print(a.type(torch.FloatTensor).type())
torch.FloatTensor
# 3. type_as(tensor): converts this tensor to the type of the given tensor
a = torch.Tensor([1,2,3]).byte()
b = torch.Tensor([1,2,3]).float()

print(b.type())
print(b.type_as(a).type())
torch.FloatTensor
torch.ByteTensor
# 4. new(*args, **kwargs): constructs a new tensor of the same data type
a = torch.FloatTensor(2,3)
b = a.new()
print(b.type())
torch.FloatTensor

4.3.2 Copying tensors

# 1. clone(): returns a tensor with the same size and data type as the original
a = torch.tensor([1,2,3])
b = a.clone()
print(b)
tensor([ 1,  2,  3])
# 2. contiguous(): returns a tensor with the same data that is contiguous in memory; if the original is already contiguous, the original itself is returned
a = torch.tensor([1,2,3])
b = a.contiguous()
print(b)
tensor([ 1,  2,  3])
# 3. index_add_(dim, index, tensor) → Tensor: adds the elements of the argument tensor into the original tensor, in the order given by the indices in index
x = torch.Tensor([[1, 1, 1], [1, 1, 1], [1, 1, 1]])
t = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
index = torch.LongTensor([0, 2, 1])
print(x.index_add_(0, index, t))
tensor([[  2.,   3.,   4.],
        [  8.,   9.,  10.],
        [  5.,   6.,   7.]])
# 4. index_copy_(dim, index, tensor) → Tensor: copies the elements of the argument tensor into the original tensor, in the order given by the indices in index.
x = torch.Tensor(3, 3)
t = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
index = torch.LongTensor([0, 2, 1])
print(x.index_copy_(0, index, t))
tensor([[ 1.,  2.,  3.],
        [ 7.,  8.,  9.],
        [ 4.,  5.,  6.]])
# 5. scatter_(dim, index, src) → Tensor
# Writes all values from src into this tensor at the indices given by index. The indices are interpreted along dimension dim, following the rule described for gather(); for dim=0 this means self[index[i][j]][j] = src[i][j].
x = torch.Tensor(3, 3)
t = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
index = torch.LongTensor([[0, 2, 1],[1,0,2],[0,1,2]])
print(x.scatter_(0, index, t))
tensor([[ 7.,  5.,  0.],
        [ 4.,  8.,  3.],
        [ 0.,  2.,  9.]])
# 6. masked_copy_(mask, source): copies the elements of source into this tensor at the positions where mask is 1. mask must have the same number of elements as this tensor.
a = torch.zeros(3, 4).byte()
index = torch.LongTensor([0])
mask = a.index_fill_(0, index, 1)
print(mask)
tensor([[ 1,  1,  1,  1],
        [ 0,  0,  0,  0],
        [ 0,  0,  0,  0]], dtype=torch.uint8)
source = torch.randn(3, 4)
print(source)

target = torch.ones(3,4).masked_copy_(mask, source)
print(target)
tensor([[ 2.2827,  1.1442, -1.2416,  0.0778],
        [ 1.1207,  0.1266, -0.1235, -1.4134],
        [-0.7135,  0.6966,  0.5154,  0.1306]])
tensor([[ 2.2827,  1.1442, -1.2416,  0.0778],
        [ 1.0000,  1.0000,  1.0000,  1.0000],
        [ 1.0000,  1.0000,  1.0000,  1.0000]])


/Users/wangxiaocao/miniconda3/lib/python3.6/site-packages/torch/tensor.py:292: UserWarning: masked_copy_ is deprecated and renamed to masked_scatter_, and will be removed in v0.3
  warnings.warn("masked_copy_ is deprecated and renamed to masked_scatter_, and will be removed in v0.3")
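
As the warning says, masked_copy_ was renamed to masked_scatter_. The same result with the non-deprecated name (a minimal sketch, reusing mask and source from the example above):

# masked_scatter_ is the newer name for the same operation
target = torch.ones(3, 4).masked_scatter_(mask, source)
print(target)  # prints the same tensor as the masked_copy_ example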

4.3.3 Getting a tensor's address and size

# 1. data_ptr() → int: returns the address of the first element of the tensor
a = torch.tensor([1,2,3])
print(a.data_ptr())
140717884980528
# 2. dim() → int: returns the number of dimensions of the tensor
a = torch.randn(3,4)
print(a.dim())
2
# 3. element_size() → int: returns the size in bytes of a single element
print(torch.FloatTensor().element_size())
4
# 4. size() → torch.Size: returns the size of the tensor.
a = torch.randn(4,4)
print(a.size())
torch.Size([4, 4])
# 5. storage_offset() → int: returns the tensor's offset into its underlying storage, counted in number of storage elements.
a = torch.Tensor([1, 2, 3, 4, 5])
print(a.storage_offset())
print(a[3:].storage_offset())
0
3
# 6. stride() → tuple: returns the stride of the tensor.
a = torch.Tensor([1, 2, 3, 4, 5])
print(a.stride())

a = torch.Tensor([1, 3, 5, 7, 9])
print(a.stride())
(1,)
(1,)
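
Both 1-D examples above necessarily report (1,). Strides are more informative in 2-D, where stride() gives one step per dimension (a quick check):

a = torch.Tensor(3, 4)
print(a.stride())  # rows are 4 elements apart, columns 1 apart
(4, 1)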

4.3.4 Expanding and shrinking tensors

# 1. expand(*sizes): returns a new view of the tensor, with singleton dimensions expanded to a larger size
x = torch.Tensor([[1], [2], [3]])
print(x)
print(x.size())
tensor([[ 1.],
        [ 2.],
        [ 3.]])
torch.Size([3, 1])
x = x.expand(3, 4)
print(x)
print(x.size())
tensor([[ 1.,  1.,  1.,  1.],
        [ 2.,  2.,  2.,  2.],
        [ 3.,  3.,  3.,  3.]])
torch.Size([3, 4])
# 2. narrow(dimension, start, length) → Tensor
# Returns a narrowed version of this tensor: dimension dim is narrowed from start to start + length. The returned tensor and the original share the same underlying memory (see the check after this example).
a = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

a = a.narrow(0, 0, 2) # keep rows [0, 2) along dimension 0
print(a)

a = a.narrow(1, 0, 2) # keep columns [0, 2) along dimension 1
print(a)
tensor([[ 1.,  2.,  3.],
        [ 4.,  5.,  6.]])
tensor([[ 1.,  2.],
        [ 4.,  5.]])
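
Because narrow() shares the underlying memory, writing through the narrowed view also changes the original tensor (a quick check):

a = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = a.narrow(0, 0, 2)
b[0][0] = 100  # write through the view
print(a[0][0])  # the original sees the change
tensor(100.)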
# 3. repeat(*sizes): repeats the tensor along the specified dimensions. Unlike expand(), this function copies the tensor's data.
x = torch.Tensor([1, 2, 3])
print(x.repeat(4, 2))

print(x.repeat(4, 2, 1).size())
tensor([[ 1.,  2.,  3.,  1.,  2.,  3.],
        [ 1.,  2.,  3.,  1.,  2.,  3.],
        [ 1.,  2.,  3.,  1.,  2.,  3.],
        [ 1.,  2.,  3.,  1.,  2.,  3.]])
torch.Size([4, 2, 3])

4.3.5 Reshaping tensors

# 1. resize_(*sizes): resizes the tensor to the specified size.
x = torch.Tensor([[1, 2], [3, 4], [5, 6]])
x.resize_(2, 2)
print(x)
tensor([[ 1.,  2.],
        [ 3.,  4.]])
# 2. resize_as_(tensor): resizes this tensor to the same size as the given tensor.
a = torch.Tensor(4,4).resize_as_(x)
print(a.size())
torch.Size([2, 2])
# 3. view(*args) → Tensor: returns a tensor with the same data but a different size.
a = torch.Tensor([[1,2],[3,4]])
print(a.size())
b = a.view(4)
print(b.size())
c = a.view(-1,1)
print(c.size())
torch.Size([2, 2])
torch.Size([4])
torch.Size([4, 1])
# 4. view_as(tensor): returns this tensor viewed as the same size as the given tensor.
# The data elements and total size must match
a = torch.Tensor([[1,2],[3,4]])
b = torch.Tensor([1,2,3,4])
print(a.view_as(b))
tensor([ 1.,  2.,  3.,  4.])
# 5. permute(dims): permutes the dimensions of the tensor.
x = torch.randn(2, 3, 5)
print(x.size())

print(x.permute(2, 0, 1).size())
torch.Size([2, 3, 5])
torch.Size([5, 2, 3])

4.3.6 Filling tensors

# 1. zero_(): fills the tensor with 0.
a = torch.Tensor(10).zero_()
print(a)
tensor([ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.])
# 2. uniform_(from=0, to=1) → Tensor: fills the tensor with values sampled from a uniform distribution.
a = torch.Tensor(5).uniform_()
print(a)
tensor([ 0.8421,  0.8173,  0.5499,  0.5176,  0.7386])
# 3. exponential_(lambd=1, *, generator=None) → Tensor: fills the tensor with values drawn from an exponential distribution:
a = torch.Tensor(5).exponential_()
print(a)
tensor([ 1.2825,  0.2071,  1.7803,  0.1555,  1.2808])
# 4. geometric_(p, *, generator=None) → Tensor: fills the tensor with values drawn from a geometric distribution:
a = torch.Tensor(5).geometric_(0.9)
print(a)
tensor([  8.,  23.,   6.,  14.,   8.])
# 5. log_normal_(mean=1, std=2, *, generator=None): fills the tensor with values drawn from a log-normal distribution with the given mean and standard deviation.
a = torch.Tensor(5).log_normal_()
print(a)
tensor([  3.7686,   0.9082,   0.4148,   0.4026,  78.2096])
# 6. normal_(mean=0, std=1, *, generator=None): fills the tensor with values drawn from a normal distribution with mean mean and standard deviation std.
a = torch.Tensor(5).normal_()
print(a)
tensor([-1.5053, -1.2367,  0.1126, -0.4452, -0.8111])
# 7. random_(from=0, to=None, *, generator=None): fills the tensor with integers sampled from the discrete uniform distribution over [from, to-1].
a = torch.Tensor(5).random_(0, 10)
print(a)
tensor([ 9.,  2.,  9.,  3.,  4.])
# 8. fill_(value) → Tensor: fills the tensor with the specified value
a = torch.Tensor(5).fill_(1)
print(a)
tensor([ 1.,  1.,  1.,  1.,  1.])
# 9. index_fill_(dim, index, val) → Tensor: fills the original tensor with val at the indices given by index.
x = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
index = torch.LongTensor([0, 2])
print(x.index_fill_(0, index, -1))
tensor([[-1., -1., -1.],
        [ 4.,  5.,  6.],
        [-1., -1., -1.]])
# 10. masked_fill_(mask, value): fills this tensor with value at the positions where mask is 1. mask must have the same number of elements as this tensor, but its shape may differ.
target = torch.ones(3,4).masked_fill_(mask, 10)
print(target)
tensor([[ 10.,  10.,  10.,  10.],
        [  1.,   1.,   1.,   1.],
        [  1.,   1.,   1.,   1.]])

4.3.7 Boolean checks

# 1. is_contiguous() → bool: returns True if this tensor is contiguous in memory.
x = torch.Tensor(3, 3)
print(x.is_contiguous())
True
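
A transposed tensor is the classic non-contiguous case, since t() only swaps strides instead of moving data (a quick check):

x = torch.Tensor(3, 4)
print(x.t().is_contiguous())
print(x.t().contiguous().is_contiguous())
False
True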
# 2. is_pinned(): returns True if this tensor resides in pinned memory
x = torch.Tensor(3, 3)
print(x.is_pinned())
False

4.3.8 Converting between tensors and NumPy

# 1. numpy() → ndarray: returns this tensor as a NumPy ndarray; the two share the same underlying memory. Changes to the tensor are reflected in the ndarray, and vice versa (see the check after this example).
a = torch.FloatTensor(2,3).numpy()
print(type(a))
<class 'numpy.ndarray'>
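
The memory sharing is easy to verify: an in-place change to the tensor shows up in the ndarray (a quick check):

a = torch.Tensor([1, 2, 3])
b = a.numpy()
a[0] = 10  # modify the tensor in place
print(b)   # the ndarray reflects the change
[10.  2.  3.]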

4.3.9 Slicing tensors

# 1. select(dim, index) → Tensor or number: slices the tensor along the chosen dimension at the given index.
a = torch.randn(4,4)
print(a)

print(a.select(0, 2))
print(a.select(1, 2))
tensor([[ 1.8906, -0.0250,  1.2072,  0.0943],
        [ 0.1206,  0.1571,  0.4906, -0.3948],
        [-0.1146, -0.6196,  0.1902,  0.5163],
        [ 0.2262,  0.8488,  2.7417, -1.4334]])
tensor([-0.1146, -0.6196,  0.1902,  0.5163])
tensor([ 1.2072,  0.4906,  0.1902,  2.7417])
# 2. unfold(dim, size, step) → Tensor: returns a tensor containing all slices of size size along dimension dim, taken every step elements.
a = torch.arange(1,8)
print(a)
print(a.unfold(0, 2, 1))
print(a.unfold(0, 3, 1))
print(a.unfold(0, 2, 2))
tensor([ 1.,  2.,  3.,  4.,  5.,  6.,  7.])
tensor([[ 1.,  2.],
        [ 2.,  3.],
        [ 3.,  4.],
        [ 4.,  5.],
        [ 5.,  6.],
        [ 6.,  7.]])
tensor([[ 1.,  2.,  3.],
        [ 2.,  3.,  4.],
        [ 3.,  4.,  5.],
        [ 4.,  5.,  6.],
        [ 5.,  6.,  7.]])
tensor([[ 1.,  2.],
        [ 3.,  4.],
        [ 5.,  6.]])

4.3.10 Applying functions to tensors

apply_(callable) → Tensor: applies the function callable to each element of the tensor, replacing each element with the value returned by callable.

map_(tensor, callable): applies callable to each pair of elements from this tensor and the given tensor, and stores the results in this tensor. A sketch of both follows.
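
A minimal sketch of both (note that apply_ and map_ call back into Python for every element, so they are slow and work only on CPU tensors):

a = torch.Tensor([1, 2, 3])
a.apply_(lambda v: v * 2)      # each element is replaced by callable(element)
print(a)

b = torch.Tensor([10, 20, 30])
a.map_(b, lambda x, y: x + y)  # element-wise: a[i] = callable(a[i], b[i])
print(a)
tensor([  2.,   4.,   6.])
tensor([ 12.,  24.,  36.])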
