3.2 Linear Regression Implementation from Scratch

3.2.1 Generating the Dataset

We generate a dataset of 1000 samples, each with 2 features. Using the true weights $\boldsymbol{w} = [2, -3.4]^\top$ and bias $b = 4.2$ of a linear regression model, together with a random noise term $\epsilon$, the labels are generated as

$$\boldsymbol{y} = \boldsymbol{X}\boldsymbol{w} + b + \epsilon,$$

where the noise term $\epsilon$ follows a normal distribution with mean 0 and standard deviation 0.01.

MXNet:

from mxnet import nd, autograd  # autograd is used later in the training loop
import random                   # used later by data_iter

num_inputs = 2
num_examples = 1000
true_w = [2, -3.4]
true_b = 4.2

features = nd.random.normal(scale=1, shape=(num_examples, num_inputs))
labels = true_w[0] * features[:, 0] + true_w[1] * features[:, 1] + true_b
labels += nd.random.normal(scale=0.01, shape=labels.shape)

PyTorch:

import torch
import numpy as np
import random  # used later by data_iter

num_inputs = 2
num_examples = 1000

true_w = [2, -3.4]
true_b = 4.2

features = torch.randn(num_examples, num_inputs,
                       dtype=torch.float32)

labels = true_w[0] * features[:, 0] + true_w[1] * features[:, 1] + true_b
labels += torch.tensor(np.random.normal(0, 0.01, size=labels.size()),
                       dtype=torch.float32)

3.2.2 Reading the Data

During training we need to iterate over the dataset and repeatedly read small batches of data samples. Here we define a function: on each call it returns batch_size (batch size) random samples of features and labels.
MXNet:

# This function is saved in the d2lzh package for later use
def data_iter(batch_size, features, labels):
    num_examples = len(features)
    indices = list(range(num_examples))
    random.shuffle(indices)  # samples are read in random order
    for i in range(0, num_examples, batch_size):
        j = nd.array(indices[i: min(i + batch_size, num_examples)])
        yield features.take(j), labels.take(j)  # take returns the elements at the given indices

PyTorch:

# This function is saved in the d2lzh package for later use
def data_iter(batch_size, features, labels):
    num_examples = len(features)
    indices = list(range(num_examples))
    random.shuffle(indices)  # samples are read in random order
    for i in range(0, num_examples, batch_size):
        j = torch.LongTensor(indices[i: min(i + batch_size, num_examples)])  # the last batch may contain fewer examples
        yield features.index_select(0, j), labels.index_select(0, j)

Read the first small batch of 10 data samples and print it:

batch_size = 10

for X, y in data_iter(batch_size, features, labels):
    print(X, y)
    break

3.2.3 Initializing Model Parameters

Initialize the weights as random numbers drawn from a normal distribution with mean 0 and standard deviation 0.01, and initialize the bias to 0.
MXNet:

w = nd.random.normal(scale=0.01, shape=(num_inputs, 1))
b = nd.zeros(shape=(1,))
# Attach gradients (allocate memory for the gradients)
w.attach_grad()
b.attach_grad()

PyTorch:

w = torch.tensor(np.random.normal(0, 0.01, (num_inputs, 1)), dtype=torch.float32)
b = torch.zeros(1, dtype=torch.float32)
# Track gradients for the parameters
w.requires_grad_(requires_grad=True)
b.requires_grad_(requires_grad=True)

3.2.4 Defining the Model

MXNet:

def linreg(X, w, b):  # This function is saved in the d2lzh package for later use
    return nd.dot(X, w) + b

PyTorch:

def linreg(X, w, b):  # This function is saved in the d2lzh package for later use
    return torch.mm(X, w) + b

3.2.5 Defining the Loss Function
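
We use the squared loss; for a single example it is

$$\ell^{(i)}(\boldsymbol{w}, b) = \frac{1}{2}\left(\hat{y}^{(i)} - y^{(i)}\right)^2,$$

which is exactly what the squared_loss function below computes (the true label y is first reshaped to the shape of the prediction y_hat).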

def squared_loss(y_hat, y):  # This function is saved in the d2lzh package for later use
    return (y_hat - y.reshape(y_hat.shape)) ** 2 / 2

3.2.6 Defining the Optimization Algorithm
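
Both implementations below perform minibatch stochastic gradient descent. Because the loss is summed over the examples in a minibatch $\mathcal{B}$, each parameter is updated as

$$(\boldsymbol{w}, b) \leftarrow (\boldsymbol{w}, b) - \frac{\eta}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \partial_{(\boldsymbol{w}, b)} \ell^{(i)}(\boldsymbol{w}, b),$$

where $\eta$ is the learning rate; this is why sgd divides the gradient by batch_size.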

MXNet:

def sgd(params, lr, batch_size):  # This function is saved in the d2lzh package for later use
    for param in params:
        param[:] = param - lr * param.grad / batch_size

PyTorch:

def sgd(params, lr, batch_size):  # This function is saved in the d2lzh_pytorch package for later use
    for param in params:
        param.data -= lr * param.grad / batch_size  # update param.data so the step is not tracked by autograd

3.2.7 Training the Model

MXNet:

lr = 0.03  # learning rate
num_epochs = 3  # number of training epochs

net = linreg
loss = squared_loss

for epoch in range(num_epochs):  # training takes num_epochs epochs in total
    # In each epoch, every sample in the training set is used once
    # (assuming the number of samples is divisible by the batch size).
    # X and y are the features and labels of a minibatch, respectively.
    for X, y in data_iter(batch_size, features, labels):
        with autograd.record():
            l = loss(net(X, w, b), y)  # l is the loss on the minibatch X and y
        l.backward()  # compute the gradient of the minibatch loss w.r.t. the model parameters
        sgd([w, b], lr, batch_size)  # update the parameters with minibatch SGD
    train_l = loss(net(features, w, b), labels)
    print('epoch %d, loss %f' % (epoch + 1, train_l.mean().asnumpy()))

true_w, w, true_b, b  # compare the learned parameters with the true ones

PyTorch:

lr = 0.03  # learning rate
num_epochs = 3  # number of training epochs

net = linreg
loss = squared_loss

for epoch in range(num_epochs):  # training takes num_epochs epochs in total
    # In each epoch, every sample in the training set is used once
    # (assuming the number of samples is divisible by the batch size).
    # X and y are the features and labels of a minibatch, respectively.
    for X, y in data_iter(batch_size, features, labels):
        l = loss(net(X, w, b), y).sum()  # l is the loss on the minibatch X and y
        l.backward()  # compute the gradient of the minibatch loss w.r.t. the model parameters
        sgd([w, b], lr, batch_size)  # update the parameters with minibatch SGD

        # do not forget to reset the gradients to zero
        w.grad.data.zero_()
        b.grad.data.zero_()
    train_l = loss(net(features, w, b), labels)
    print('epoch %d, loss %f' % (epoch + 1, train_l.mean().item()))

true_w, w, true_b, b  # compare the learned parameters with the true ones

Note:
In PyTorch, gradients accumulate across backward passes by default, so they must be cleared manually after each parameter update.
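
A minimal sketch of this behavior (the tensor x and the repeated backward calls are only for illustration and are not part of the model above):

import torch

x = torch.ones(2, requires_grad=True)
y = (x * 2).sum()
y.backward()
print(x.grad)          # tensor([2., 2.])

# Calling backward again without zeroing: new gradients are added to the existing ones.
y = (x * 2).sum()
y.backward()
print(x.grad)          # tensor([4., 4.])

x.grad.data.zero_()    # reset, as done after each sgd step in the training loop above
print(x.grad)          # tensor([0., 0.])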
