Using a PyTorch neural network for the classic Boston house-price prediction problem

This walkthrough uses PyTorch and a multilayer feed-forward neural network to tackle the classic problem of predicting Boston house prices.

Introduction to the Boston housing price dataset

The Boston housing price dataset is a classic machine learning dataset used to predict the median home price in the Boston area. It contains 506 samples, each with 13 features describing the towns, such as the crime rate, the proportion of residential land, and the proportion of non-retail business land in each town. The target variable is the median home price in thousands of dollars.

The features in the Boston housing price dataset are:

CRIM: per-capita crime rate by town
ZN: proportion of residential land zoned for lots over 25,000 square feet
INDUS: proportion of non-retail business acres per town
CHAS: Charles River dummy variable (1 if the tract borders the river; 0 otherwise)
NOX: nitric oxide concentration (parts per 10 million)
RM: average number of rooms per dwelling
AGE: proportion of owner-occupied units built before 1940
DIS: weighted distance to five Boston employment centers
RAD: index of accessibility to radial highways
TAX: full-value property tax rate per $10,000
PTRATIO: pupil-teacher ratio by town
B: computed as 1000(Bk - 0.63)^2, where Bk is the proportion of Black residents by town
LSTAT: percentage of lower-status population

The dataset is frequently used for training and testing in regression problems aimed at predicting the median house price, and it is widely used in machine learning and data science teaching and practice to evaluate the performance of different algorithms and models.

1. Import the required libraries and modules

import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

2. Prepare the dataset

# Load the Boston housing dataset
data = load_boston()
X, y = data['data'], data['target']
C:\Users\Admin\AppData\Roaming\Python\Python37\site-packages\sklearn\utils\deprecation.py:87: FutureWarning: Function load_boston is deprecated; `load_boston` is deprecated in 1.0 and will be removed in 1.2.

    The Boston housing prices dataset has an ethical problem. You can refer to
    the documentation of this function for further details.

    The scikit-learn maintainers therefore strongly discourage the use of this
    dataset unless the purpose of the code is to study and educate about
    ethical issues in data science and machine learning.

    In this special case, you can fetch the dataset from the original
    source::

        import pandas as pd
        import numpy as np


        data_url = "http://lib.stat.cmu.edu/datasets/boston"
        raw_df = pd.read_csv(data_url, sep="\s+", skiprows=22, header=None)
        data = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
        target = raw_df.values[1::2, 2]

    Alternative datasets include the California housing dataset (i.e.
    func:`~sklearn.datasets.fetch_california_housing`) and the Ames housing
    dataset. You can load the datasets as follows:

        from sklearn.datasets import fetch_california_housing
        housing = fetch_california_housing()

    for the California housing dataset and:

        from sklearn.datasets import fetch_openml
        housing = fetch_openml(name="house_prices", as_frame=True)

    for the Ames housing dataset.
    
  warnings.warn(msg, category=FutureWarning)

You can ignore this warning, or update the code following the suggestions in the warning message.
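
If you prefer not to depend on the deprecated loader, the data can also be fetched from the original source, exactly as the warning suggests. A minimal sketch (it assumes pandas is available and the CMU mirror is reachable; the resulting X and y match the shapes used below):

import pandas as pd
import numpy as np

# Fetch the raw Boston data and rebuild the 13-column feature matrix and target vector
data_url = "http://lib.stat.cmu.edu/datasets/boston"
raw_df = pd.read_csv(data_url, sep=r"\s+", skiprows=22, header=None)
X = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])  # features
y = raw_df.values[1::2, 2]                                       # median price (MEDV)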

3. Split into training and test sets

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

4. Normalize the data

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

5. Convert to PyTorch tensors

X_train = torch.tensor(X_train, dtype=torch.float32)
X_test = torch.tensor(X_test, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.float32).view(-1, 1)
y_test = torch.tensor(y_test, dtype=torch.float32).view(-1, 1)
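
As a quick sanity check, the resulting tensor shapes should reflect the 80/20 split of the 506 samples (a sketch; the expected counts of 404 and 102 are an assumption based on that split):

print(X_train.shape, y_train.shape)  # expected: torch.Size([404, 13]) torch.Size([404, 1])
print(X_test.shape, y_test.shape)    # expected: torch.Size([102, 13]) torch.Size([102, 1])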

6. Define the neural network model

class FeedforwardNN(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(FeedforwardNN, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        return x
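
Before training, it can be worth confirming that the network maps a batch of 13-feature inputs to one prediction per sample. A minimal sketch (the dummy batch of 4 random samples is arbitrary and not part of the original code):

# Forward a dummy batch through an untrained instance to check the output shape
_check_model = FeedforwardNN(input_dim=13, hidden_dim=64, output_dim=1)
dummy = torch.randn(4, 13)           # 4 samples, 13 features each
print(_check_model(dummy).shape)     # torch.Size([4, 1]): one predicted price per sample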

7. Define the training and evaluation functions

def train(model, criterion, optimizer, X, y, num_epochs=100, batch_size=32):
    model.train()
    num_samples = X.shape[0]
    num_batches = num_samples // batch_size  # integer division: samples beyond the last full batch are skipped

    for epoch in range(num_epochs):
        total_loss = 0
        for batch_idx in range(num_batches):
            start_idx = batch_idx * batch_size
            end_idx = start_idx + batch_size
            batch_X = X[start_idx:end_idx]
            batch_y = y[start_idx:end_idx]

            optimizer.zero_grad()
            outputs = model(batch_X)
            loss = criterion(outputs, batch_y)
            loss.backward()
            optimizer.step()

            total_loss += loss.item()

        print(f"Epoch {
      
      epoch + 1}/{
      
      num_epochs}, Loss: {
      
      total_loss / num_batches:.4f}")

def evaluate(model, criterion, X, y):
    model.eval()
    with torch.no_grad():
        outputs = model(X)
        loss = criterion(outputs, y)
        rmse = torch.sqrt(loss)
        mae = torch.mean(torch.abs(outputs - y))
    return loss.item(), rmse.item(), mae.item()
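
The manual index slicing above works, but an equivalent loop can also be written with torch.utils.data.TensorDataset and DataLoader, which shuffle the data and handle the last partial batch automatically. The following is a sketch of that alternative, not the author's original code:

from torch.utils.data import TensorDataset, DataLoader

def train_with_loader(model, criterion, optimizer, X, y, num_epochs=100, batch_size=32):
    # Wrap the tensors in a DataLoader that shuffles and batches them
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    model.train()
    for epoch in range(num_epochs):
        total_loss = 0.0
        for batch_X, batch_y in loader:
            optimizer.zero_grad()
            loss = criterion(model(batch_X), batch_y)
            loss.backward()
            optimizer.step()
            total_loss += loss.item()
        print(f"Epoch {epoch + 1}/{num_epochs}, Loss: {total_loss / len(loader):.4f}")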

8. Set the model parameters

# Set the model parameters
input_dim = X_train.shape[1]
hidden_dim = 64
output_dim = 1

9. Initialize the model

model = FeedforwardNN(input_dim, hidden_dim, output_dim)

10. Define the loss function and the optimizer

criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

11. Train the model

train(model, criterion, optimizer, X_train, y_train, num_epochs=500, batch_size=32)
Epoch 1/500, Loss: 7.6713
Epoch 2/500, Loss: 7.6533
Epoch 3/500, Loss: 7.6367
Epoch 4/500, Loss: 7.6191
Epoch 5/500, Loss: 7.6038
Epoch 6/500, Loss: 7.5861
Epoch 7/500, Loss: 7.5701
Epoch 8/500, Loss: 7.5533
Epoch 9/500, Loss: 7.5396
Epoch 10/500, Loss: 7.5219
Epoch 11/500, Loss: 7.5071
Epoch 12/500, Loss: 7.4910
Epoch 13/500, Loss: 7.4769
Epoch 14/500, Loss: 7.4604
Epoch 15/500, Loss: 7.4455
Epoch 16/500, Loss: 7.4301
Epoch 17/500, Loss: 7.4159
Epoch 18/500, Loss: 7.3997
Epoch 19/500, Loss: 7.3860
Epoch 20/500, Loss: 7.3718
Epoch 21/500, Loss: 7.3559
Epoch 22/500, Loss: 7.3419
Epoch 23/500, Loss: 7.3277
Epoch 24/500, Loss: 7.3155
Epoch 25/500, Loss: 7.3003
Epoch 26/500, Loss: 7.2862
Epoch 27/500, Loss: 7.2728
Epoch 28/500, Loss: 7.2588
Epoch 29/500, Loss: 7.2454
Epoch 30/500, Loss: 7.2323
Epoch 31/500, Loss: 7.2186
Epoch 32/500, Loss: 7.2040
Epoch 33/500, Loss: 7.1909
Epoch 34/500, Loss: 7.1771
Epoch 35/500, Loss: 7.1646
Epoch 36/500, Loss: 7.1500
Epoch 37/500, Loss: 7.1361
Epoch 38/500, Loss: 7.1248
Epoch 39/500, Loss: 7.1110
Epoch 40/500, Loss: 7.0965
Epoch 41/500, Loss: 7.0860
Epoch 42/500, Loss: 7.0732
Epoch 43/500, Loss: 7.0594
Epoch 44/500, Loss: 7.0482
Epoch 45/500, Loss: 7.0353
Epoch 46/500, Loss: 7.0239
Epoch 47/500, Loss: 7.0115
Epoch 48/500, Loss: 6.9981
Epoch 49/500, Loss: 6.9871
Epoch 50/500, Loss: 6.9741
Epoch 51/500, Loss: 6.9618
Epoch 52/500, Loss: 6.9509
Epoch 53/500, Loss: 6.9365
Epoch 54/500, Loss: 6.9262
Epoch 55/500, Loss: 6.9139
Epoch 56/500, Loss: 6.9012
Epoch 57/500, Loss: 6.8910
Epoch 58/500, Loss: 6.8762
Epoch 59/500, Loss: 6.8628
Epoch 60/500, Loss: 6.8507
Epoch 61/500, Loss: 6.8350
Epoch 62/500, Loss: 6.8210
Epoch 63/500, Loss: 6.8089
Epoch 64/500, Loss: 6.7953
Epoch 65/500, Loss: 6.7840
Epoch 66/500, Loss: 6.7698
Epoch 67/500, Loss: 6.7570
Epoch 68/500, Loss: 6.7478
Epoch 69/500, Loss: 6.7344
Epoch 70/500, Loss: 6.7222
Epoch 71/500, Loss: 6.7105
Epoch 72/500, Loss: 6.6986
Epoch 73/500, Loss: 6.6863
Epoch 74/500, Loss: 6.6732
Epoch 75/500, Loss: 6.6629
Epoch 76/500, Loss: 6.6499
Epoch 77/500, Loss: 6.6392
Epoch 78/500, Loss: 6.6261
Epoch 79/500, Loss: 6.6136
Epoch 80/500, Loss: 6.6037
Epoch 81/500, Loss: 6.5918
Epoch 82/500, Loss: 6.5786
Epoch 83/500, Loss: 6.5673
Epoch 84/500, Loss: 6.5566
Epoch 85/500, Loss: 6.5462
Epoch 86/500, Loss: 6.5339
Epoch 87/500, Loss: 6.5224
Epoch 88/500, Loss: 6.5124
Epoch 89/500, Loss: 6.5001
Epoch 90/500, Loss: 6.4900
Epoch 91/500, Loss: 6.4794
Epoch 92/500, Loss: 6.4690
Epoch 93/500, Loss: 6.4578
Epoch 94/500, Loss: 6.4471
Epoch 95/500, Loss: 6.4381
Epoch 96/500, Loss: 6.4284
Epoch 97/500, Loss: 6.4176
Epoch 98/500, Loss: 6.4070
Epoch 99/500, Loss: 6.3981
Epoch 100/500, Loss: 6.3892
Epoch 101/500, Loss: 6.3782
Epoch 102/500, Loss: 6.3686
Epoch 103/500, Loss: 6.3586
Epoch 104/500, Loss: 6.3521
Epoch 105/500, Loss: 6.3401
Epoch 106/500, Loss: 6.3315
Epoch 107/500, Loss: 6.3212
Epoch 108/500, Loss: 6.3127
Epoch 109/500, Loss: 6.3046
Epoch 110/500, Loss: 6.2946
Epoch 111/500, Loss: 6.2848
Epoch 112/500, Loss: 6.2760
Epoch 113/500, Loss: 6.2675
Epoch 114/500, Loss: 6.2588
Epoch 115/500, Loss: 6.2495
Epoch 116/500, Loss: 6.2413
Epoch 117/500, Loss: 6.2320
Epoch 118/500, Loss: 6.2219
Epoch 119/500, Loss: 6.2147
Epoch 120/500, Loss: 6.2047
Epoch 121/500, Loss: 6.1957
Epoch 122/500, Loss: 6.1858
Epoch 123/500, Loss: 6.1764
Epoch 124/500, Loss: 6.1671
Epoch 125/500, Loss: 6.1602
Epoch 126/500, Loss: 6.1496
Epoch 127/500, Loss: 6.1408
Epoch 128/500, Loss: 6.1315
Epoch 129/500, Loss: 6.1248
Epoch 130/500, Loss: 6.1140
Epoch 131/500, Loss: 6.1068
Epoch 132/500, Loss: 6.0980
Epoch 133/500, Loss: 6.0892
Epoch 134/500, Loss: 6.0806
Epoch 135/500, Loss: 6.0731
Epoch 136/500, Loss: 6.0651
Epoch 137/500, Loss: 6.0563
Epoch 138/500, Loss: 6.0487
Epoch 139/500, Loss: 6.0428
Epoch 140/500, Loss: 6.0331
Epoch 141/500, Loss: 6.0275
Epoch 142/500, Loss: 6.0188
Epoch 143/500, Loss: 6.0125
Epoch 144/500, Loss: 6.0041
Epoch 145/500, Loss: 5.9995
Epoch 146/500, Loss: 5.9901
Epoch 147/500, Loss: 5.9834
Epoch 148/500, Loss: 5.9781
Epoch 149/500, Loss: 5.9689
Epoch 150/500, Loss: 5.9638
Epoch 151/500, Loss: 5.9542
Epoch 152/500, Loss: 5.9498
Epoch 153/500, Loss: 5.9417
Epoch 154/500, Loss: 5.9355
Epoch 155/500, Loss: 5.9283
Epoch 156/500, Loss: 5.9228
Epoch 157/500, Loss: 5.9137
Epoch 158/500, Loss: 5.9079
Epoch 159/500, Loss: 5.8998
Epoch 160/500, Loss: 5.8935
Epoch 161/500, Loss: 5.8862
Epoch 162/500, Loss: 5.8799
Epoch 163/500, Loss: 5.8727
Epoch 164/500, Loss: 5.8673
Epoch 165/500, Loss: 5.8595
Epoch 166/500, Loss: 5.8540
Epoch 167/500, Loss: 5.8460
Epoch 168/500, Loss: 5.8405
Epoch 169/500, Loss: 5.8328
Epoch 170/500, Loss: 5.8278
Epoch 171/500, Loss: 5.8194
Epoch 172/500, Loss: 5.8159
Epoch 173/500, Loss: 5.8087
Epoch 174/500, Loss: 5.8011
Epoch 175/500, Loss: 5.7945
Epoch 176/500, Loss: 5.7897
Epoch 177/500, Loss: 5.7834
Epoch 178/500, Loss: 5.7748
Epoch 179/500, Loss: 5.7701
Epoch 180/500, Loss: 5.7621
Epoch 181/500, Loss: 5.7586
Epoch 182/500, Loss: 5.7515
Epoch 183/500, Loss: 5.7426
Epoch 184/500, Loss: 5.7382
Epoch 185/500, Loss: 5.7301
Epoch 186/500, Loss: 5.7249
Epoch 187/500, Loss: 5.7165
Epoch 188/500, Loss: 5.7118
Epoch 189/500, Loss: 5.7042
Epoch 190/500, Loss: 5.6969
Epoch 191/500, Loss: 5.6916
Epoch 192/500, Loss: 5.6836
Epoch 193/500, Loss: 5.6790
Epoch 194/500, Loss: 5.6699
Epoch 195/500, Loss: 5.6653
Epoch 196/500, Loss: 5.6584
Epoch 197/500, Loss: 5.6511
Epoch 198/500, Loss: 5.6476
Epoch 199/500, Loss: 5.6388
Epoch 200/500, Loss: 5.6354
Epoch 201/500, Loss: 5.6268
Epoch 202/500, Loss: 5.6211
Epoch 203/500, Loss: 5.6145
Epoch 204/500, Loss: 5.6094
Epoch 205/500, Loss: 5.6006
Epoch 206/500, Loss: 5.5967
Epoch 207/500, Loss: 5.5900
Epoch 208/500, Loss: 5.5822
Epoch 209/500, Loss: 5.5770
Epoch 210/500, Loss: 5.5698
Epoch 211/500, Loss: 5.5644
Epoch 212/500, Loss: 5.5561
Epoch 213/500, Loss: 5.5518
Epoch 214/500, Loss: 5.5444
Epoch 215/500, Loss: 5.5366
Epoch 216/500, Loss: 5.5314
Epoch 217/500, Loss: 5.5268
Epoch 218/500, Loss: 5.5187
Epoch 219/500, Loss: 5.5131
Epoch 220/500, Loss: 5.5068
Epoch 221/500, Loss: 5.5014
Epoch 222/500, Loss: 5.4941
Epoch 223/500, Loss: 5.4913
Epoch 224/500, Loss: 5.4829
Epoch 225/500, Loss: 5.4784
Epoch 226/500, Loss: 5.4715
Epoch 227/500, Loss: 5.4671
Epoch 228/500, Loss: 5.4601
Epoch 229/500, Loss: 5.4572
Epoch 230/500, Loss: 5.4490
Epoch 231/500, Loss: 5.4446
Epoch 232/500, Loss: 5.4384
Epoch 233/500, Loss: 5.4348
Epoch 234/500, Loss: 5.4285
Epoch 235/500, Loss: 5.4223
Epoch 236/500, Loss: 5.4176
Epoch 237/500, Loss: 5.4119
Epoch 238/500, Loss: 5.4079
Epoch 239/500, Loss: 5.4014
Epoch 240/500, Loss: 5.3977
Epoch 241/500, Loss: 5.3904
Epoch 242/500, Loss: 5.3862
Epoch 243/500, Loss: 5.3814
Epoch 244/500, Loss: 5.3757
Epoch 245/500, Loss: 5.3704
Epoch 246/500, Loss: 5.3649
Epoch 247/500, Loss: 5.3605
Epoch 248/500, Loss: 5.3544
Epoch 249/500, Loss: 5.3508
Epoch 250/500, Loss: 5.3437
Epoch 251/500, Loss: 5.3401
Epoch 252/500, Loss: 5.3322
Epoch 253/500, Loss: 5.3285
Epoch 254/500, Loss: 5.3220
Epoch 255/500, Loss: 5.3158
Epoch 256/500, Loss: 5.3105
Epoch 257/500, Loss: 5.3039
Epoch 258/500, Loss: 5.2994
Epoch 259/500, Loss: 5.2937
Epoch 260/500, Loss: 5.2889
Epoch 261/500, Loss: 5.2810
Epoch 262/500, Loss: 5.2789
Epoch 263/500, Loss: 5.2728
Epoch 264/500, Loss: 5.2654
Epoch 265/500, Loss: 5.2600
Epoch 266/500, Loss: 5.2539
Epoch 267/500, Loss: 5.2494
Epoch 268/500, Loss: 5.2418
Epoch 269/500, Loss: 5.2374
Epoch 270/500, Loss: 5.2297
Epoch 271/500, Loss: 5.2260
Epoch 272/500, Loss: 5.2195
Epoch 273/500, Loss: 5.2145
Epoch 274/500, Loss: 5.2074
Epoch 275/500, Loss: 5.2024
Epoch 276/500, Loss: 5.1976
Epoch 277/500, Loss: 5.1900
Epoch 278/500, Loss: 5.1856
Epoch 279/500, Loss: 5.1795
Epoch 280/500, Loss: 5.1757
Epoch 281/500, Loss: 5.1690
Epoch 282/500, Loss: 5.1647
Epoch 283/500, Loss: 5.1580
Epoch 284/500, Loss: 5.1540
Epoch 285/500, Loss: 5.1486
Epoch 286/500, Loss: 5.1452
Epoch 287/500, Loss: 5.1385
Epoch 288/500, Loss: 5.1349
Epoch 289/500, Loss: 5.1301
Epoch 290/500, Loss: 5.1254
Epoch 291/500, Loss: 5.1208
Epoch 292/500, Loss: 5.1149
Epoch 293/500, Loss: 5.1120
Epoch 294/500, Loss: 5.1068
Epoch 295/500, Loss: 5.1030
Epoch 296/500, Loss: 5.0981
Epoch 297/500, Loss: 5.0925
Epoch 298/500, Loss: 5.0896
Epoch 299/500, Loss: 5.0844
Epoch 300/500, Loss: 5.0810
Epoch 301/500, Loss: 5.0757
Epoch 302/500, Loss: 5.0706
Epoch 303/500, Loss: 5.0670
Epoch 304/500, Loss: 5.0618
Epoch 305/500, Loss: 5.0584
Epoch 306/500, Loss: 5.0533
Epoch 307/500, Loss: 5.0499
Epoch 308/500, Loss: 5.0440
Epoch 309/500, Loss: 5.0412
Epoch 310/500, Loss: 5.0359
Epoch 311/500, Loss: 5.0297
Epoch 312/500, Loss: 5.0271
Epoch 313/500, Loss: 5.0206
Epoch 314/500, Loss: 5.0179
Epoch 315/500, Loss: 5.0127
Epoch 316/500, Loss: 5.0063
Epoch 317/500, Loss: 5.0025
Epoch 318/500, Loss: 4.9961
Epoch 319/500, Loss: 4.9925
Epoch 320/500, Loss: 4.9870
Epoch 321/500, Loss: 4.9816
Epoch 322/500, Loss: 4.9774
Epoch 323/500, Loss: 4.9718
Epoch 324/500, Loss: 4.9690
Epoch 325/500, Loss: 4.9634
Epoch 326/500, Loss: 4.9600
Epoch 327/500, Loss: 4.9557
Epoch 328/500, Loss: 4.9497
Epoch 329/500, Loss: 4.9470
Epoch 330/500, Loss: 4.9420
Epoch 331/500, Loss: 4.9392
Epoch 332/500, Loss: 4.9343
Epoch 333/500, Loss: 4.9289
Epoch 334/500, Loss: 4.9265
Epoch 335/500, Loss: 4.9225
Epoch 336/500, Loss: 4.9191
Epoch 337/500, Loss: 4.9143
Epoch 338/500, Loss: 4.9098
Epoch 339/500, Loss: 4.9061
Epoch 340/500, Loss: 4.9012
Epoch 341/500, Loss: 4.8987
Epoch 342/500, Loss: 4.8925
Epoch 343/500, Loss: 4.8909
Epoch 344/500, Loss: 4.8861
Epoch 345/500, Loss: 4.8809
Epoch 346/500, Loss: 4.8776
Epoch 347/500, Loss: 4.8720
Epoch 348/500, Loss: 4.8688
Epoch 349/500, Loss: 4.8648
Epoch 350/500, Loss: 4.8588
Epoch 351/500, Loss: 4.8551
Epoch 352/500, Loss: 4.8507
Epoch 353/500, Loss: 4.8480
Epoch 354/500, Loss: 4.8435
Epoch 355/500, Loss: 4.8379
Epoch 356/500, Loss: 4.8354
Epoch 357/500, Loss: 4.8316
Epoch 358/500, Loss: 4.8261
Epoch 359/500, Loss: 4.8241
Epoch 360/500, Loss: 4.8184
Epoch 361/500, Loss: 4.8157
Epoch 362/500, Loss: 4.8125
Epoch 363/500, Loss: 4.8074
Epoch 364/500, Loss: 4.8043
Epoch 365/500, Loss: 4.7990
Epoch 366/500, Loss: 4.7977
Epoch 367/500, Loss: 4.7932
Epoch 368/500, Loss: 4.7878
Epoch 369/500, Loss: 4.7859
Epoch 370/500, Loss: 4.7827
Epoch 371/500, Loss: 4.7775
Epoch 372/500, Loss: 4.7755
Epoch 373/500, Loss: 4.7704
Epoch 374/500, Loss: 4.7683
Epoch 375/500, Loss: 4.7643
Epoch 376/500, Loss: 4.7599
Epoch 377/500, Loss: 4.7584
Epoch 378/500, Loss: 4.7536
Epoch 379/500, Loss: 4.7489
Epoch 380/500, Loss: 4.7471
Epoch 381/500, Loss: 4.7434
Epoch 382/500, Loss: 4.7381
Epoch 383/500, Loss: 4.7366
Epoch 384/500, Loss: 4.7324
Epoch 385/500, Loss: 4.7282
Epoch 386/500, Loss: 4.7255
Epoch 387/500, Loss: 4.7224
Epoch 388/500, Loss: 4.7180
Epoch 389/500, Loss: 4.7157
Epoch 390/500, Loss: 4.7106
Epoch 391/500, Loss: 4.7096
Epoch 392/500, Loss: 4.7055
Epoch 393/500, Loss: 4.7005
Epoch 394/500, Loss: 4.6988
Epoch 395/500, Loss: 4.6958
Epoch 396/500, Loss: 4.6909
Epoch 397/500, Loss: 4.6896
Epoch 398/500, Loss: 4.6861
Epoch 399/500, Loss: 4.6805
Epoch 400/500, Loss: 4.6785
Epoch 401/500, Loss: 4.6765
Epoch 402/500, Loss: 4.6718
Epoch 403/500, Loss: 4.6694
Epoch 404/500, Loss: 4.6659
Epoch 405/500, Loss: 4.6616
Epoch 406/500, Loss: 4.6601
Epoch 407/500, Loss: 4.6558
Epoch 408/500, Loss: 4.6520
Epoch 409/500, Loss: 4.6503
Epoch 410/500, Loss: 4.6458
Epoch 411/500, Loss: 4.6415
Epoch 412/500, Loss: 4.6393
Epoch 413/500, Loss: 4.6360
Epoch 414/500, Loss: 4.6319
Epoch 415/500, Loss: 4.6295
Epoch 416/500, Loss: 4.6258
Epoch 417/500, Loss: 4.6210
Epoch 418/500, Loss: 4.6195
Epoch 419/500, Loss: 4.6164
Epoch 420/500, Loss: 4.6110
Epoch 421/500, Loss: 4.6090
Epoch 422/500, Loss: 4.6056
Epoch 423/500, Loss: 4.6016
Epoch 424/500, Loss: 4.5987
Epoch 425/500, Loss: 4.5957
Epoch 426/500, Loss: 4.5912
Epoch 427/500, Loss: 4.5901
Epoch 428/500, Loss: 4.5860
Epoch 429/500, Loss: 4.5819
Epoch 430/500, Loss: 4.5790
Epoch 431/500, Loss: 4.5764
Epoch 432/500, Loss: 4.5725
Epoch 433/500, Loss: 4.5704
Epoch 434/500, Loss: 4.5670
Epoch 435/500, Loss: 4.5631
Epoch 436/500, Loss: 4.5614
Epoch 437/500, Loss: 4.5584
Epoch 438/500, Loss: 4.5543
Epoch 439/500, Loss: 4.5525
Epoch 440/500, Loss: 4.5480
Epoch 441/500, Loss: 4.5468
Epoch 442/500, Loss: 4.5425
Epoch 443/500, Loss: 4.5391
Epoch 444/500, Loss: 4.5377
Epoch 445/500, Loss: 4.5347
Epoch 446/500, Loss: 4.5304
Epoch 447/500, Loss: 4.5291
Epoch 448/500, Loss: 4.5257
Epoch 449/500, Loss: 4.5212
Epoch 450/500, Loss: 4.5206
Epoch 451/500, Loss: 4.5174
Epoch 452/500, Loss: 4.5135
Epoch 453/500, Loss: 4.5117
Epoch 454/500, Loss: 4.5086
Epoch 455/500, Loss: 4.5052
Epoch 456/500, Loss: 4.5027
Epoch 457/500, Loss: 4.4995
Epoch 458/500, Loss: 4.4963
Epoch 459/500, Loss: 4.4940
Epoch 460/500, Loss: 4.4895
Epoch 461/500, Loss: 4.4880
Epoch 462/500, Loss: 4.4848
Epoch 463/500, Loss: 4.4806
Epoch 464/500, Loss: 4.4787
Epoch 465/500, Loss: 4.4740
Epoch 466/500, Loss: 4.4733
Epoch 467/500, Loss: 4.4693
Epoch 468/500, Loss: 4.4659
Epoch 469/500, Loss: 4.4641
Epoch 470/500, Loss: 4.4593
Epoch 471/500, Loss: 4.4580
Epoch 472/500, Loss: 4.4546
Epoch 473/500, Loss: 4.4510
Epoch 474/500, Loss: 4.4487
Epoch 475/500, Loss: 4.4448
Epoch 476/500, Loss: 4.4432
Epoch 477/500, Loss: 4.4394
Epoch 478/500, Loss: 4.4366
Epoch 479/500, Loss: 4.4344
Epoch 480/500, Loss: 4.4296
Epoch 481/500, Loss: 4.4279
Epoch 482/500, Loss: 4.4243
Epoch 483/500, Loss: 4.4202
Epoch 484/500, Loss: 4.4172
Epoch 485/500, Loss: 4.4134
Epoch 486/500, Loss: 4.4121
Epoch 487/500, Loss: 4.4074
Epoch 488/500, Loss: 4.4039
Epoch 489/500, Loss: 4.4017
Epoch 490/500, Loss: 4.3980
Epoch 491/500, Loss: 4.3955
Epoch 492/500, Loss: 4.3911
Epoch 493/500, Loss: 4.3900
Epoch 494/500, Loss: 4.3859
Epoch 495/500, Loss: 4.3812
Epoch 496/500, Loss: 4.3805
Epoch 497/500, Loss: 4.3768
Epoch 498/500, Loss: 4.3754
Epoch 499/500, Loss: 4.3711
Epoch 500/500, Loss: 4.3695

12. Evaluate the model

test_loss, test_rmse, test_mae = evaluate(model, criterion, X_test, y_test)
print(f"Test Loss: {
      
      test_loss:.4f}, Test RMSE: {
      
      test_rmse:.4f}, Test MAE: {
      
      test_mae:.4f}")

Test Loss: 12.0768, Test RMSE: 3.4752, Test MAE: 2.2279
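
Beyond the aggregate metrics, the trained model can be queried directly to compare a few predicted prices (in thousands of dollars) against the true values. A small sketch (the exact numbers will vary from run to run):

model.eval()
with torch.no_grad():
    preds = model(X_test[:5])  # predictions for the first 5 test samples
for pred, true in zip(preds.view(-1).tolist(), y_test[:5].view(-1).tolist()):
    print(f"predicted: {pred:.2f}, actual: {true:.2f}")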

Source: blog.csdn.net/programmer589/article/details/132136673