5. Hands-on practice: CIFAR-10 classification

Unlike other tutorials, this one loads the dataset from a locally downloaded copy (downloading it from inside the code is too slow). For a description of the dataset, see this link.

1. Download the dataset, e.g. by pasting this link into a download manager such as Thunder (Xunlei): http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz

2. Extract it to the E:/data directory (a quick check that the extraction landed in the right place follows this list).

3. Run the code below in Jupyter.
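As the check mentioned in step 2, here is a minimal sketch (assuming the E:/data path from step 2): when download=False, torchvision looks for the folder cifar-10-batches-py under the root directory, which is what the tar.gz extracts to.

import os

root = "E:/data"                                      # the path used in the code below
expected = os.path.join(root, "cifar-10-batches-py")  # folder produced by extracting the tar.gz
print("Extracted dataset found:", os.path.isdir(expected))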

【Description】

① DataLoader is an iterable object that stitches the individual samples returned by the dataset into batches; it can also shuffle the data and use multiple worker processes to speed up loading.

  Iterating over the DataLoader once traverses all of the data in the dataset exactly once (one epoch).

② Like the dataset, the DataLoader returns data together with label indices, but it returns them one batch at a time: each iteration yields batch_size samples, so the labels come back as a vector of batch_size elements.
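A small self-contained illustration of point ② (a minimal sketch using fake data in place of CIFAR-10; the variable names are only for illustration):

import torch as t
from torch.utils.data import TensorDataset, DataLoader

# 10 fake samples shaped like CIFAR-10 images (3x32x32), with integer labels
fake_data = t.randn(10, 3, 32, 32)
fake_labels = t.randint(0, 10, (10,))
loader = DataLoader(TensorDataset(fake_data, fake_labels), batch_size=4, shuffle=True)

for batch_data, batch_labels in loader:
    # each iteration yields one batch: the images are stacked into a single tensor
    # and the labels come back as a vector of (up to) batch_size elements
    print(batch_data.shape, batch_labels.shape)  # e.g. torch.Size([4, 3, 32, 32]) torch.Size([4])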

【Steps】

1. Use torchvision to load and preprocess the data set
2. Define the network
3. Define the loss function and optimizer
4. Train the network and update the network parameters
5. Test the network

# ####################################### 1. Use torchvision to load and preprocess the dataset
import torch as t
import torchvision as tv
import torchvision.transforms as transforms
from torchvision.transforms import ToPILImage

# define the data preprocessing
transform = transforms.Compose([
    transforms.ToTensor(),                                    # convert to Tensor
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])  # normalize to [-1, 1]
# training set
# torchvision datasets return PIL Images with pixel values in the range [0, 1];
# the transform converts them to Tensors normalized to the range [-1, 1]
trainset = tv.datasets.CIFAR10(
    root="E:/data",         # note: my local path
    train=True,
    download=False,         # careful not to download again
    transform=transform)    # when loading, convert each image to a Tensor normalized to [-1, 1]
trainloader = t.utils.data.DataLoader(
    trainset,
    batch_size=4,           # batch size
    shuffle=True,           # shuffle the data
    num_workers=2)          # 2 worker processes
# test set
testset = tv.datasets.CIFAR10(
    root="E:/data",
    train=False,
    download=False,
    transform=transform)
testloader = t.utils.data.DataLoader(
    testset,
    batch_size=4,
    shuffle=False,
    num_workers=2)
# class labels
classes = ("plane", "car", "bird", "cat", "deer", "dog",
           "frog", "horse", "ship", "truck")

Display the first picture in the dataset

(data, label) = trainset[0]               # the first element of the dataset: the image data and the label index
print(classes[label])                     # label is 6, i.e. "frog"
show = ToPILImage()                       # converts a Tensor to a PIL Image for easy visualization
show((data + 1) / 2).resize((100, 100))   # (data + 1) / 2 denormalizes [-1, 1] back to [0, 1]; resize enlarges the 32x32 image to 100x100

Display the first batch of pictures from the DataLoader (4 pictures per batch)

# randomly get four training images (the trainloader groups four samples into one batch)
dataiter = iter(trainloader)
(images, labels) = next(dataiter)  # one batch at a time: the DataLoader returns the same kind of (data, label) pairs as the dataset
print(labels)
# print the corresponding class names
print(' '.join('%s' % classes[labels[j]] for j in range(4)))
# display the images
show(tv.utils.make_grid((images + 1) / 2)).resize((400, 100))  # display all 4 images in a grid
# show((images[0] + 1) / 2).resize((100, 100))                 # display only the first image
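The code above covers step 1 of the list. The remaining steps (define the network, define the loss function and optimizer, train, test) can be sketched as follows; this is a minimal sketch assuming the trainloader and testloader defined above, and the small LeNet-style network is an illustrative choice, not necessarily the exact one used later in the tutorial.

import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# step 2: define the network (LeNet-style, 10 output classes)
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)       # 3 input channels, 6 output channels, 5x5 kernel
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)            # flatten to a 400-dimensional vector
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

net = Net()

# step 3: loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

# step 4: train the network for a couple of epochs
for epoch in range(2):
    for i, (images, labels) in enumerate(trainloader):
        optimizer.zero_grad()
        loss = criterion(net(images), labels)
        loss.backward()
        optimizer.step()

# step 5: test the network on the test set
correct, total = 0, 0
with t.no_grad():
    for images, labels in testloader:
        predicted = net(images).argmax(dim=1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy on the 10000 test images: %d %%' % (100 * correct / total))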

 
