Huawei Cloud Yunyao Cloud Server L Instance Review | Training a Handwritten Digit Recognition Model on the Server and Deploying It for Remote Calls

Table of contents

Overview of this article

About the author

Step 1: Purchase a server and log in to it remotely

Step 2: Configure the environment and train the handwritten digit recognition network 

Step 3: Deploy the handwritten digit recognition network to Yunyao Cloud Server L instance

Step 4: Start the local client and perform handwritten digit recognition


Overview of this article

The Huawei Cloud Yunyao Cloud Server L instance is a lightweight server that is beginner-friendly, ready to use out of the box, and easy to deploy on. In this article, the author walks through a complete workflow on a newly purchased Yunyao Cloud Server L instance: training a handwritten digit recognition neural network on the instance, then deploying the trained model on the same instance as a digit recognition service that can be called remotely. The model reaches 99.3% accuracy on the test set, and the neural network code, server code, and client code are all provided.

Results demo

About the author

The author is an artificial intelligence "alchemist" (a model trainer). His main research direction in the lab is currently generative models, and he has some familiarity with other areas as well. He hopes to communicate, share, and make progress together with friends on the CSDN platform who are also interested in artificial intelligence. Thank you, everyone~~~

If you found this article helpful, please like, bookmark, or leave a comment; it is recognition and encouragement of the author's work.

Step 1: Purchase a server and log in to it remotely

Purchase link:  Yunyao Cloud Server L instance_[Latest]_Lightweight Cloud Server_Lightweight Server_Lightweight Application Server-Huawei Cloud

For the image, we choose the Ubuntu 22.04 system image.

Then we use a shell tool to log in to the Yunyao Cloud Server L instance remotely.

If you don’t have the tool, you can read my other article: Shell and Xftp free version tool download

Step 2: Configure the environment and train the handwritten digit recognition network 

First, we use the following command to download the Anaconda installer; Anaconda will be used to set up the runtime environment later.

wget https://repo.anaconda.com/archive/Anaconda3-2023.07-1-Linux-x86_64.sh

Wait for the download to complete~

bash Anaconda3-2023.07-1-Linux-x86_64.sh

Then we use the above command to run the Anaconda installer. The detailed installation steps are omitted here for reasons of space.

If the conda command is not recognized right after installation, don't worry: just close the shell and open a new session window so that the updated environment variables take effect.

Then we use the following command to create a conda environment to install the libraries used later.

conda create -n dl python=3.8

After it is created, use the following command to activate the conda environment we just made.

conda activate dl

Then we use the following command to install the CPU-only build of PyTorch.

conda install pytorch==1.10.0 torchvision==0.11.0 torchaudio==0.10.0 cpuonly -c pytorch
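Once the installation finishes, you can optionally verify it from a Python prompt inside the dl environment. This is just a quick check:

import torch
print(torch.__version__)          # should print 1.10.0
print(torch.cuda.is_available())  # False is expected on the CPU-only build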

The installation is complete. Next, we upload the neural network code for handwritten digit recognition to the Yunyao Cloud Server L instance.

import torch
import numpy as np
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision import datasets
import os
import random
import time

# MNIST preprocessing: convert images to tensors and normalize with the dataset's mean (0.1307) and standard deviation (0.3081)
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.1307,), (0.3081,))
                                ])


def seed_torch(seed):
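    # Fix every source of randomness so that training results are reproducible across runs.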
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)  # disable hash randomization so that the experiment is reproducible
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True



class Network(torch.nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.conv = torch.nn.Sequential(
            torch.nn.Conv2d(1, 16, kernel_size=5),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2),
            torch.nn.Conv2d(16, 32, kernel_size=5),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2),
        )

        # After two 5x5 convolutions and two 2x2 max-pools, a 28x28 input becomes
        # 32 feature maps of size 4x4, i.e. 32 * 4 * 4 = 512 features for the fully connected classifier.
        self.fc = torch.nn.Sequential(
            torch.nn.Linear(512, 128),
            torch.nn.Dropout(0.1),
            torch.nn.Linear(128, 10),
        )

    def forward(self, x):
        x = self.conv(x)
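        # flatten the 32 x 4 x 4 feature maps into a 512-dimensional vector per sample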
        x = x.view(-1, 512)
        x = self.fc(x)
        return x


def train(epoch,batch_size,learning_rate):
    train_dataset = datasets.MNIST(root='/root/MNIST/data', train=True, download=True, transform=transform)
    test_dataset = datasets.MNIST(root='/root/MNIST/data', train=False, download=True, transform=transform)
    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
    model = Network()
    criterion = torch.nn.CrossEntropyLoss()  # cross-entropy loss
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)  # lr is the learning rate
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.7)  # decay the learning rate by a factor of 0.7 after every epoch

    train_total = 0
    train_correct = 0
    for ep in range(epoch):
        time_start=time.time()
        model.train()
        for batch_idx, data in enumerate(train_loader, 0):
            inputs, target = data
            optimizer.zero_grad()
            # forward + backward + update
            outputs = model(inputs)
            loss = criterion(outputs, target)
            loss.backward()
            optimizer.step()
            # accumulate the number of correct predictions to track the running training accuracy
            _, predicted = torch.max(outputs.data, dim=1)
            train_total += inputs.shape[0]
            train_correct += (predicted == target).sum().item()
        time_end = time.time()
        train_acc=100 * train_correct / train_total
        print('[%d/ %d]: train acc: %.2f %% time:%.2f s'% (ep + 1, epoch,train_acc,(time_end-time_start)))
        train_total = 0
        train_correct = 0
        scheduler.step()

        correct = 0
        total = 0
        model.eval()
        with torch.no_grad():  # no gradients are needed when evaluating on the test set
            for data in test_loader:
                images, labels = data
                outputs = model(images)
                _, predicted = torch.max(outputs.data, dim=1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()
        test_acc = 100*correct / total
        print('(%d / %d): test acc: %.1f %% ' % (ep+1, epoch, test_acc))  # test accuracy = correct predictions / total samples
        state_dict = {"net": model.state_dict(), "optimizer": optimizer.state_dict(), "epoch": epoch,
                      "lr": optimizer.param_groups[0]['lr']}
        if not os.path.isdir('/root/MNIST/model/'):
            os.makedirs('/root/MNIST/model/')

        torch.save(state_dict,
                   '/root/MNIST/model/' + f"model_{ep}_{train_acc}%_{test_acc}%.pth")


if __name__ == '__main__':
    batch_size = 64
    learning_rate = 0.001
    epoch = 10
    seed_torch(77)  # fix the random seed so that the results are reproducible
    train(epoch, batch_size, learning_rate)  # start training

Copy the code and save it as MNIST_train.py 

Next, use the Xftp software to connect remotely to the Yunyao Cloud Server L instance and create a new MNIST folder under the /root directory.

Then we upload the MNIST_train.py file we just saved to the MNIST folder.

 Then we use the following command to start training the handwritten digit recognition neural network

python /root/MNIST/MNIST_train.py

As shown in the figure, we have completed the training of the handwritten digit recognition neural network. From the output we can see that each training epoch takes about 19.4 s, which shows that the CPU of the Yunyao Cloud Server L instance is quite capable! At the same time, the accuracy on the MNIST test set reaches 99.3%.
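As an optional sanity check, you can reload one of the saved checkpoints and confirm that the weights and training metadata load cleanly. This is only a sketch: the .pth filename below is a placeholder, since the exact name depends on the accuracies from your own run.

import torch
from MNIST_train import Network

model = Network()
# placeholder path: substitute the checkpoint name produced by your own training run
checkpoint = torch.load("/root/MNIST/model/<your_checkpoint>.pth", map_location="cpu")
model.load_state_dict(checkpoint["net"])
model.eval()
print("epochs:", checkpoint["epoch"], "final lr:", checkpoint["lr"])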

 Step 3: Deploy the handwritten digit recognition network to Yunyao Cloud Server L instance

First, copy the following server code, save it as MNIST_server.py, and upload it to the /root/MNIST directory on the Yunyao Cloud Server L instance.

import io
import torch
from torchvision import transforms
from PIL import Image
from flask import Flask, jsonify, request
from flask_cors import CORS
from MNIST_train import Network


app = Flask(__name__)
CORS(app, resources=r'/*')


model = Network()
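# Load the checkpoint produced by MNIST_train.py; the exact filename depends on the accuracies from your own training run.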
checkpoint = torch.load("/root/MNIST/model/model_10_99.77666666666667%_99.3%.pth",
                        map_location='cpu')
model.load_state_dict(checkpoint['net'])
model.eval()


def transform(image_bytes):
    # Preprocess the uploaded image the same way as the training data:
    # resize to 28x28, convert to a tensor, and normalize with the MNIST mean and standard deviation.
    Transforms = transforms.Compose([transforms.Resize(28),
                                     transforms.ToTensor(),
                                     transforms.Normalize((0.1307,), (0.3081,))])
    # convert to single-channel grayscale so the tensor matches the (1, 28, 28) input the network expects
    image = Image.open(io.BytesIO(image_bytes)).convert('L')
    return Transforms(image)


def get_prediction(image_bytes):
    image = transform(image_bytes=image_bytes)
    image=image.reshape(1,1,28,28)
    outputs = model(image)
    _, predicted = outputs.max(1)
    predicted_idx = str(predicted.item())
    return predicted_idx


@app.route('/predict')
def predict():
    if request.method == 'GET':
        file = request.files['file']
        img_bytes = file.read()
        predict_id= get_prediction(image_bytes=img_bytes)
        return jsonify({'predict_id': predict_id})


if __name__ == '__main__':
    app.run(host='0.0.0.0',port=3777)

 After uploading, it will look like the picture below

Next, so that the client can communicate with the server, we need to open port 3777 on the Yunyao Cloud Server L instance (you may choose a different port, as long as it matches the port used in the server code).

This means that port 3777 is successfully opened.

Then, use the following commands to install the flask and flask_cors Python libraries on the Yunyao Cloud Server L instance, along with the screen utility (screen is a system tool, so it is installed with apt rather than pip):

pip install flask -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install flask_cors -i https://pypi.tuna.tsinghua.edu.cn/simple
apt install screen

Then use the following command to create a screen session. This lets you disconnect from the Yunyao Cloud Server L instance while any process running inside the session keeps running without interruption.

screen -S mnist

As shown in the picture above, we have entered a screen window named mnist. Inside it, re-activate the conda environment we created earlier (conda activate dl), and then use the following command to run the handwritten digit recognition server program. You can detach from the screen session at any time by pressing Ctrl+A and then D.

python /root/MNIST/MNIST_server.py

As shown below, our server is running successfully 
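Before wiring up the graphical client, you can sanity-check the deployed service with a few lines of Python from your local machine. This is only a sketch: substitute your own instance's public IP address, and point it at any 28x28 grayscale digit image (assumed here to be saved as test_digit.jpg).

import requests

server_url = 'http://<your-server-public-IP>:3777'  # placeholder: use your instance's public IP and port
with open('test_digit.jpg', 'rb') as f:
    files = {'file': ('test_digit.jpg', f, 'image/jpeg')}
    response = requests.get(f'{server_url}/predict', files=files)
print(response.status_code, response.json())  # expect 200 and something like {'predict_id': '7'}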

 Step 4: Start the local client and perform handwritten digit recognition

The code for the handwritten digit recognition client is as follows

import tkinter as tk
from tkinter import filedialog, ttk
import requests
from PIL import Image, ImageTk
from ttkthemes import ThemedStyle



# function that uploads an image and shows the prediction result
def upload_image():
    file_path = filedialog.askopenfilename(filetypes=[("Image files", "*.jpg")])
    print(file_path)

    if file_path:
        # open the image and display it in the GUI
        image = Image.open(file_path)
        image = image.resize((200, 200), Image.LANCZOS)  # Image.ANTIALIAS was removed in newer Pillow versions; LANCZOS is the equivalent filter
        photo = ImageTk.PhotoImage(image=image)
        image_label.config(image=photo)
        image_label.image = photo

        with open(file_path, 'rb') as image_file:
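            # build a multipart upload and ask the server for the predicted digit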
            files = {'file': (image_file.name, image_file, 'image/jpeg')}
            response = requests.get(f'{server_url}/predict', files=files)

        if response.status_code == 200:
            result = response.json()
            predict_id = result.get('predict_id')
            result_label.config(text=f'The digit in this image is: {predict_id}')
        else:
            result_label.config(text='Request failed. Please check your network connection.')

# server address and port
server_url = 'http://120.46.178.145:3777'  # change this to your own server's public IP address and port

# create the GUI window
root = tk.Tk()
root.title("MNIST Image Classifier")
root.geometry("300x350")
style = ThemedStyle(root)
style.set_theme("plastik")
title_label = ttk.Label(root, text="Handwritten Digit Recognition", font=("Helvetica", 16))
title_label.pack(pady=10)
image_label = ttk.Label(root)
image_label.pack()

# upload button
upload_button = ttk.Button(root, text="Upload Image", command=upload_image)
upload_button.pack(pady=10)

# label for the prediction result
result_label = ttk.Label(root, text="", font=("Helvetica", 12))
result_label.pack()


root.mainloop()

 After we run it, the following interface will appear

After selecting the image and uploading it, the predicted results will appear 

At the same time, the server on the Yunyao Cloud Server L instance will also log the request with status code 200.

 

At this point, our entire project is complete~~~~~~~~~~~~ 

To make testing easier, I have also uploaded an image version of the MNIST dataset to a network drive and shared it~

Link: Please enter the extraction code for Baidu Cloud Disk. 
Extraction code: gskb 
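Alternatively, if you would rather not download the shared image pack, you can export a test digit yourself straight from torchvision's MNIST test set. This is only a small sketch (the output filename test_digit.jpg is my own choice), but the resulting 28x28 grayscale JPEG can be uploaded directly from the client.

from torchvision import datasets

# without a transform, each MNIST sample is returned as a PIL image plus its integer label
test_dataset = datasets.MNIST(root='./data', train=False, download=True)
image, label = test_dataset[0]
image.save('test_digit.jpg')  # 28x28 grayscale JPEG
print('saved a test digit with label', label)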

Summary

This was my first time using the Huawei Cloud Yunyao Cloud Server L instance. Overall it was a very pleasant experience: the performance is strong and the operation is simple, which makes it well suited for beginners. I highly recommend giving it a try. PS: More tutorial articles about the Yunyao Cloud Server will be published in the future~

If you found this article helpful, please like and bookmark it. Your likes are recognition and encouragement of the author's work, which really matters a lot to the author. If you have any questions or suggestions about the content, feel free to leave a comment in the comment section, and I will reply as soon as possible.


Source: blog.csdn.net/qq_35768355/article/details/133153252