Quickly Build a Serverless AI Lab in the Cloud

Serverless Kubernetes and ACK virtual nodes support GPU-based ECI container instances, letting users quickly build a low-cost serverless AI lab in the cloud. Users no longer need to maintain server runtime environments or GPU infrastructure, which greatly reduces the operation and maintenance burden of an AI platform and significantly improves overall computational efficiency.

How to use GPU container instances

Specify the desired GPU type (P4 / P100 / V100, etc.) in a pod annotation, and specify the number of GPUs in `resources.limits`, to create a GPU container instance. Each pod has exclusive use of its GPUs; vGPU is not supported. GPU instances are billed the same as ECS GPU instance types with the same GPU, with no additional cost. ECI currently offers a variety of GPU specifications (see https://help.aliyun.com/document_detail/114581.html).
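
The two settings look like this in a pod spec (a minimal fragment; the GPU type, count, and container name here are illustrative, not from the original article):

```yaml
metadata:
  annotations:
    # GPU type for the ECI instance backing this pod (P4 / P100 / V100, etc.)
    k8s.aliyun.com/eci-gpu-type: "V100"
spec:
  containers:
  - name: app
    resources:
      limits:
        # number of GPUs exclusively assigned to the pod
        nvidia.com/gpu: "2"
```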

Examples

1. Create a Serverless Kubernetes cluster

Select the Shenzhen region and Availability Zone D.

2. Create a GPU container instance

We use TensorFlow's classify_image.py example model to perform image recognition.

Use the following template to create a pod, selecting the P100 GPU specification. The script in the pod downloads the image files and classifies them with the model.

apiVersion: v1
kind: Pod
metadata:
  name: tensorflow
  annotations:
    k8s.aliyun.com/eci-gpu-type: "P100"
spec:
  containers:
  - image: registry-vpc.cn-shenzhen.aliyuncs.com/ack-serverless/tensorflow
    name: tensorflow
    command:
    - "sh"
    - "-c"
    - "python models/tutorials/image/imagenet/classify_image.py"
    resources:
      limits:
        nvidia.com/gpu: "1"
  restartPolicy: OnFailure
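
Assuming the template above is saved as tensorflow-pod.yaml and kubectl is configured for the Serverless Kubernetes cluster, the deployment can be driven as follows (a sketch; the file name is an assumption, not part of the original article):

```shell
# create the GPU pod from the template
kubectl apply -f tensorflow-pod.yaml

# watch the pod move from Pending to Running, then to a terminal state
kubectl get pod tensorflow -w

# after completion, inspect the classification output
kubectl logs tensorflow
```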

After the pod is deployed, it is initially in the Pending state.

After a few tens of seconds, the pod state changes to Running; once the computation completes, it changes to Terminated.

From the pod log we can see that the pod detected the P100 GPU hardware and correctly identified the picture as a panda.

Summary

As the example above shows, from environment setup through the end of the computation, users do not need to purchase or manage servers, nor install a GPU runtime environment. The serverless approach lets users focus on building AI models instead of managing and maintaining the underlying infrastructure.

Source: yq.aliyun.com/articles/705664