Kubeflow 0.1 released: a Kubernetes-based machine learning toolkit


Google has released version 0.1 of Kubeflow, an open source project that aims to bring machine learning to the world of Kubernetes. The idea behind the project is to let data scientists take full advantage of running machine learning workloads on a Kubernetes cluster. Kubeflow lets machine learning teams move existing workflows onto a cluster without much change.

With the release of version 0.1, the project moves forward with milestones announced via blog posts, bringing a new level of stability while adding features the community has been asking for. These include JupyterHub for collaborative and interactive training, TensorFlow training and serving support, and more.

Introduction to Kubeflow 0.1

Kubeflow 0.1 provides a minimal set of packages for developing, training, and deploying ML models. With just a few commands, you get:

  • JupyterHub: collaborative and interactive training for machine learning tasks

  • TensorFlow training controller: supports native distributed training

  • TensorFlow Serving: for model hosting

  • Argo: for workflows

  • Seldon Core: for complex inference and non-TF models

  • Ambassador: reverse proxy

  • Wiring: to make Kubeflow work on any Kubernetes cluster

Here's an example to get started:

# Create a namespace for the Kubeflow deployment
NAMESPACE=kubeflow
kubectl create namespace ${NAMESPACE}
VERSION=v0.1.3
# Initialize a ksonnet application and set the namespace for its default environment
APP_NAME=my-kubeflow
ks init ${APP_NAME}
cd ${APP_NAME}
ks env set default --namespace ${NAMESPACE}
# Install the Kubeflow packages
ks registry add kubeflow github.com/kubeflow/kubeflow/tree/${VERSION}/kubeflow
ks pkg install kubeflow/core@${VERSION}
ks pkg install kubeflow/tf-serving@${VERSION}
ks pkg install kubeflow/tf-job@${VERSION}
# Create templates for the core components
ks generate kubeflow-core kubeflow-core
# Deploy Kubeflow
ks apply default -c kubeflow-core
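To check that the deployment came up, the usual kubectl commands apply. This is a sketch: the pod name used for port-forwarding is an assumption and may differ in your cluster.

```shell
# List the Kubeflow pods in the namespace created above
kubectl get pods -n ${NAMESPACE}
# Forward JupyterHub's port locally (pod name is illustrative),
# then open http://localhost:8000 in a browser
kubectl port-forward tf-hub-0 8000:8000 -n ${NAMESPACE}
```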

At this point, JupyterHub is deployed and we can start developing models in Jupyter. Once we have the Python code that builds the model, we can package it into a Docker image and train the model with the TFJob operator by running:
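Before submitting the TFJob, the training code has to be available as a container image. A hypothetical build-and-push step, using the same placeholder image name as the commands below, might look like:

```shell
# Placeholder image name matching the TFJob command that follows;
# the Dockerfile is assumed to copy in the training script
IMAGE=gcr.io/my/image:latest
docker build -t ${IMAGE} .
docker push ${IMAGE}
```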

ks generate tf-job my-tf-job --name=my-tf-job --image=gcr.io/my/image:latest
ks apply default -c my-tf-job
# Deploy the trained model for serving
ks generate tf-serving ${MODEL_COMPONENT} --name=${MODEL_NAME}
ks param set ${MODEL_COMPONENT} modelPath ${MODEL_PATH}
ks apply ${ENV} -c ${MODEL_COMPONENT}

With just a few commands, data scientists and software engineers can build more complex ML solutions and focus on what they do best: solving core business problems.

From: Docker WeChat Official Account  
Original text: https://kubernetes.io/blog/2018/05/04/announcing-kubeflow-0.1/
