Step by Step: A Kubernetes Continuous Deployment Guide

This is a hands-on guide that walks through a complete Kubernetes continuous deployment workflow, starting from zero. It covers everything from preparing the tools, forking the repository, testing, and building a Docker image, to finally building a deployment pipeline. The whole process is laid out step by step, and it should be valuable for anyone who wants a fully automated continuous delivery pipeline.
 

 

 
A long time ago, at one of my jobs, my task was to migrate an old-fashioned LAMP stack to Kubernetes. My boss, always chasing the latest technology, gave us only a few days to complete the switch. Considering that at the time we knew next to nothing about how containers worked, I have to say the idea was quite bold.

 

After reading the official documentation and searching through a lot of material, we began to feel overwhelmed: there were many new concepts to learn, such as pods, containers, and replicas. To me, it seemed that Kubernetes had been designed for an exclusive group of clever developers.

 

Then I did what I always do in these situations: learn by doing. A simple example is a good way to understand a complex subject, so I worked through the whole deployment process on my own, step by step.

 

In the end we did it, though nowhere near the prescribed week: we spent almost a month creating three clusters, one each for development, testing, and production.

 

In this article I'll describe in detail how to deploy an application to Kubernetes. After reading it, you will have an efficient Kubernetes deployment and continuous delivery workflow of your own.
 

Continuous integration and delivery

 

Continuous integration is the practice of building and testing the application on every update. By working in small increments, errors are detected earlier and can be addressed immediately.
 

Once integration is complete and all tests pass, continuous delivery adds automation to the release and deployment process. With CI/CD, projects can be released more frequently and more reliably.

 

We will use Semaphore, a fast, powerful, and easy-to-use continuous integration and delivery (CI/CD) platform that will automatically perform all of the steps:

 

1. Install the project dependencies

2. Run the unit tests

3. Build a Docker image

4. Push the image to Docker Hub

5. Deploy to Kubernetes with one click

 

For the application, we have a Ruby Sinatra microservice that exposes a few HTTP endpoints. The project already includes everything needed for deployment, but a few pieces still need to be set up.
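To give a rough idea of the shape of the service, here is a self-contained sketch. The real project defines its endpoints with Sinatra's routing DSL; this hypothetical mini-router only illustrates the idea of mapping HTTP paths to responses:

```ruby
# Hypothetical stand-in for the demo microservice. The real app uses
# Sinatra's `get "/" do ... end` DSL; this tiny hash-based router only
# illustrates mapping a path to a response body.
ROUTES = {
  "/" => -> { "hello world :))" }  # same body we will curl later on
}

# Look up the path and return [status, body], as a web framework would.
def handle(path)
  route = ROUTES[path]
  route ? [200, route.call] : [404, "not found"]
end
```

Calling handle("/") returns [200, "hello world :))"], the same response we will see when we curl the running container.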
 

Prerequisites

 

Before getting started, you'll need GitHub and Semaphore accounts. In addition, log in to Docker Hub so you can pull and push Docker images later.

 

Next, install a few tools on your computer:

 

  • Git: to handle the code

  • curl: the "Swiss Army knife" of networking

  • kubectl: to control your cluster remotely

 

And of course, don't forget Kubernetes itself. Most cloud providers offer it in one form or another; pick whichever fits your needs. The lowest-end machine configuration and cluster size are enough to run our sample app. I like starting with a three-node cluster, but a one-node cluster will do.
 

Once the cluster is ready, download the kubeconfig file from your provider. Some providers let you download it directly from their web console; others require a helper program. We need this file to connect to the cluster.

 

With that, we're ready to begin. The first thing to do is fork the repository.
 

Fork the repository

 

Fork the demo application that we will use in this article:

 

  1. Visit the semaphore-demo-ruby-kubernetes repository and click the Fork button in the upper right

  2. Click the Clone or download button and copy the address

  3. Clone the repository: $ git clone https://github.com/your_repository_path

 

Connect the new repository to Semaphore

 

1. Log in to your Semaphore account

2. Click the link in the sidebar to create a new project

3. Click the Add Repository button next to your repository
 

Testing with Semaphore

 

Continuous integration makes testing enjoyable and effective. A well-built CI pipeline creates a fast feedback loop that catches errors before any damage is done. Our project comes with some ready-made tests.

 

Open the initial pipeline file located at .semaphore/semaphore.yml and take a quick look. This pipeline describes all the steps Semaphore should follow to build and test the application. It starts with the version and the name.
 

version: v1.0
name: CI

 
Next comes the agent, the virtual machine that powers the jobs. We can choose from three machine types:
 

agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804

 
Blocks, tasks, and jobs define what to execute at each step of the pipeline. In Semaphore, blocks run sequentially, while the jobs within a block run in parallel. The pipeline contains two blocks: one to install the libraries and one to run the tests.
 
 
The first block downloads and installs the Ruby gems.
 

- name: Install dependencies
  task:
    jobs:
      - name: bundle install
        commands:
          - checkout
          - cache restore gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock),gems-$SEMAPHORE_GIT_BRANCH,gems-master
          - bundle install --deployment --path .bundle
          - cache store gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock) .bundle

 
Checkout clones the code from GitHub. Since each job runs on a fully isolated machine, we must rely on the cache to store and retrieve files between job runs.
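The cache key scheme above can be modeled in a few lines of Ruby. This is only a sketch: the hash algorithm behind Semaphore's checksum command is an assumption here, and MD5 below is used purely for illustration.

```ruby
require "digest"

# Model of the cache keys passed to `cache restore`, most specific
# first: exact Gemfile.lock match, then any cache for this branch,
# then the cache from master as a last resort.
def cache_keys(branch, gemfile_lock)
  checksum = Digest::MD5.hexdigest(gemfile_lock)  # illustrative hash choice
  ["gems-#{branch}-#{checksum}", "gems-#{branch}", "gems-master"]
end

keys = cache_keys("my-feature", "GEM\n  remote: https://rubygems.org/\n")
# keys[0] is the exact-match key; keys[1] and keys[2] are fallbacks
```

A restore tries each key in order, so a branch with a changed Gemfile.lock still benefits from an older cache instead of starting cold.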
 



 
The second block runs the tests. Note that we reuse the checkout and cache code to bring the initial files into the job. The last command starts the RSpec test suite.
 

- name: Tests
  task:
    jobs:
      - name: rspec
        commands:
          - checkout
          - cache restore gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock),gems-$SEMAPHORE_GIT_BRANCH,gems-master
          - bundle install --deployment --path .bundle
          - bundle exec rspec

 
The last section to look at is the promotion. Promotions connect pipelines under certain conditions to create complex workflows. Once all jobs are complete, we use auto_promote_on to start the next pipeline.
 

promotions:
  - name: Dockerize
    pipeline_file: docker-build.yml
    auto_promote_on:
      - result: passed

 
The workflow continues with the next pipeline.

 

Building the Docker image

 

We can run anything on Kubernetes, as long as it is packaged in a Docker image. In this section, we'll learn how to build the image.
 

Our Docker image will contain the application code, Ruby, and all the libraries. Let's take a look at the Dockerfile first:
 

FROM ruby:2.5

RUN apt-get update -qq && apt-get install -y build-essential

ENV APP_HOME /app
RUN mkdir $APP_HOME
WORKDIR $APP_HOME

ADD Gemfile* $APP_HOME/
RUN bundle install --without development test

ADD . $APP_HOME

EXPOSE 4567

CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0", "-p", "4567"]

 
The Dockerfile is like a detailed recipe, containing all the steps and commands needed to build the container image:

 

1. Start from a prebuilt ruby image

2. Install the build tools with apt-get

3. Copy the Gemfile, since it lists all the dependencies

4. Install them with bundle

5. Copy the app's source code

6. Define the listening port and the start command
 

We'll bake our production image in the Semaphore environment. But if you'd like to do a quick test on your machine, type:
 

$ docker build . -t test-image

 
To start the server locally, run it with Docker and expose the internal port 4567:
 

$ docker run -p 4567:4567 test-image

 
You can now test one of the available HTTP endpoints:
 

$ curl -w "\n" localhost:4567
hello world :))

 

Add your Docker Hub account to Semaphore

 

Semaphore has a secure mechanism for storing sensitive information such as passwords, tokens, and keys. To be able to push images to your Docker Hub registry, you need to create a Secret with your username and password:

 

  1. Open your Semaphore account

  2. In the left navigation bar, click Secrets

  3. Click Create New Secret

  4. Name the secret Dockerhub, enter your Docker Hub username and password, and save.

 
 

Building the Docker pipeline

 

This pipeline builds the image and pushes it to Docker Hub. It has just one block and one job:
 
 
This time we need more power, since Docker tends to be resource-hungry. We choose e1-standard-4, a mid-range machine with four CPUs, 8 GB of RAM, and 35 GB of disk space:
 

version: v1.0
name: Docker build
agent:
  machine:
    type: e1-standard-4
    os_image: ubuntu1804

 
The build block starts by logging in to Docker Hub; the username and password are imported from the secret we just created. Once logged in, Docker has direct access to the registry.

 

The next command is docker pull, which attempts to pull the latest image. If the image is found, Docker may be able to reuse some of its layers to speed up the build. If there is no latest image, don't worry; the build just takes a little longer.

 

Finally, we push the new image. Note that we use the SEMAPHORE_WORKFLOW_ID variable to tag the image.
 

blocks:
  - name: Build
    task:
      secrets:
        - name: dockerhub
      jobs:
      - name: Docker build
        commands:
          - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
          - checkout
          - docker pull "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:latest || true
          - docker build --cache-from "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:latest -t "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID .
          - docker images
          - docker push "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID

 
Once the image is ready, we enter the delivery phase of the project. We'll extend our Semaphore pipeline with a manual promotion.
 

promotions:
  - name: Deploy to Kubernetes
    pipeline_file: deploy-k8s.yml

 
To get your first automated build, make a push:
 

$ touch test-build
$ git add test-build
$ git commit -m "initial run on Semaphore"
$ git push origin master

 
With the image built, we can move on to the deployment phase.
 

Deploying to Kubernetes

 

Automated deployment is Kubernetes's strong suit. All we need to do is tell the cluster our final desired state, and it will take care of the rest.
 

However, before deploying, you must upload the kubeconfig file to Semaphore.
 

Upload the kubeconfig to Semaphore

 

We need a second secret: the cluster's kubeconfig. This file grants administrative access to the cluster, so we don't want to check it into the repository.

 

Create a secret named do-k8s and upload your kubeconfig file to /home/semaphore/.kube/dok8s.yaml:
 
 

The deployment manifest

 

Although Kubernetes is a container orchestration platform, we don't manage containers directly. In fact, the smallest unit of deployment is the pod. A pod is like a group of inseparable friends who always go everywhere together. The containers in a pod are guaranteed to run on the same node and share the same IP. They start and stop in unison, and since they run on the same machine, they can share resources.

 

The problem with pods is that they can be started and stopped at any time, so we can't rely on the pod IPs we'll be assigned. To forward users' HTTP traffic, we also need to provide a public IP and a load balancer, which tracks the pods and forwards traffic from the clients.

 

Open the file located at deployment.yml. This is the manifest that deploys our application; it is divided into two resources separated by three dashes. First, the deployment resource:
 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: semaphore-demo-ruby-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: semaphore-demo-ruby-kubernetes
  template:
    metadata:
      labels:
        app: semaphore-demo-ruby-kubernetes
    spec:
      containers:
        - name: semaphore-demo-ruby-kubernetes
          image: $DOCKER_USERNAME/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID

 
There are a few concepts to untangle here:

 

  • Resources have a name and several labels to keep things organized

  • The spec defines the final desired state, and the template is the model used to create the pods

  • replicas sets the number of pod copies to create. We often set it to the number of nodes in the cluster. Since we're using three nodes, I changed this line to replicas: 3

 
The second resource is the service. It binds to port 80 and forwards HTTP traffic to the pods in the deployment:
 


---

apiVersion: v1
kind: Service
metadata:
  name: semaphore-demo-ruby-kubernetes-lb
spec:
  selector:
    app: semaphore-demo-ruby-kubernetes
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 4567

 
Kubernetes matches the selector with the labels in order to connect the service with the pods. This way, we can have many services and deployments in the same cluster and connect them as needed.
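Conceptually, the matching is a subset check: every key/value pair in the service's selector must appear among a pod's labels. Here is a small illustrative sketch; the pod-template-hash entry stands for the extra label Kubernetes adds to pods managed by a replica set.

```ruby
# Conceptual model of Service-to-Pod matching: a pod is selected when
# every key/value pair in the service's selector appears in its labels.
def selected?(selector, pod_labels)
  selector.all? { |key, value| pod_labels[key] == value }
end

service_selector = { "app" => "semaphore-demo-ruby-kubernetes" }
pod_labels = {
  "app"               => "semaphore-demo-ruby-kubernetes",
  "pod-template-hash" => "7d985f8b7c"  # extra label added by Kubernetes
}

selected?(service_selector, pod_labels)      # => true
selected?({ "app" => "other" }, pod_labels)  # => false
```

Extra labels on the pod don't matter; only the pairs named in the selector have to match.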
 

The deployment pipeline

 

We're now at the last stage of the CI/CD configuration. At this point we have a CI pipeline defined in semaphore.yml and a Docker pipeline defined in docker-build.yml. In this step, we'll deploy to Kubernetes.

 

Open the deployment pipeline located at .semaphore/deploy-k8s.yml:
 

version: v1.0
name: Deploy to Kubernetes
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804

 
This final pipeline consists of two jobs:
 
 
Job 1 starts the deployment. After importing the kubeconfig file, envsubst replaces the placeholder variables in deployment.yml with their actual values. Then, kubectl apply sends the manifest to the cluster.
 


blocks:
  - name: Deploy to Kubernetes
    task:
      secrets:
        - name: do-k8s
        - name: dockerhub

      env_vars:
        - name: KUBECONFIG
          value: /home/semaphore/.kube/dok8s.yaml

      jobs:
      - name: Deploy
        commands:
          - checkout
          - kubectl get nodes
          - kubectl get pods
          - envsubst < deployment.yml | tee deployment.yml
          - kubectl apply -f deployment.yml
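The envsubst step above can be modeled roughly like this. It is a simplified stand-in for GNU envsubst, handling only simple $VAR and ${VAR} placeholders and taking the environment as a hash so the sketch stays self-contained:

```ruby
# Simplified stand-in for GNU envsubst: replace $VAR and ${VAR}
# placeholders with values from an environment hash. The real tool
# reads the process environment instead.
def envsubst(text, env)
  text.gsub(/\$\{?([A-Z_][A-Z0-9_]*)\}?/) { env.fetch($1, "") }
end

env  = { "DOCKER_USERNAME" => "alice", "SEMAPHORE_WORKFLOW_ID" => "abc123" }
line = "image: $DOCKER_USERNAME/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID"
envsubst(line, env)
# => "image: alice/semaphore-demo-ruby-kubernetes:abc123"
```

This is exactly what turns the image placeholder in the manifest into a concrete, workflow-specific image reference before kubectl apply sends it to the cluster.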

 
Job 2 tags the image as latest, so that we can use it as a cache the next time we build.
 

- name: Tag latest release
  task:
    secrets:
      - name: dockerhub
    jobs:
    - name: docker tag latest
      commands:
        - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
        - docker pull "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID
        - docker tag "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:latest
        - docker push "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:latest

 
This is the last step in the workflow.
 

Deploying the application

 

Let's teach our Sinatra application to sing. Add the following code to the App class in app.rb:
 

get "/sing" do
  "And now, the end is near
   And so I face the final curtain..."
end

 
Push the modified files to GitHub:
 

$ git add .semaphore/*
$ git add deployment.yml
$ git add app.rb
$ git commit -m "test deployment"
$ git push origin master

 
Once the workflow has run through the Docker build pipeline, you can check the progress on Semaphore:
 
 
Now it's time to deploy. Click the Promote button and see if it works:
 
 
We're off to a good start; the rest is up to Kubernetes. We can check the deployment status with kubectl. The initial state is three desired pods and zero available:
 

$ kubectl get deployments
NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
semaphore-demo-ruby-kubernetes   3         0         0            0           15m

 
A few seconds later, the pods have started and the reconciliation is complete:
 

$ kubectl get deployments
NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
semaphore-demo-ruby-kubernetes   3         3         3            3           15m
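The reconciliation we just watched can be pictured as a loop: the controller keeps creating pods until the observed count matches the desired replicas. This is a toy model only, nothing like the real controller's code:

```ruby
# Toy model of the reconciliation loop: the controller schedules pods
# one at a time until the observed count reaches the desired replicas.
# (Scale-down and failure handling are ignored in this sketch.)
def reconcile(desired, running)
  running += 1 while running < desired  # each iteration = one pod scheduled
  running
end

reconcile(3, 0)  # => 3, matching DESIRED 3 / AVAILABLE 3 above
```

The important idea is that we declare the desired state and the cluster converges toward it, rather than us issuing imperative start commands.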

 
Use get all to see the overall status of the cluster, showing the pods, the service, the deployment, and the replica set:
 


$ kubectl get all
NAME                                                  READY   STATUS    RESTARTS   AGE
pod/semaphore-demo-ruby-kubernetes-7d985f8b7c-454dh   1/1     Running   0          2m
pod/semaphore-demo-ruby-kubernetes-7d985f8b7c-4pdqp   1/1     Running   0          119s
pod/semaphore-demo-ruby-kubernetes-7d985f8b7c-9wsgk   1/1     Running   0          2m34s

NAME                                        TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
service/kubernetes                          ClusterIP      10.12.0.1     <none>         443/TCP        24m
service/semaphore-demo-ruby-kubernetes-lb   LoadBalancer   10.12.15.50   35.232.70.45   80:31354/TCP   17m

NAME                                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/semaphore-demo-ruby-kubernetes   3         3         3            3           17m

NAME                                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/semaphore-demo-ruby-kubernetes-7d985f8b7c   3         3         3       2m3

 
The service is shown below the pods. In my case, the load balancer was assigned the external IP 35.232.70.45; change it to the one your provider assigned to you, and let's try the new server.
 

$ curl -w "\n" http://YOUR_EXTERNAL_IP/sing

 
Now, the end is not far off.
 

Victory is close at hand

 

With the right CI/CD solution, deploying to Kubernetes turns out to be not that difficult. You now have a fully automated continuous delivery pipeline for Kubernetes.

 

Here are a few suggestions so you can fork semaphore-demo-ruby-kubernetes and have fun with it:

 

  • Create a staging cluster

  • Build a deployment container and run the tests inside it

  • Extend the project with more microservices


Source: blog.51cto.com/12462495/2435396