Building a CI/CD system based on Drone

Related: Kubernetes cluster three-step installation

CI Overview

The whole workflow can be described with a single configuration file.

Programmers are lazy creatures, so I am always looking for ways to avoid duplicated effort. If your workflow keeps repeating the same steps, it is worth thinking about how to optimize it.

Continuous integration helps us get rid of the repetitive work of building, testing, and publishing code: a single commit triggers everything that needs to happen next.

Container technology makes all of this even better.

A typical scenario:

Suppose we have a front-end project built on the vue.js framework. After each commit we want the tests to run, then a production build whose output goes into the dist directory, and then those static files served behind an nginx proxy.
Finally the result is packaged into a docker image and pushed to a registry. You can even go one step further and have the new version rolled out automatically.

Here is the good news: with a CI tool, a single git push triggers all of this for you. If you are not using such a system yet, what are you waiting for? The rest of this article explains how to build one.
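
To make this concrete, a minimal .drone.yml for the vue.js scenario could look roughly like the sketch below. The node image tag, the npm script names, and the image repository are assumptions for illustration; the real syntax is explained step by step later in this article.

pipeline:
  test:
    image: node:8            # any image with node/npm will do
    commands:
      - npm install
      - npm run test

  build:
    image: node:8
    commands:
      - npm run build        # assumed to emit the static files into dist/

  publish:
    image: plugins/docker    # builds the Dockerfile (e.g. nginx + dist/) and pushes the image
    repo: example/vue-app    # hypothetical image name
    tags: latest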

Code Repository Management

First, let us retire SVN as soon as possible. There is not much to say here: with git around, I really see no reason for SVN to exist.

So we pick a git server. There are plenty of choices; here I use gogs, but gitea or gitlab will do just as well, choose according to your needs.

docker run -d --name gogs-time -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --publish 8022:22 \
           --publish 3000:3000 --volume /data/gogs:/data gogs/gogs:latest

Open port 3000 in a browser, follow the setup wizard, and that is all there is to it.
gogs is not as powerful as gitlab, but it has a tiny footprint and is fast; ours has run stably for several years. Its main drawback is that the API is not complete enough.

CI tool

Once you have used drone. . .

Installation option 1, with docker-compose:

version: '2'

services:
  drone-server:
    image: drone/drone:0.7
    ports:
      - 80:8000
    volumes:
      - /var/lib/drone:/var/lib/drone/
    restart: always
    environment:
      - DRONE_OPEN=true
      - DOCKER_API_VERSION=1.24
      - DRONE_HOST=10.1.86.206
      - DRONE_GOGS=true
      - DRONE_GOGS_URL=http://10.1.86.207:3000/   # address of the gogs code repository
      - DRONE_SECRET=ok

  drone-agent:
    image: drone/drone:0.7
    command: agent
    restart: always
    depends_on:
      - drone-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DOCKER_API_VERSION=1.24
      - DRONE_SERVER=ws://drone-server:8000/ws/broker
      - DRONE_SECRET=ok

Run docker-compose up -d and that is it.

You can log in to drone with your gogs account.

Each step is a container and each plugin is also a container; you can combine them in any way you like, like arranging building blocks.

I will not go into the basic usage here, but there are quite a few pitfalls:

  • On the machine running drone, try to use the aufs storage driver; with devicemapper some plugins will not run, for example docker-in-docker plugins. This is not a drone problem; docker's own support for docker in docker is simply not good enough
  • centos support for aufs is poor; if you really want aufs on centos you will have to do some extra work, and there is a community kernel build here: https://github.com/sealyun/kernel-ml-aufs
  • What I recommend most is upgrading the kernel of the drone host to 4.9+ and using the overlay2 storage driver; I have run containers on newer kernels for a long time and they are far more stable than on old kernels (see the sketch after this list)
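
Here is a quick sketch of how to check, and switch, the storage driver on a typical systemd host; the paths are the usual docker defaults, adjust to your environment:

# check which storage driver the daemon is currently using
docker info | grep -i "storage driver"

# to switch to overlay2 (on a kernel with overlay support), put this into /etc/docker/daemon.json:
#   { "storage-driver": "overlay2" }
# then restart the daemon; note that images built under the old driver will have to be re-pulled
systemctl restart docker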

Installation option 2, on k8s:

helm install stable/drone

Usage

First, create these files in the root directory of your code repository:

  • .drone.yml: describes the build and deployment process (narrowly speaking); broadly speaking, a CI and a CD process configuration file have no essential difference
  • Dockerfile: tells drone how to package your application into an image; if you do not deliver as containers you may not need it
  • k8s yaml files, a docker-compose file, or a chart package: tells drone how to deploy your application
  • Others: anything else needed, such as the kube-config file

After logging in to drone with your gogs account and password, you will see your repositories synchronized on the project list page; turning on the switch for a project hooks drone up to that git repository. It is simple enough to explore on your own.

Official example

pipeline:
  backend:    # the name of a step; call it whatever you like
    image: golang  # each step essentially starts a container from this image
    commands:      # the commands are executed inside that container
      - go get
      - go build
      - go test

  frontend:
    image: node:6
    commands:
      - npm install
      - npm test

  publish:
    image: plugins/docker
    repo: octocat/hello-world
    tags: [ 1, 1.1, latest ]
    registry: index.docker.io

  notify:
    image: plugins/slack
    channel: developers
    username: drone

Every container that is started shares the same workdir volume, so the artifacts produced in the build step are available to the publish container.

Look at this together with the Dockerfile:

# docker build --rm -t drone/drone .

FROM drone/ca-certs
EXPOSE 8000 9000 80 443

ENV DATABASE_DRIVER=sqlite3
ENV DATABASE_CONFIG=/var/lib/drone/drone.sqlite
ENV GODEBUG=netdns=go
ENV XDG_CACHE_HOME /var/lib/drone

ADD release/drone-server /bin/   # because the working directory is shared, the publish step can use the artifacts from the build step, so building and publishing can be separated

ENTRYPOINT ["/bin/drone-server"]

As mentioned above, separating build from release is useful. For example, building go code requires a golang environment, but in production only a single executable needs to run,

so the Dockerfile does not need to be built FROM a golang base image, which keeps the image small. Likewise, java needs maven at build time but not at runtime,

so it can be separated in the same way.
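
As a sketch of that separation for a java project (the image tags, the jar name, and the repo are assumptions): the build step runs maven, while the runtime image only needs a JRE.

pipeline:
  build:
    image: maven:3.5-jdk-8        # maven is only needed at build time
    commands:
      - mvn package -DskipTests   # assumed to produce target/app.jar

  publish:
    image: plugins/docker
    repo: example/app             # hypothetical image name
    tags: ${DRONE_TAG=latest}

And the Dockerfile used by the publish step never sees maven:

FROM openjdk:8-jre-alpine
ADD target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]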

Use your imagination with drone instead of using it rigidly; the paragraphs above are worth reading carefully, word by word. To summarize the key points:

drone itself does not care what each step does; it simply starts the containers for you, runs the next step when the previous one exits normally, and stops when a step fails.

Compiling, pushing images to a registry, deploying, sending notifications: all of these are implemented by images, and what a container does is decided by its image. Such images are called plugins; a plugin is essentially just an image, with one small difference that I will come back to later.

This means you build an image for whatever you need to do: if you need maven at compile time, make a maven image; if you need to deploy to k8s, make an image containing the kubectl client; if you deploy to physical machines, make an
ansible image, and so on. Use your imagination; it is very flexible.
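
For instance, a kubectl image can be as small as this sketch; it assumes you have already downloaded a kubectl binary matching your cluster version next to the Dockerfile:

FROM alpine:3.8
COPY kubectl /usr/bin/kubectl    # the pre-downloaded kubectl client binary
RUN chmod +x /usr/bin/kubectl
CMD ["kubectl"]

docker build -t kubectl:test .   # the image name used later in the deploy step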

drone environment variables

Sometimes we want the docker image built by CI to carry the same tag as the git tag, so we always know which version of the code is running and upgrades become easy. Editing the pipeline
file by hand every time would obviously be tedious, and drone provides many environment variables to solve exactly this problem:

pipeline:
    build:
        image: golang:1.9.2 
        commands:
            - go build -o test --ldflags '-linkmode external -extldflags "-static"'
        when:
            event: [push, tag, deployment]
    publish:
        image: plugins/docker
        repo: fanux/test
        tags: ${DRONE_TAG=latest}
        dockerfile: Dockerfile
        insecure: true
        when:
            event: [push, tag, deployment]

In this example, ${DRONE_TAG=latest} means that when a git tag event triggers the pipeline, the git tag is used as the image tag; otherwise latest is used. During day-to-day development we
keep iterating on latest, and when we decide to cut a version we just create a tag and an image with that tag is produced for testing. Very elegant: nothing has to be edited by hand, so mistakes are much less likely.

There are many other environment variables you can use in the same way, such as the git commit ID and branch information; you can look them up in the drone documentation.
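
For instance, during development you might also want the short commit ID as an extra image tag, so every build can be traced back to its commit. A sketch (DRONE_COMMIT_SHA is the same variable that appears in the docker run example later in this article; the substring syntax requires a drone version that supports bash-style string operations in the yaml):

  publish:
    image: plugins/docker
    repo: fanux/test
    tags:
      - ${DRONE_TAG=latest}
      - ${DRONE_COMMIT_SHA:0:8}   # first 8 characters of the commit id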

Integrating with k8s in practice

First you need a k8s cluster, and the obvious choice is of course: Kubernetes cluster three-step installation (yes, this is an ad, please bear with me). . .

With the groundwork above, integrating with k8s is quite simple: build a kubectl image and embed it in the pipeline.

Put the project's k8s yaml files in the code repository, then apply them directly in the pipeline:

  publish:
    image: plugins/docker   # the docker plugin: builds the Dockerfile and pushes the image to the registry
    tags:
      - ${DRONE_TAG=latest}
    insecure: true  # copy this as-is

  deploy:
    image: kubectl:test  # build this image yourself
    commands:
      - cat test.yaml
      - ls
      - rm -rf /root/.kube && cp -r .kube /root # the k8s kubeconfig file; there can be several, copy the one for the cluster you are deploying to
      - kubectl delete -f test.yaml || true
      - kubectl apply -f test.yaml

But there are a few details where better practices apply:

  • the k8s kubeconfig file is kept in the code directory (this is not secure, but it is acceptable for a private repository; you can also use drone secrets, which I will not expand on here beyond the sketch after this list)
  • how should the image be configured in the k8s deployment yaml? Editing test.yaml for every release is not elegant
  • what if several clusters need different yaml configuration? Write multiple yaml files? That gets messy and is very inelegant
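
For the first point, a rough sketch of the secret-based variant (drone 0.8-style; the secret name kube_config and its handling here are assumptions, and the exact CLI syntax differs between drone versions):

drone secret add --repository fanux/test --name kube_config --value @/root/.kube/config

  deploy:
    image: kubectl:test
    secrets: [ kube_config ]   # exposed to the step as the $KUBE_CONFIG environment variable
    commands:
      - mkdir -p /root/.kube && echo "$KUBE_CONFIG" > /root/.kube/config
      - kubectl apply -f test.yaml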

So, to address the last two points, we introduce a chart and deploy with helm:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
  namespace: {{ .Values.namespace }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        name: test
    spec:
      serviceAccountName: test
      containers:
      - name: test
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"  # the deployment yaml is a template; parameters are passed in and rendered at install time
        imagePullPolicy: {{ .Values.image.pullPolicy }} 
....

Note that once you have the template, deploying v1 or v2 no longer requires changing the yaml file, which lowers the risk of mistakes, and the values are passed in from environment variables when the pipeline runs. Problem solved.
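
For reference, the values.yaml that backs this template might look roughly like the following; the defaults here are placeholders:

namespace: default
replicaCount: 1
image:
  repository: fanux/test
  tag: latest
  pullPolicy: IfNotPresent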

This way the git tag, the image tag, and the image configured in the yaml are fully unified:

    deploy_dev:   # deploy to the development environment
        image: helm:v2.8.1
        commands:
            - mkdir -p /root/.kube && cp -r .kube/config-test101.194 /root/.kube 
            - helm delete test --purge || true
            - helm install --name test --set image.tag=${DRONE_TAG=latest}  Chart
        when:
            event: deployment
            environment: deploy_dev

    deploy_test:  # deploy to the test environment
        image: helm:v2.8.1
        commands:
            - mkdir -p /root/.kube && cp -r .kube/config-test101.84 /root/.kube  # the two environments use different kubeconfig files
            - helm delete test --purge || true
            - helm install --name test --set image.tag=${DRONE_TAG=latest}  Chart # pass the git tag to helm so the running image is the one built in the publish step, with a matching tag
        when:
            event: deployment
            environment: deploy_test

This solves the problems above much more elegantly.

One detail: the event can be a git event or a manual event; deployment is the manually triggered type, and drone supports triggering it from the command line.
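
With the drone CLI pointed at the server (via the DRONE_SERVER and DRONE_TOKEN environment variables, the token coming from the drone web UI), triggering such a deployment event looks roughly like this; the build number is whichever build you want to promote, and the environment names match the when.environment values above:

export DRONE_SERVER=http://10.1.86.206
export DRONE_TOKEN=<token from the drone web UI>
drone deploy fanux/test 24 deploy_test   # promote build 24 to the deploy_test environment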

We did some secondary development so that drone can also trigger the corresponding event from its web page.

Principles

When you enable a repository in drone, drone registers a webhook on that repository (you can see it in the repository settings). Git events are therefore pushed to drone, and drone pulls the code and runs the pipeline according to the event.

How the pipeline works

Understanding the principles is very important for using a system well; otherwise you will end up using it rigidly.

The pipeline is executed by containers; the system does not care what the containers do, the user decides that. This sentence has been stressed more than once in this article; it is important, so read it a few times.

How plugins work

A plugin is an image, and many existing images can be embedded directly into a drone pipeline as plugins.

The small difference is that some drone plugins take parameters, which lets them do a bit more than simply dropping in an ordinary image, for example the docker plugin used at publish time:

  publish:
    image: plugins/docker
    repo: octocat/hello-world
    tags: [ 1, 1.1, latest ]
    registry: index.docker.io

You will notice parameters such as repo and tags. Drone's handling of them is actually very simple: the parameters are converted into environment variables (prefixed with PLUGIN_) and passed to the container,
and the container deals with them itself.
In essence it does this:

docker run --rm \
  -e PLUGIN_TAG=latest \
  -e PLUGIN_REPO=octocat/hello-world \
  -e DRONE_COMMIT_SHA=d8dbe4d94f15fe89232e0402c6e8a0ddf21af3ab \
  -v $(pwd):$(pwd) \
  -w $(pwd) \
  --privileged \
  plugins/docker --dry-run

Writing a custom plugin is therefore simple: just write a script that handles the relevant environment variables. For example, a curl plugin:

pipeline:
  webhook:
    image: foo/webhook
    url: http://foo.com
    method: post
    body: |
      hello world

Write a script (script.sh):

#!/bin/sh

# PLUGIN_METHOD, PLUGIN_BODY and PLUGIN_URL come from the parameters in .drone.yml
curl \
  -X ${PLUGIN_METHOD} \
  -d ${PLUGIN_BODY} \
  ${PLUGIN_URL}

Then package it with a Dockerfile:

FROM alpine
ADD script.sh /bin/
RUN chmod +x /bin/script.sh
RUN apk -Uuv add curl ca-certificates
ENTRYPOINT /bin/script.sh

Build and push the image:

docker build -t foo/webhook .
docker push foo/webhook

Once the docker image is pushed, the plugin is done.

That said, most of the time we are too lazy to write anything at all and just run the command directly inside a container. If all you need is a curl call, there is no need to develop a plugin:

pipeline:
  webhook:
    image: appropriate/curl  # just use a small stock image that ships curl (busybox alone does not include curl)
    commands:
      - curl -X POST -d 123 http://foo.com   # done; we did not even bother to write a plugin

It is worth noting that more complex functionality still deserves a real plugin, such as the docker plugin used in the publish step. One more thing about that plugin:
it runs a docker engine inside docker, i.e. the image is built by docker in docker,
so the devicemapper storage driver is not supported. Please upgrade your kernel and use overlay2, or use aufs on ubuntu.

Other recommendations

  • Image registry: Harbor
  • Artifact repository: Nexus; it works very well as a maven repository or yum repository and as a place to keep binary files, strongly recommended

Summary

For efficient automation, "everything as code" is essential. Many people like clicking around a UI and filling in parameters, but that is actually an error-prone approach
and does not necessarily improve efficiency. How a project is built, how it is published, and how it is deployed should all be expressed in code with no ambiguity; let programs do what programs can do, so that in the end people only need to trigger things.


These are personal opinions; to discuss further you can join QQ group 98488045.

Source: www.cnblogs.com/sealyun/p/11265823.html