Achieving a gray release with k8s

  Gray (canary) releases are used all the time in real production deployments. The conventional method is to manually remove a back-end server from the front-end LB (load balancer), stop its service, upload the new code, and finally re-attach it to the LB to complete the software update. With CI/CD tooling this process becomes automated: the powerful open-source tool Jenkins, combined with GitLab or Gogs, can automatically pull the code and, driven by a pipeline script I write, connect to the LB to remove a back-end server, connect to that back-end server to upload the code and restart the service, and finally e-mail the administrator a report on the result. But k8s shows us an even more convenient and efficient way to implement a gray release, described below.

My understanding of k8s is still shallow, so the underlying principles will be described in detail in a follow-up article once I have studied them thoroughly; this post is aimed at beginners.

first we need to create the base image for this experiment:
1. write the Dockerfile:
  mkdir dockerfile && cd dockerfile
  vim Dockerfile    # Note: the file name starts with a capital letter.
    FROM alpine:latest

    MAINTAINER "ZCF <[email protected]>"

    ENV NGX_DOC_ROOT="/var/lib/nginx/html" HOSTNAME="" IP="" PORT="" index_page=""
    RUN apk add --no-cache nginx

    COPY entrypoint.sh /bin

    CMD ["/usr/sbin/nginx", "-g", "daemon off;"]    # start nginx as a foreground service; -g sets a global directive, here "daemon off;".
    ENTRYPOINT ["/bin/entrypoint.sh"]               # the CMD above is passed to /bin/entrypoint.sh as its arguments.


    # supporting files for the Dockerfile:
    1) entrypoint.sh, the script executed when the container starts:
      vim entrypoint.sh
        #!/bin/sh

        echo "<h1>WELCOME TO ${HOSTNAME:-www.zcf.com} WEB SITE | `hostname -i` | `date` | `hostname` | -${YOU_INFO:-v1}- |</h1>" > ${NGX_DOC_ROOT}/index.html
        cat > /etc/nginx/conf.d/default.conf << EOF
        server {
          server_name ${HOSTNAME:-www.zcf.com};
          listen ${IP:-0.0.0.0}:${PORT:-80};
          root ${NGX_DOC_ROOT};
          location / {
            index ${index_page} index.html index.htm;
          }
          location = /404.html {
            internal;
          }
        }
        EOF

        exec "$@"    # execute whatever arguments were passed in, i.e. the CMD.

  2) give entrypoint.sh execute permission:
    chmod +x entrypoint.sh

  3) create an html file to be used later for health checks:
    echo OK > chk.html
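The chk.html file can later back a readiness probe on the pods. A minimal sketch of such a probe is below; the field values are assumptions, not taken from the deployments created later, and chk.html must end up under the nginx document root for the probe to succeed.

```yaml
# fragment of a pod/deployment spec; probes GET /chk.html on port 80
readinessProbe:
  httpGet:
    path: /chk.html
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 5
```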

2. build the docker image:
  docker build --tag myapp:v1 ./

3. tag the finished image and upload it to the harbor registry.
  docker login harbor.zcf.com -u admin -p 123456      # log in to harbor
  docker tag myapp:v1 harbor.zcf.com/k8s/myapp:v1     # tag with the harbor repository path
  docker push harbor.zcf.com/k8s/myapp:v1             # upload the image to harbor

4. To demonstrate rolling back after a release, we also need to make a second image.
  docker run --name ngx1 -d -e YOU_INFO="DIY-HelloWorld-v2" harbor.zcf.com/k8s/myapp:v1
    # Note: -e passes an environment variable into the container. Because entrypoint.sh in the
    # myapp image uses the YOU_INFO variable, passing it here changes the nginx home page.

  docker commit --pause ngx1    # pause ngx1 and export the container's current state as a new image.

  docker kill ngx1 && docker rm -fv ngx1    # the image is made; remove the test container ngx1.

  root@k8s-n1:~# docker images
    REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
    <none>       <none>   85355d4af36c   6 seconds ago   7.02MB    # the image we just committed.

  # tag the new image as harbor.zcf.com/k8s/myapp:v2 so it can be uploaded to harbor:
  docker tag 85355d4af36c harbor.zcf.com/k8s/myapp:v2

  # test-run the image; if there is no problem, it can be uploaded to the local harbor.

  docker run --rm -d -p 83:80 --name ngx1 harbor.zcf.com/k8s/myapp:v2

  root@k8s-n1:~# curl http://192.168.111.80:83/    # check that the nginx home page now shows the YOU_INFO content.
  <h1>WELCOME TO www.zcf.com WEB SITE | Fri Jul 19 02:31:13 UTC 2019 | ec4f08f831de | 172.17.0.2 | -DIY-HelloWorld-v2- |</h1>

  docker kill ngx1    # delete the ngx1 container.

  docker push harbor.zcf.com/k8s/myapp:v2    # finally, upload the new image to harbor.
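As an aside, instead of docker commit the same v2 image could be built declaratively, which keeps the change reviewable in source control. A sketch, reusing the tags from above:

```dockerfile
# Build with: docker build -t harbor.zcf.com/k8s/myapp:v2 .
FROM harbor.zcf.com/k8s/myapp:v1
ENV YOU_INFO="DIY-HelloWorld-v2"
```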

5. now that we have myapp:v1 and myapp:v2, the gray-release test on k8s can begin.

  # Create three pods: one client and two nginx.
  # 1. create the client:
    kubectl run client --image=harbor.zcf.com/k8s/alpine:v1 --replicas=1
    # Note: alpine is a minimal Linux system; the image can be downloaded from many open-source mirror sites.
    kubectl get pods -o wide    # view the details of the created pod.

  # 2. create nginx:
  kubectl run nginx --image=harbor.zcf.com/k8s/myapp:v1 --port=80 --replicas=2

  kubectl get deployment -w    # watch k8s create the two pods for us.

  kubectl get pod -o wide

  # 3. log in to the client and test access to nginx.
  root@k8s-m1:/etc/ansible# kubectl get pod
    NAME                     READY   STATUS    RESTARTS   AGE
    client-f5cdb799f-2wsmr   1/1     Running   0          16h
    nginx-6d6d8b685-7t7xj    1/1     Running   0          99m
    nginx-6d6d8b685-xpx5r    1/1     Running   0          99m

  kubectl exec -it client-f5cdb799f-2wsmr sh
  / # ip addr
  / # for i in `seq 1000`; do wget -O - -q http://nginx/; sleep 1; done
  / # # Note: if your kube-dns is not deployed successfully, "nginx" here can be replaced with the cluster IP of the service.
  / # # kubectl get svc | grep nginx    # this shows the cluster IP of the nginx service.

  # 4. the tests above show that load balancing is already working.
    Now the gray-release test can begin:
    # update the deployment's image to myapp:v2
    kubectl set image --help
    kubectl set image deployment nginx nginx=harbor.zcf.com/k8s/myapp:v2    # upgrade the nginx deployment's image to myapp:v2

  # while the command above runs, watch the client's requests in another terminal: you will see the responses gradually change from v1 to DIY-HelloWorld-v2.
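To make that cut-over easier to see from the client pod, a small loop like the following can tally which version answers each request. This is a sketch: it assumes the service name nginx and the v1 / DIY-HelloWorld-v2 page strings shown earlier.

```shell
# sample 30 requests, one per second, and count responses per version string
for i in $(seq 30); do
  wget -qO- http://nginx/
  sleep 1
done | grep -o 'DIY-HelloWorld-v2\|v1' | sort | uniq -c
```

As the rollout proceeds, the v2 count grows while the v1 count shrinks to zero.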

  # 5. test dynamically adjusting the number of nginx pods
    kubectl scale --replicas=5 deployment nginx    # change the number of nginx pod replicas to 5.
    kubectl get pods

  # then, in the client's terminal, watch the changes: you will find more hostnames and IPs starting to appear in the responses.

  # 6. check the rollout status of the nginx image upgrade and confirm it succeeded
    kubectl rollout status deployment nginx

  # 7. check whether the myapp image has been upgraded to the latest version
    kubectl describe pods nginx-xxx-xx

  # 8. roll back to the previous version, i.e. myapp:v1
    kubectl rollout undo --help
    kubectl rollout undo deployment nginx

6. test access to nginx from outside the k8s cluster
  # modify the type of the myapp service so that clients outside the cluster can access it:
    kubectl edit svc myapp
      # change type: ClusterIP to type: NodePort

  # view the updated service information:
    kubectl get svc    # the myapp service now shows an extra port mapping such as 80:30020/TCP. Note: 30020 is assigned randomly
             # from the range NODE_PORT_RANGE="30000-60000", which was set when deploying with kubeasz.
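Rather than editing the service by hand, the type change and a fixed nodePort can be applied in one step with kubectl patch. A sketch, under the assumption that 30020 lies inside NODE_PORT_RANGE and that the service is named myapp as in the text:

```shell
# switch the service to NodePort and pin the node port to 30020
kubectl patch svc myapp -p \
  '{"spec":{"type":"NodePort","ports":[{"port":80,"nodePort":30020}]}}'
```

Pinning the port is handy when an external firewall or LB must be configured with a stable port.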

  # clients outside the cluster can then access myapp at:
    http://<physical IP of the master or any node>:30020/

# Well, I have not included screenshots of these test results; I hope readers will do plenty of hands-on testing, summarizing, and thinking of their own, so they can see and understand the results for themselves.




Origin www.cnblogs.com/wn1m/p/11287879.html