Putting the k8s Hybrid Cloud Cluster Model into Practice

  In the " K8S cloud cluster mashup model, may help you save more than 50% of the cost of services ," an article, introduces manner using k8s + virtual nodes mixed cluster, a load having a period peak, service cost troughs alternately law, improve telescopic efficiency of service deployment. Specific landing steps in this article to the program and to share the basic operation and maintenance, provide a reference for those who have this need.

Deployment Requirements

  1. The service must be available 24 hours a day
  2. Before the business peak arrives, it automatically scales out to a specified number of containers (the number is determined in advance by assessing the traffic and load testing)
  3. After the business peak, it automatically scales back down to a specified number of containers
  4. The elastically scaled service must load-balance automatically, so that a traffic burst never exceeds the service's capacity

 

 


Basic Concepts

   Docker

  • Container: the running form of an application

  • Image: the definition, or packaged form, of a container

  • Container Registry service: the image repository

   

   k8s hybrid cloud cluster

  • Cluster - managed edition, dedicated edition, Serverless edition

  • Node - Master Node, Worker Node

  • Namespace

  • Pod

  • Replication Controller

  • Replica Set

  • Deployment

  • Service

  • Label

  • Volume - PV, PVC

  • Ingress

 

 Preparing the Image

    The Dockerfile here supports distributed sessions (see [redisson-tomcat: achieving rapid deployment from single machine to multiple machines]), so it additionally replaces some configuration and adds the corresponding dependency jars. Prepare your own Dockerfile according to your actual situation.

FROM tomcat:8.5.43-jdk8-openjdk
# Remove unneeded or to-be-replaced files; set the container time zone to Asia/Shanghai
RUN rm -rf /usr/local/tomcat/webapps/* && \
    rm -f /usr/local/tomcat/conf/context.xml && \
    cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# Copy the configuration files
COPY ./target/classes/redisson-tomcat/* /usr/local/tomcat/conf/
# Copy the dependency jar packages we need
COPY ./dockerimage-depends/*.jar /usr/local/tomcat/lib/
# Replace catalina.sh to fix the time zone problem
COPY ./dockerimage-depends/catalina.sh /usr/local/tomcat/bin/
# Copy the war package to deploy
COPY ./target/biz-server.war /usr/local/tomcat/webapps/
EXPOSE 8080
# Override the default startup command when the container starts.
# "catalina.sh run" prints logs to catalina.out in the foreground; startup.sh alone
# runs in the background, so the shell exits and the docker container exits with it.
# Hence the tail -F to keep a foreground process running.
#CMD ["catalina.sh", "run"]
CMD /usr/local/tomcat/bin/startup.sh && tail -F /usr/local/tomcat/logs/catalina.out

 

Creating a Cluster (Managed Edition)

Reference: https://help.aliyun.com/document_detail/85903.html


VPC (Virtual Private Cloud): nodes in the same VPC are on one private network, and Pods can reach one another.

VSwitch: choose several vSwitches in different availability zones.

SNAT: if the VPC cannot reach the public network, selecting "Configure SNAT" creates a NAT gateway and configures SNAT rules automatically. If you need to put the service's IP on a cloud database whitelist, or a WeChat official account whitelist, the service needs a public network IP; ECS nodes that have a public IP are used preferentially, while nodes without a public IP, and virtual nodes, need SNAT configured to get public network access.

 

Add Nodes

Reference: https://help.aliyun.com/document_detail/86919.html

Only ECS instances in the same region as the cluster can be added as nodes.

Automatic add: the system disk is replaced and the original system disk is released (use with caution!)

Manual add: you need to run a specified command on the ECS instance to install the required software dependencies.

 

The figure below shows a cluster with three ECS nodes and one virtual node (virtual-kubelet) added.

 

Adding Virtual Nodes

Reference: https://help.aliyun.com/document_detail/118970.html

When a vSwitch is specified, scheduling can only use resources in the availability zone where that vSwitch lives (e.g., if the vSwitch is in Hangzhou zone G, Pods scheduled onto the virtual node use zone G resources and cannot be scheduled to zone H or any other zone).

The virtual node's configuration can be updated (e.g., to replace the vSwitch).

 

Virtual nodes are added by installing the ack-virtual-node application from the app catalog.
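As a sketch, a Pod can be directed onto the virtual node with a nodeSelector plus a toleration for the virtual node's taint. The exact label and taint keys depend on the ack-virtual-node version, so treat the ones below as assumptions to verify against your cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: biz-server-vnode            # illustrative name
spec:
  containers:
  - name: biz-server
    image: registry.cn-hangzhou.aliyuncs.com/biz/biz-server:latest
  nodeSelector:
    type: virtual-kubelet           # schedule onto the virtual node
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists                # tolerate the virtual node's taint
```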

 

Create an application (Deployment)

 Create from an image: see https://help.aliyun.com/document_detail/90406.html

Create from a template (yaml): see https://help.aliyun.com/document_detail/86512.html


As shown below, the business service Deployment is created according to the requirements, along with the Services for the two vision services.
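A minimal yaml template for the business service Deployment might look like the sketch below; the replica count and resource values are illustrative placeholders to be replaced with the numbers from your own load tests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: biz-server
spec:
  replicas: 2                       # baseline (trough) capacity
  selector:
    matchLabels:
      app: biz-server
  template:
    metadata:
      labels:
        app: biz-server
    spec:
      containers:
      - name: biz-server
        image: registry.cn-hangzhou.aliyuncs.com/biz/biz-server:latest
        ports:
        - containerPort: 8080       # the port EXPOSEd in the Dockerfile
        resources:
          requests:
            cpu: "1"                # placeholder values
            memory: 2Gi
```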

 

Load Balancing

There are three ways a cluster Service can be exposed for load balancing:

  • Virtual cluster IP: accessible from Pods and Nodes in the cluster, but not from outside the cluster

  • Internal network load balancing: assigns a private network IP, reachable within the VPC, not necessarily from within the cluster

  • External network load balancing: assigns a public network IP, accessible from outside

The business service needs public network access, so an external network load balancer is created and the domain name is resolved to its public IP. The two vision services only need to be reachable inside the cluster, so the virtual cluster IP is used (more efficient than going through a load balancer).
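The two exposure modes can be declared roughly as follows (names are illustrative; on Alibaba Cloud a `type: LoadBalancer` Service provisions an SLB instance):

```yaml
# Public load balancer for the business service
apiVersion: v1
kind: Service
metadata:
  name: biz-server-external
spec:
  type: LoadBalancer              # allocates an external IP via SLB
  selector:
    app: biz-server
  ports:
  - port: 80
    targetPort: 8080
---
# Cluster-internal virtual IP for a vision service
apiVersion: v1
kind: Service
metadata:
  name: vision-service            # illustrative name
spec:
  type: ClusterIP                 # reachable only inside the cluster
  selector:
    app: vision-service
  ports:
  - port: 8080
```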

 

Storage Management

 Virtual nodes currently only support mounting emptyDir (temporary), NFS (NAS), and ConfigFile volumes.

 

NAS reference: https://help.aliyun.com/document_detail/27518.html

The NAS can also be mounted on an ECS instance and accessed there over ssh.

 

The figure below shows a NAS directory mounted onto a directory inside the container.
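Mounting a NAS directory into the container over NFS can be sketched like this; the server address and paths are placeholders to replace with your NAS mount point:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: biz-server-with-nas
spec:
  containers:
  - name: biz-server
    image: registry.cn-hangzhou.aliyuncs.com/biz/biz-server:latest
    volumeMounts:
    - name: logs
      mountPath: /usr/local/tomcat/logs     # directory inside the container
  volumes:
  - name: logs
    nfs:
      server: xxxxxx.cn-hangzhou.nas.aliyuncs.com   # NAS mount point (placeholder)
      path: /biz-server/logs                        # directory on the NAS (placeholder)
```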

Manual Scaling

Scale the number of Pods manually with the Deployment's scale operation, e.g. `kubectl scale deployment biz-server --replicas=5`

 

 Automatic Scaling

 Autoscaling based on CPU and memory load

Since it takes time both to detect that the load has exceeded the threshold and to start new containers, the delay may affect the service. You can mitigate this by setting a lower load threshold, and, if the traffic pattern is fairly regular, by using scheduled scaling instead.
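A CPU-based HorizontalPodAutoscaler sketch, with a deliberately low threshold to compensate for the start-up delay (the name and values are illustrative):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: biz-server-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: biz-server
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50   # low threshold so scaling starts early
```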

 

 Scheduled Scaling

Implemented with the kubernetes-cronhpa-controller

Reference: https://github.com/AliyunContainerService/kubernetes-cronhpa-controller

 

Create the scheduled-scaling template for the application (note that the default time zone is UTC, so subtract 8 hours from Beijing time in the configuration).
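A CronHPA template sketch, assuming a morning peak; the CRD fields follow the kubernetes-cronhpa-controller README (which uses a six-field, seconds-first cron format), and the names, times, and sizes are illustrative:

```yaml
apiVersion: autoscaling.alibabacloud.com/v1beta1
kind: CronHorizontalPodAutoscaler
metadata:
  name: cronhpa-biz-server
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: biz-server
  jobs:
  - name: scale-up-before-peak
    schedule: "0 0 1 * * *"       # 09:00 Beijing time = 01:00 UTC
    targetSize: 10
  - name: scale-down-after-peak
    schedule: "0 0 14 * * *"      # 22:00 Beijing time = 14:00 UTC
    targetSize: 2
```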

 

There is no console management for CronHPA; view and update it with commands, for reference:

# view
kubectl describe cronhpa cronhpa-herpes-slave
# edit
kubectl edit cronhpa/cronhpa-herpes-slave -n default
# delete
kubectl delete cronhpa/cronhpa-herpes-slave -n default
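Since the schedules are written in UTC, a trivial helper to convert an hour of day in Beijing time (UTC+8) to the hour to put in the cron expression, just to avoid sign mistakes:

```shell
#!/bin/sh
# Convert an hour of day in Beijing time (UTC+8) to UTC for CronHPA schedules.
beijing_to_utc_hour() {
  echo $(( ($1 + 24 - 8) % 24 ))
}
beijing_to_utc_hour 9    # 09:00 Beijing time -> 1 (01:00 UTC)
beijing_to_utc_hour 2    # 02:00 Beijing time -> 18 (18:00 UTC, previous day)
```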

 

Rapid Build and Deployment

First create a redeployment trigger on the Deployment. Once created, it generates a URL; simply sending a request to that URL triggers a redeploy that pulls the latest image and completes the deployment.

 

 

 


Combined with Jenkins, this enables rapid build and release of the service.

 

 

 


Reference deployment script

#!/bin/bash

work_dir=/var/lib/jenkins/workspace/$1
depends_dir=/home/jenkins/dockerimage-depends/
# Copy the dependency jars into the docker build context, so they can be
# copied into the image's tomcat directory
cp -r $depends_dir $work_dir
# Build the image locally
cd $work_dir
docker build -t biz-server:latest .
# Push the image to the Aliyun container registry
sudo docker tag biz-server:latest registry.cn-hangzhou.aliyuncs.com/biz/biz-server:latest
sudo docker push registry.cn-hangzhou.aliyuncs.com/biz/biz-server:latest
# Fire the redeployment trigger to finish going live
curl https://cs.console.aliyun.com/hook/trigger?token=xxxxxxxxxxxxxxxx

 

Container Access

1. Get the container (Pod) name (see below)

 

 

 

2. On an ECS instance configured with kubectl access to the cluster, run the following command to enter the container

kubectl exec -it herpes-master-6447d58c4b-cqznf bash

 

 

 

Data Access

1. ssh into the cloud ECS instance that has the NAS disk mounted

2. Go to the corresponding mount directory; it contains the business service logs, the images, and the vision service logs

 

 

 

Deployment Architecture

With all components built, the final cluster structure, which meets the deployment requirements stated at the beginning, is as follows:

 

 

 

 

Summary

This deployment scheme has a certain barrier to entry for readers without a container background, the service has not been online for long, and the documentation is imperfect in places, so the author also stepped into quite a few pits in practice. The cluster is now serving stably, costs have been cut by more than two-thirds, and scaling is very convenient. If you happen to have this kind of business scenario and need, you are welcome to follow the WeChat official account "empty mountains Xinyu technology space" to exchange ideas. A PPT version of this article is also available; if needed, send "k8s" on the account's home page for the download link.

 

 

Recommended Reading

The k8s hybrid cloud cluster model may save you more than 50% of your service costs
The self-cultivation of an IT technician
The self-cultivation of an IT technology manager

 

Author: empty mountains Xinyu

Welcome to follow, and let's exchange hands-on technical experience in the enterprise IT field together.

 


Origin www.cnblogs.com/spec-dog/p/11462914.html