K8S installation and deployment

Four ways to install k8s in China:
1. Use kubeadm to install from offline images
2. Use a managed k8s service on a public cloud platform such as Alibaba Cloud
3. Install from the official yum repository (a very old version)
4. Install from binary packages with kubeasz (github)

Installation steps:
1. Environment configuration:
1. Set the host name and time zone
timedatectl set-timezone Asia/Shanghai #execute on all three machines
hostnamectl set-hostname master #execute on 132
hostnamectl set-hostname node1 #execute on 133
hostnamectl set-hostname node2 #execute on 137
2. Add host entries to /etc/hosts; all three virtual machines must be configured so the hosts can find each other by name
vim /etc/hosts
192.168.26.70 master
192.168.26.73 node1
192.168.26.77 node2
3. Disable SELinux and the firewall on all three virtual machines. This step prevents assorted network problems caused by the firewall during the learning phase; skip it in a production environment.
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld
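
A quick way to confirm the environment changes took effect (a minimal check; node1 assumes the /etc/hosts entries above):
getenforce                      #should print Permissive or Disabled
systemctl is-active firewalld   #should print inactive
ping -c 1 node1                 #verifies the /etc/hosts entries resolve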

2. Install kubeadm
1. Upload the image package to each node of the server
mkdir /usr/local/k8s-install
cd /usr/local/k8s-install
#Upload the installation files into this directory with XFTP
2. Install Docker on each CentOS machine (here Docker is installed from local packages; it can also be installed in other ways)
tar -zxvf docker-ce-18.09.tar.gz
cd docker
yum localinstall -y *.rpm
systemctl start docker
systemctl enable docker
3. Make sure the cgroup driver on every node is cgroupfs
#cgroups is the abbreviation of control groups. It is a Linux kernel mechanism for aggregating and partitioning tasks, organizing tasks into one or more subsystems through a set of parameters.
#cgroups is the underlying foundation of the resource management and control part of IaaS virtualization (kvm, lxc, etc.) and PaaS container sandboxes (Docker, etc.).
#A subsystem groups tasks by a specified attribute on top of cgroup's task-partitioning capability and is mainly used to implement resource control; each subsystem represents a single resource.
#Within cgroup, the divided task groups are organized in a hierarchical structure, and multiple subsystems form a data structure similar to a multi-rooted tree.
docker info | grep cgroup
If the driver is not cgroupfs, execute the following:
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
systemctl daemon-reload && systemctl restart docker
4. Install kubeadm
# kubeadm is a cluster deployment tool
cd /usr/local/k8s-install/kubernetes-1.14
tar -zxvf kube114-rpm.tar.gz
cd kube114-rpm
yum localinstall -y *.rpm
5. Disable the swap area (swap can be understood as virtual memory; k8s should avoid the swap area to prevent unexpected problems)
swapoff -a
vi /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0   #comment out the swap line
6. Configure the bridge
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
7. Load the k8s images into Docker to facilitate subsequent deployment (the k8s images are not freely available from the Docker registry)
cd /usr/local/k8s-install/kubernetes-1.14
docker load -i k8s-114-images.tar.gz
docker load -i flannel-dashboard.tar.gz
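
Before initializing the cluster, the previous steps can be sanity-checked in one pass (a minimal sketch; the grep pattern assumes the image names from the offline bundle):
docker info | grep -i cgroup                 #expect Cgroup Driver: cgroupfs
sysctl net.bridge.bridge-nf-call-iptables    #expect = 1
free -h                                      #the Swap line should show 0B
docker images | grep -E 'kube|flannel'       #the loaded k8s and flannel images should be listed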
3. Deploy the k8s cluster through kubeadm
1. master server configuration
kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16
After the above command finishes, it prints the next commands to execute, which set up the kubectl configuration:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
#View problematic pods
kubectl get pod --all-namespaces
#Install the flannel network component, which is responsible for communication between pods
kubectl create -f kube-flannel.yml
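
Until the flannel pods are running, the nodes typically report NotReady; a quick way to watch progress (a minimal check):
kubectl get pods -n kube-system -o wide   #wait for the kube-flannel pods to reach Running
kubectl get nodes                         #the status should change to Ready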
2. Join the NODE nodes
After executing the kubeadm init command on the master host, you will be prompted how to join the worker nodes, as follows:
kubeadm join 192.168.4.130:6443 --token 911xit.xkp2gfxbvf5wuqz7 \
    --discovery-token-ca-cert-hash sha256:23db3094dc9ae1335b25692717c40e24b1041975f6a43da9f43568f8d0dbac72
If you forget the token, execute kubeadm token list on the master to view it, then run on the node:
kubeadm join 192.168.163.132:6443 --token aoeout.9k0ybvrfy09q1jf6 --discovery-token-unsafe-skip-ca-verification
kubectl get nodes   #confirm the nodes have joined
3. Master opens the dashboard (execute on the master node only)
kubectl apply -f admin-role.yaml
kubectl apply -f kubernetes-dashboard-admin.rbac.yaml
kubectl -n kube-system get svc
Access http://192.168.163.132:32000
4. Deploy tomcat through the dashboard interface: after filling in the information in the form, click Deploy to complete the deployment of the image.
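For reference, the same form-based deployment can also be done from the command line; a rough equivalent sketch (tomcat-demo is a made-up name):
kubectl run tomcat-demo --image=tomcat:latest --port=8080 --replicas=2   #on kubectl 1.14 this creates a Deployment; newer kubectl creates a bare pod, where kubectl create deployment would be used instead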
5. Deploy tomcat through script:
1. Deployment-related commands:
kubectl create -f <deploy yml file> #Create the deployment
kubectl apply -f <deploy yml file> #Update the deployment configuration (creates it if it does not exist)
kubectl get pod [-o wide] #View deployed pods
kubectl logs [-f] <pod name> #View a pod's output log
kubectl describe pod <pod name> #View pod details
2. yml file writing
Simplest version example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deploy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tomcat-cluster
    spec:
      containers:
      - name: tomcat-cluster-container
        image: tomcat:latest
        ports:
        - containerPort: 8080
3. External access to the tomcat cluster
a. A Service is used to expose the application to the outside world (a Service does load balancing, similar to nginx)
b. Write the yml of the service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service-service
spec:
  type: NodePort            #this declares the NodePort type
  selector:
    app: tomcat-cluster     #the selector must select the label of the containers
  ports:
  - port: 8000              #the port used for access between services inside the cluster
    targetPort: 8080        #the port exposed by the container itself
    nodePort: 32500         #the port exposed on the host; through it the specified machines in the cluster can be reached
c. After creating the service, port is the port of the service inside the cluster, not a port on the host machine. From outside the host, tomcat can be accessed through 192.168.26.70:32500 (the nodePort). This method reaches each node directly without going through the "load balancing" of the service.
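
To confirm the service was created and picked up the pods, these standard commands can be used (a minimal check):
kubectl get svc tomcat-service        #shows the ClusterIP and the 8000:32500 port mapping
kubectl get endpoints tomcat-service  #should list the pod IPs selected by app: tomcat-cluster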

4. Cluster file sharing
a. Install nfs on the k8s master host
   i. yum install -y nfs-utils rpcbind
   ii. Configure nfs:
      vim /etc/exports
      The content of exports is:
      /usr/local/data/www-data 192.168.26.70/24(rw,sync)
      Parameter description: /usr/local/data/www-data specifies the path to be exposed; 192.168.26.70 is the IP address of the host and 24 is the network mask; (rw,sync) exposes the path with read and write permission and synchronous writes.
b. Start the services:
   i. systemctl start nfs.service #start the nfs service
   ii. systemctl start rpcbind.service #start the rpcbind service
   iii. systemctl enable nfs.service #start nfs at boot
   iv. systemctl enable rpcbind.service #start rpcbind at boot
c. Configure nfs on the node servers:
   i. yum install -y nfs-utils
   ii. showmount -e 192.168.26.70 #check that the master server's export is visible
   iii. mount 192.168.26.70:/usr/local/data/www-data /mnt #mount the shared directory on the master server to the /mnt directory
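A quick way to verify the share works end to end (a sketch; test.txt is just an arbitrary file name):
echo hello > /usr/local/data/www-data/test.txt   #on the master
ls /mnt                                          #on a node: test.txt should appear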

5. Deploy the mount point
a. Just like docker's disk mapping, k8s can also specify disk mapping. Combined with the nfs cluster file sharing from the previous step, we only need to modify a file once and it is updated across the entire cluster.
b. The mount point deployment is implemented as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deploy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tomcat-cluster
    spec:
      volumes:             # declaration of the disk mapping
      - name: web-app      # the name of the mapping; must be consistent with the configuration in containers
        hostPath:          # path on the host
          path: /mnt       # the specific path
      containers:
      - name: tomcat-cluster-container
        image: tomcat:latest
        ports:
        - containerPort: 8080
        volumeMounts:      # directories to be mounted in the container
        - name: web-app    # corresponds to the volume name above
          mountPath: /usr/local/tomcat/webapps/ROOT # the mount directory inside the container
6. Use Rinetd (a port forwarding tool) to provide external load balancing support for the Service
a. When the Service was created earlier, its type was specified as NodePort and the externally exposed port was 32500. To use Rinetd, this configuration needs to be modified:
   i. The content of the original tomcat-service.yml configuration file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service-service
spec:
  type: NodePort            #this declares the NodePort type
  selector:
    app: tomcat-cluster     #the selector must select the label of the containers
  ports:
  - port: 8000              #the port used for access between services inside the cluster
    targetPort: 8080        #the port exposed by the container itself
    nodePort: 32500         #the port exposed on the host
   ii. Modify tomcat-service.yml to the following content, i.e. comment out type: NodePort and the nodePort line:
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service-service
spec:
# type: NodePort
  selector:
    app: tomcat-cluster
  ports:
  - port: 8000
    targetPort: 8080
#   nodePort: 32500
b. kubectl apply -f tomcat-service.yml #update the service configuration
c. kubectl describe service tomcat-service #view the service details:
Name:              tomcat-service
Namespace:         default
Labels:            app=tomcat-service-service
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"tomcat-service-service"},"name":"tomcat-service","namesp...
Selector:          app=tomcat-cluster
Type:              ClusterIP
IP:                10.101.53.16
Port:              8000/TCP
TargetPort:        8080/TCP
Endpoints:         10.244.1.8:8080,10.244.2.7:8080 #the two underlying docker containers
Session Affinity:  None
Events:
At this point the application can be accessed inside the k8s cluster through ip+port: with the information shown above, the tomcat-service service can be reached at 10.101.53.16:8000, and tomcat-service forwards each request to 10.244.1.8:8080 or 10.244.2.7:8080 through its load balancing algorithm. However, this IP is not a real external IP; it is only maintained inside the k8s cluster, so the cluster cannot be accessed through it from outside.
How can the IP and port inside the k8s cluster be mapped to the IP and port of the physical machine? The answer is Rinetd.
d. Rinetd is a transmission control protocol redirection tool for Linux that can forward data from a source IP and port to a target IP and port. It is used here to expose the Service outside the Kubernetes cluster.
e. Rinetd installation steps:
   i. wget http://www.boutell.com/rinetd/http/rinetd.tar.gz #download the source package (may fail due to network problems)
   ii. tar -zxvf rinetd.tar.gz #unzip the file
   iii. cd rinetd/
   iv. sed -i 's/65536/65535/g' rinetd.c #patch the rinetd source to fix the allowed port mapping range
   v. yum install -y gcc #install the C compiler
   vi. make && make install
   vii. vi /etc/rinetd.conf #create the rinetd configuration file with the following content:
0.0.0.0 8000 10.101.53.16 8000 #0.0.0.0 means any source address may connect to local port 8000, which is forwarded to port 8000 of 10.101.53.16 (the IP of tomcat-service in k8s)
   viii. rinetd -c /etc/rinetd.conf #load the configuration file to make it take effect
f. At this point, the tomcat-service service can be accessed through the host's IP on port 8000.
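A final check that the forwarding works (a sketch; 192.168.26.70 assumes the master IP from the earlier /etc/hosts configuration):
ss -ltn | grep 8000                #rinetd should be listening on port 8000
curl http://192.168.26.70:8000/    #should return a response from the tomcat cluster (a 404 page if the mounted ROOT directory is still empty)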
7. At this point, the entire process of deploying an application through k8s is complete.
