Ubuntu 16.04 K8s installation, step by step, with pitfalls recorded [good]

Article Directory
Environment information
Installation steps
System configuration changes
Install Docker
Install kubectl, kubelet, kubeadm
Configure the Master
Configure the Nodes
Deployment result check
Deploy MySQL on K8s (learning exercise)
Create mysql-rc.yaml
Create mysql-svc.yaml
Install
Deploy a Java application on K8s
Create a deployment
Create a service
Update the deployment
Other commands
References
K8S deployment commands

Environment information
Name: Version
Docker: 18.06.1-ce
Operating system: Ubuntu 16.04
K8s: v1.13.2

Machine information

IP / Role
10.2.14.78 Master
10.2.14.79 Node
10.2.14.80 Node
Installation steps
System configuration changes
Disable swap

swapoff -a

Also delete the line containing swap from /etc/fstab.
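The /etc/fstab edit can also be done with sed instead of by hand; a minimal sketch, demonstrated on a sample fstab line so it is safe to try (the in-place variant for the real file is shown as a comment):

```shell
# Sketch: comment out an active swap entry so swap stays off after reboot.
# Shown on a sample line; run the commented sed -i variant on a real host as root.
line="/dev/sda2 none swap sw 0 0"
echo "$line" | sed 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/# \1/'
# prints: # /dev/sda2 none swap sw 0 0
# On a real host: sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/# \1/' /etc/fstab
```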

Turn off the firewall

systemctl stop firewalld
systemctl disable firewalld

(Note: stock Ubuntu 16.04 ships ufw rather than firewalld; if firewalld is not installed, these commands can be skipped, or disable ufw with ufw disable.)
Disable SELinux

apt install selinux-utils
setenforce 0
Configure the hostname and IP of each host.
This walkthrough uses three hosts in total: one deployed as the Master, the other two as node1 and node2. The mapping between hostnames and IPs is as follows:

10.2.14.78 wangcf-k8s-m
10.2.14.79 wangcf-k8s-n1
10.2.14.80 wangcf-k8s-n2

At the same time, configure /etc/hosts on each machine as follows:

10.2.14.78 wangcf-k8s-m
10.2.14.79 wangcf-k8s-n1
10.2.14.80 wangcf-k8s-n2
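The three mappings can be appended in one shot with a heredoc; a sketch that writes to a scratch file so it is safe to run as-is (point HOSTS_FILE at /etc/hosts, as root, on the real machines):

```shell
# Sketch: append the cluster's name-to-IP mappings to /etc/hosts.
# HOSTS_FILE is a scratch copy here; set it to /etc/hosts in practice.
HOSTS_FILE=$(mktemp)
cat <<'EOF' >>"$HOSTS_FILE"
10.2.14.78 wangcf-k8s-m
10.2.14.79 wangcf-k8s-n1
10.2.14.80 wangcf-k8s-n2
EOF
grep -c 'wangcf-k8s' "$HOSTS_FILE"   # prints 3
```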
Install Docker
Perform the following operations on both the Master and the Node machines.

Install tools:
apt-get update && apt-get install -y apt-transport-https curl

Add the key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

Install Docker:
apt-get install -y docker.io
Check the Docker version:

root@ubuntu:~# docker version
Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.4
 Git commit:        e68fc7a
 Built:             Thu Nov 15 21:12:47 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       e68fc7a
  Built:            Sun Nov 11 21:53:22 2018
  OS/Arch:          linux/amd64
  Experimental:     false
Start the Docker service:

systemctl enable docker
systemctl start docker
systemctl status docker
Use the Aliyun accelerator

Due to network conditions, pulling images from Docker Hub can be very slow.

Modify the file:

vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://alzgoonw.mirror.aliyuncs.com"],
  "live-restore": true
}

Restart the Docker service:

systemctl daemon-reload
systemctl restart docker
Install kubectl, kubelet, kubeadm
Perform the following operations on both the Master and the Node machines.

Add the key:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

Tested pitfall: this may fail with the error: gpg: no valid OpenPGP data found

Note: it can be worked around with two commands: first save the key to a file with curl -O https://packages.cloud.google.com/apt/doc/apt-key.gpg, then load it with apt-key add apt-key.gpg.

Add the Kubernetes software source:

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

The above is the official source; behind the firewall it needs to be changed as follows:

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main
EOF

Install:

# apt-get update && apt-get install -y kubelet kubeadm kubectl
# systemctl enable kubelet

-- Changing the source --

Problem: apt-get update errors with a timeout (blocked). The apt source needs to be changed to the USTC mirror:

vim /etc/apt/sources.list.d/kubernetes.list

Change the contents as follows, then reinstall:

# deb http://apt.kubernetes.io/ kubernetes-xenial main
deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main

Note: the codename for Ubuntu 16.04 is xenial.

Configure the Master
Add the following environment variable at the bottom of /etc/profile:

export KUBECONFIG=/etc/kubernetes/admin.conf

# restart kubelet
systemctl daemon-reload
systemctl restart kubelet

Execute on the master node:

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.2.14.78 --kubernetes-version=v1.13.2 --ignore-preflight-errors=Swap

--pod-network-cidr specifies the IP address range available to pods on the nodes; these are internal IPs.

--apiserver-advertise-address is the master's IP address.

--kubernetes-version can be checked with kubectl version.

Unfortunately this errors out: k8s.gcr.io is blocked and the image downloads fail.

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
........
Based on the error message, find a domestic mirror site (Docker needs the Aliyun mirror repository configured):

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.2
Re-tag these images:

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.2 k8s.gcr.io/kube-controller-manager:v1.13.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.2 k8s.gcr.io/kube-scheduler:v1.13.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.2 k8s.gcr.io/kube-proxy:v1.13.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.2 k8s.gcr.io/kube-apiserver:v1.13.2
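The seven pull-and-retag pairs can be collapsed into one loop over the image names; a dry-run sketch that only prints the docker commands (drop the echo in front of each to actually execute them):

```shell
# Sketch: mirror the blocked k8s.gcr.io images via the Aliyun registry, then retag.
# Dry run: each docker command is echoed rather than executed.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.13.2 kube-controller-manager:v1.13.2 \
           kube-scheduler:v1.13.2 kube-proxy:v1.13.2 \
           pause:3.1 etcd:3.2.24 coredns:1.2.6; do
  echo docker pull "$MIRROR/$img"
  echo docker tag "$MIRROR/$img" "k8s.gcr.io/$img"
done
```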
Re-execute:

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.2.14.78 --kubernetes-version=v1.13.2 --ignore-preflight-errors=Swap
The following output appears; the last line is the command the nodes need in order to join the cluster:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 10.2.14.78:6443 --token h7u22o.nk23ias5f1ft8hj9 --discovery-token-ca-cert-hash sha256:9f93785608c9a9de3e5d74e9ed30b8302691abfee7efd946a8c1b80d8582fe92
After installing the Master node, checking node information (kubectl get nodes) shows the node state as NotReady. Investigating, the NotReady state is because the CNI plugin is not configured; in other words, the network has not been set up yet. Several networks can be used; here the author picks the commonly used flannel network:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Configure the Nodes
Run the following command on each node machine (it is the kubeadm join command returned when configuring the master) to join the master's cluster:

kubeadm join 10.2.14.78:6443 --token h7u22o.nk23ias5f1ft8hj9 --discovery-token-ca-cert-hash sha256:9f93785608c9a9de3e5d74e9ed30b8302691abfee7efd946a8c1b80d8582fe92

Check node status on the master; the Node status is NotReady:

root@wangcf-k8s-m:~# kubectl get nodes
NAME            STATUS     ROLES    AGE     VERSION
wangcf-k8s-m    Ready      master   20m     v1.13.2
wangcf-k8s-n1   NotReady   <none>   8m21s   v1.13.2
wangcf-k8s-n2   NotReady   <none>   2m40s   v1.13.2
Checking pod state, some services have not started normally because each node still lacks the images; they need to be downloaded manually, in the same way the images were downloaded on the master.

root@wangcf-k8s-m:~# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-86c58d9df4-hpbbh 0/1 ContainerCreating 0 18m
kube-system coredns-86c58d9df4-qj56q 0/1 ContainerCreating 0 18m
kube-system etcd-wangcf-k8s-m 1/1 Running 2 17m
kube-system kube-apiserver-wangcf-k8s-m 1/1 Running 2 17m
kube-system kube-controller-manager-wangcf-k8s-m 1/1 Running 2 17m
kube-system kube-flannel-ds-amd64-bskks 0/1 Init:0/1 0 2m34s
kube-system kube-flannel-ds-amd64-rdnw2 1/1 Running 0 2m34s
kube-system kube-flannel-ds-amd64-sdbxj 0/1 Init:0/1 0 55s
kube-system kube-proxy-6h6rv 0/1 ContainerCreating 0 55s
kube-system kube-proxy-fsfwq 0/1 ContainerCreating 0 6m36s
kube-system kube-proxy-z7dqx 1/1 Running 2 18m
kube-system kube-scheduler-wangcf-k8s-m 1/1 Running 2 17m
Deployment result check
root@wangcf-k8s-m:~# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-86c58d9df4-9ptww 1/1 Running 0 4m9s
kube-system coredns-86c58d9df4-xg78d 1/1 Running 0 4m9s
kube-system etcd-wangcf-k8s-m 1/1 Running 2 24m
kube-system kube-apiserver-wangcf-k8s-m 1/1 Running 2 24m
kube-system kube-controller-manager-wangcf-k8s-m 1/1 Running 2 24m
kube-system kube-flannel-ds-amd64-bskks 0/1 Init:0/1 0 9m42s
kube-system kube-flannel-ds-amd64-rdnw2 1/1 Running 0 9m42s
kube-system kube-flannel-ds-amd64-sdbxj 0/1 Init:0/1 0 8m3s
kube-system kube-proxy-6h6rv 1/1 Running 0 8m3s
kube-system kube-proxy-fsfwq 1/1 Running 0 13m
kube-system kube-proxy-z7dqx 1/1 Running 2 25m
kube-system kube-scheduler-wangcf-k8s-m 1/1 Running 2 24m
root@wangcf-k8s-m:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
wangcf-k8s-m Ready master 26m v1.13.2
wangcf-k8s-n1 NotReady <none> 14m v1.13.2
wangcf-k8s-n2 Ready <none> 8m21s v1.13.2
root@wangcf-k8s-m:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
wangcf-k8s-m Ready master 26m v1.13.2
wangcf-k8s-n1 NotReady <none> 14m v1.13.2
wangcf-k8s-n2 Ready <none> 8m24s v1.13.2
root@wangcf-k8s-m:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
wangcf-k8s-m Ready master 26m v1.13.2
wangcf-k8s-n1 NotReady <none> 14m v1.13.2
wangcf-k8s-n2 Ready <none> 8m31s v1.13.2
root@wangcf-k8s-m:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
wangcf-k8s-m Ready master 26m v1.13.2
wangcf-k8s-n1 Ready <none> 14m v1.13.2
wangcf-k8s-n2 Ready <none> 9m5s v1.13.2
root@wangcf-k8s-m:~# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}

Deploy MySQL on K8s (learning exercise)
Create mysql-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-rc
  labels:
    name: mysql-rc
spec:
  replicas: 1
  selector:
    name: mysql-pod
  template:
    metadata:
      labels:
        name: mysql-pod
    spec:
      containers:
      - name: mysql
        image: mysql
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "password"
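ReplicationController is the legacy controller type; on a 1.13 cluster the same workload could also be written as a Deployment. A sketch, untested here, reusing the same labels, image, and env as mysql-rc.yaml above (the name mysql-deployment is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mysql-pod
  template:
    metadata:
      labels:
        name: mysql-pod
    spec:
      containers:
      - name: mysql
        image: mysql
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "password"
```

A Deployment gets rolling updates and rollout history for free, which the ReplicationController does not provide.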
Create mysql-svc.yaml

[root@k8s-master ~]# cat mysql-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-svc
  labels:
    name: mysql-svc
spec:
  type: NodePort
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
    name: http
    nodePort: 30000
  selector:
    name: mysql-pod
Install
Have k8s execute the files; this downloads the mysql image and runs the mysql container:

[root@k8s-master ~]# kubectl create -f mysql-rc.yaml
replicationcontroller "mysql-rc" created
[root@k8s-master ~]# kubectl create -f mysql-svc.yaml
service "mysql-svc" created
On one of the nodes, check that the mysql container instance has started:

root@wangcf-k8s-n1:~# docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED        STATUS        PORTS   NAMES
338cd4b675ab   mysql   "docker-entrypoint.s…"   15 hours ago   Up 15 hours           k8s_mysql_mysql-rc-d5zht_default_f55914bc-1a49-
Enter the container and verify that the mysql version is 8.0.13:

root@wangcf-k8s-n1:~# docker exec -it 338cd4b675ab bash
root@mysql-rc-d5zht:/# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 23
Server version: 8.0.13 MySQL Community Server - GPL
Set up remote access for root

$mysql -u root -p
Enter password:
mysql> use mysql;
mysql> GRANT ALL ON *.* TO 'root'@'%';
Query OK, 0 rows affected (0.04 sec)

mysql> ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'password';
Query OK, 0 rows affected (0.01 sec)
Finally, connect to the mysql container instance with a mysql client:

IP: (the IP of any master or node machine)

Username: root

Password: password [the password set above]

Port: 30000 [the nodePort set above]

Deploy a Java application on K8s
Use a deployment to deploy a Java application named demo.

The test application can be pulled with docker pull wangchunfa/demo; it is a Spring Boot project whose externally exposed port is 8771.

For building the Docker image, see the separate post "Deploying a Spring Boot project in a Docker environment".

Create a deployment
Create a new file demo_deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: wangcf-demo
        image: wangchunfa/demo:latest
        ports:
        - containerPort: 8771
Note: write apps/v1 for apiVersion.

apiVersion before 1.6: extensions/v1beta1
between 1.6 and 1.9: apps/v1beta1
after 1.9: apps/v1
Create the deployment and check its status; in the end you can see the application has been deployed:

root@wangcf-k8s-m:~/demo_deployment# kubectl create -f demo_deployment.yaml --record
deployment.apps/demo-deployment created
root@wangcf-k8s-m:~/demo_deployment# kubectl get deployment
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
demo-deployment   1/1     1            1           10s
root@wangcf-k8s-m:~/demo_deployment# kubectl get rs
NAME                        DESIRED   CURRENT   READY   AGE
demo-deployment-9c754c4d9   1         1         1       10s
Run kubectl get pods -o wide; note that the IP column shows the pod's internal network IP address, not the Node's IP address.

root@wangcf-k8s-m:~/demo_deployment# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-deployment-9c754c4d9-zp8wl 1/1 Running 0 69s 10.244.1.7 wangcf-k8s-n1 <none> <none>
mysql-rc-d5zht 1/1 Running 0 10d 10.244.1.2 wangcf-k8s-n1 <none> <none>
Test the application; it returns normally:

root@wangcf-k8s-n1:~# curl http://10.244.1.7:8771/api/v1/product/find?id=2
{"id":2,"name":"refrigerator data from port=8771","price":5342,"store":19}
Create a service
Quick deployment using expose:

kubectl expose deployment demo-deployment --type=NodePort --name=demo-svc

root@wangcf-k8s-m:~/demo_deployment# kubectl expose deployment demo-deployment --type=NodePort --port=8771 --protocol=TCP --target-port=30001 --name=demo-svc
service/demo-svc exposed
root@wangcf-k8s-m:~/demo_deployment# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-svc NodePort 10.107.171.26 <none> 8771:31538/TCP 6s
--port=8771: the port exposed by the container

--target-port=30002: the port through which the service provides external access; for now the nodePort cannot be chosen here (it is assigned randomly)

--name=demo-svc: specifies the service name

--protocol=TCP: the protocol used by the application inside the container for the externally exposed service

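For reference, the expose command above is roughly equivalent to declaring the Service in YAML; a sketch mirroring the flags (the nodePort field is omitted because expose assigns one at random, and targetPort is set to the container's 8771 here):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  type: NodePort            # reachable on every node's IP at the assigned nodePort
  selector:
    app: demo               # matches the demo-deployment pod labels
  ports:
  - port: 8771              # service port
    targetPort: 8771        # container port the Spring Boot app listens on
    protocol: TCP
```

Applying a manifest like this instead of using expose makes the service reproducible and lets a fixed nodePort be pinned if desired.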
Test application access, success!

root@wangcf-k8s-m:~/demo_deployment# curl http://10.2.14.78:30272/api/v1/product/find?id=2
{"id":2,"name":"refrigerator data from port=8771","price":5342,"store":19}root@wangcf-k8s-m:~/demo_deployment#
Update the deployment
Scale the rs up to 2 replicas:

root@wangcf-k8s-m:~# kubectl scale deployment demo-deployment --replicas 2
deployment.extensions/demo-deployment scaled
root@wangcf-k8s-m:~/demo_deployment# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
demo-deployment 2/2 2 2 23m
Other commands
Delete the deployment:

# kubectl delete deployment demo-deployment

View the deployment:

# kubectl describe deployment demo-deployment
View the history:

root@wangcf-k8s-m:~/demo_deployment# kubectl rollout history deployment/demo-deployment
deployment.extensions/demo-deployment
REVISION  CHANGE-CAUSE
1         kubectl create --filename=demo_deployment.yaml --record=true
View details of the individual revision:

root@wangcf-k8s-m:~/demo_deployment# kubectl rollout history deployment demo-deployment --revision=1
deployment.extensions/demo-deployment with revision #1
Pod Template:
Labels: app=demo
pod-template-hash=9c754c4d9
Annotations: kubernetes.io/change-cause: kubectl create --filename=demo_deployment.yaml --record=true
Containers:
wangcf-demo:
Image: wangchunfa/demo:latest
Port: 8771/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
References
Kubernetes v1.12.1 installation and configuration in a domestic (China) network environment

Deploying mysql on kubernetes

K8S deployment commands
----------------
Disclaimer: This article is an original post by CSDN blogger "Mars candy", licensed under the CC 4.0 BY-SA copyright agreement. When reproducing it, please attach the original source link and this statement.
Original link: https://blog.csdn.net/wangchunfa122/article/details/86529406

Origin www.cnblogs.com/ExMan/p/11613750.html