Deploy k8s on a single machine and deploy an nginx application with Kuboard
Deploy k8s on a single machine
1. System configuration changes
1. Disable SELinux and the firewall
setenforce 0
systemctl stop firewalld
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
systemctl disable firewalld
2. Disable swap
swapoff -a
Open /etc/fstab and comment out the swap line so the setting survives a reboot
3. Modify kernel parameters and modules
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
4. Make kernel parameters and modules take effect
sysctl --system
modprobe br_netfilter
lsmod | grep br_netfilter
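Note that modprobe only loads the module for the current boot. To keep br_netfilter loaded across reboots, a module-load file can be added as well (a sketch, assuming a systemd-based distribution such as CentOS 7, where /etc/modules-load.d is the standard location):

```shell
# Load br_netfilter automatically at every boot via systemd-modules-load
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
```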
5. Make sure swap stays off, otherwise the kubelet will not start; the one-liner below disables it and comments out the fstab entry in a single step
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
2. Docker installation
The Docker version must match the k8s version; you can look up the compatibility matrix online.
1. Check the installed version; if it does not match, uninstall it and install the required version
yum list installed | grep docker
yum remove docker-ce.x86_64 -y
2. List the versions available in the current repository and install one
yum list docker-ce --showduplicates | sort -r
yum install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7
systemctl start docker
systemctl enable docker
3. Configure a China-local registry mirror for Docker
mkdir /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
"registry-mirrors": [
"https://3laho3y3.mirror.aliyuncs.com"
]
}
EOF
systemctl restart docker
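After the restart, a quick sanity check confirms the mirror configured above is active:

```shell
# "Registry Mirrors" should list the Aliyun mirror from daemon.json
docker info | grep -A 1 "Registry Mirrors"
```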
4. Configure the Aliyun yum repository and install the k8s components
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum -y install kubelet-1.13* kubeadm-1.13* kubectl-1.13*
systemctl start kubelet
systemctl enable kubelet
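To confirm the components installed at matching versions (all three should report v1.13.x):

```shell
# Print the installed versions of the three k8s components
kubeadm version -o short
kubelet --version
kubectl version --client
```

Note that the kubelet will keep restarting until kubeadm init has run; that is expected at this stage.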
5. Pull the k8s images and retag them
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.3
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.3
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.3
docker pull mirrorgooglecontainers/kube-proxy:v1.13.3
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker tag mirrorgooglecontainers/kube-apiserver:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
docker tag mirrorgooglecontainers/kube-proxy:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
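The pull-and-tag sequence above can be written more compactly as a loop over the same images and versions:

```shell
# Pull each image from the Docker Hub mirror and retag it as
# k8s.gcr.io/<image>, which is the name kubeadm expects.
for img in kube-apiserver:v1.13.3 kube-controller-manager:v1.13.3 \
           kube-scheduler:v1.13.3 kube-proxy:v1.13.3 pause:3.1 etcd:3.2.24; do
  docker pull "mirrorgooglecontainers/${img}"
  docker tag  "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
done
# coredns lives under its own namespace on Docker Hub
docker pull coredns/coredns:1.2.6
docker tag  coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
```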
3. Initialize k8s and the network
1. Initialize k8s
kubeadm init --apiserver-advertise-address 119.91.218.139 --kubernetes-version=v1.13.3 --pod-network-cidr=10.100.0.0/16
A common problem during initialization is a timeout. One cause is that on cloud hosts k8s cannot bind to the public IP during initialization and needs the internal IP instead.
Solution 1: run kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.100.0.0/16, i.e. do not pass --apiserver-advertise-address at all
Solution 2: while the initialization is running, quickly open another terminal, edit /etc/kubernetes/manifests/etcd.yaml, and change the two addresses etcd binds to from the public IP to the internal IP
This works because kubeadm binds etcd to the advertised (public) IP by default and generates this manifest during initialization; if the file is corrected quickly enough, k8s reloads it and the initialization proceeds.
That is the workaround that worked for me; other common initialization errors are well documented online.
2. Then run the following commands to set up kubectl access
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
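A quick check that kubectl can now reach the cluster:

```shell
# Prints the API server address if the kubeconfig is in place
kubectl cluster-info
```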
3. Install the container network plugin
kubectl apply -f https://gitee.com/www.freeclub.com/blog-images/raw/master/source/kube-flannel.yml
If the above installation fails, install it manually as follows:
Check /opt/cni/bin
If the flannel binary is missing, download the CNI plugin bundle
tar -zxvf cni-plugins-linux-amd64-v0.8.6.tgz
Copy the extracted files into /opt/cni/bin, adding any that are missing
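To verify the plugin is in place and the network is coming up (the flannel pod may take a short while to reach Running):

```shell
# The flannel binary should now be present in the CNI directory
ls /opt/cni/bin | grep flannel
# The flannel pod should reach the Running state shortly
kubectl get pods -n kube-system | grep flannel
```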
4. Because this is a single-machine deployment, the master node is tainted and cannot be used to schedule pods.
Removing the taint
kubectl get nodes shows the master node k8s-master in the NotReady state
Check the taints: kubectl get no -o yaml | grep taint -A 5
It shows that the master node carries the NoSchedule taint
5. Remove the taint so that the master node can deploy pods
kubectl taint nodes --all node-role.kubernetes.io/master- #Remove all taints
kubectl taint nodes k8s-master (specified node) node-role.kubernetes.io/master- #Remove the specified node taint
# Check again, if there is no output, the taint removal is successful
kubectl get no -o yaml | grep taint -A 5
6. Check that the pods started successfully and are all in the Running state
kubectl get pods --all-namespaces
4. Deploy a simple nginx application with Kuboard
1. kuboard installation
docker run -d \
--restart=unless-stopped \
--name=kuboard \
-p 80:80/tcp \
-p 10081:10081/tcp \
-e KUBOARD_ENDPOINT="http://119.91.218.139:80" \
-e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
-v /root/kuboard-data:/data \
eipwork/kuboard:v3
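To confirm the container is up before opening the UI in a browser at http://&lt;host-ip&gt;:80:

```shell
# Kuboard should show as "Up" with ports 80 and 10081 published
docker ps --filter name=kuboard
```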
2. Access and log in to kuboard
Username: admin
Password: Kuboard123 (the default for Kuboard v3)
3. Add/import the cluster
Select Add Cluster
Choose the ServiceAccount kuboard-admin, then click the default namespace to switch to its home page
Create a workload and fill out the form as follows:

Field          | Value              | Remark
---------------|--------------------|---------------------------------------------
Service type   | Deployment         |
Service layer  | Presentation layer | Kuboard uses this field to decide at which layer of the microservice architecture the deployment is displayed
Service name   | nginx              | The layer prefix plus the service name forms the final K8S Deployment name
Replicas       | 1                  | replicas
Switch to the Container Info tab, click the Add Work Container button, and fill out the form as follows:
Field             | Value       | Remark
------------------|-------------|--------------------------------------------
Container name    | nginx       |
Image             | nginx:1.7.9 |
Image pull policy | Always      | Pull the image every time a pod is created
Ports             | TCP: 80     | The container listens on TCP port 80
Save, apply the changes, and click Access to open the nginx welcome page.
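For reference, roughly the same workload can be created from the command line without the UI (a sketch; the name web-nginx assumes a "web" presentation-layer prefix, which may differ from what Kuboard actually generates, and kubectl's default pull policy is not Always):

```shell
# Create the nginx Deployment and expose it on a NodePort
kubectl create deployment web-nginx --image=nginx:1.7.9
kubectl expose deployment web-nginx --port=80 --type=NodePort
# Look up the assigned NodePort to reach nginx from outside
kubectl get svc web-nginx
```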