Detailed Steps for Setting Up a k8s Cluster

1. k8s Basic Configuration

1. Basic environment setup

// First, set a hostname on each node
hostnamectl set-hostname  xxx
​
​
// Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
​
// Disable swap
swapoff -a  
sed -ri 's/.*swap.*/#&/' /etc/fstab
​
// Allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
​
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
​

2. Install Docker
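A minimal sketch for installing Docker on CentOS 7 with yum (the Aliyun repository URL and the cgroup-driver setting below are assumptions; pick a Docker version known to work with your k8s release):

# Add a Docker yum repository (Aliyun mirror assumed) and install Docker Engine
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io

# Start Docker now and enable it at boot
sudo systemctl enable --now docker

# Recommended: switch Docker to the systemd cgroup driver that kubelet expects
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker
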

3. Install kubelet, kubeadm, kubectl

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
​
​
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
​
sudo systemctl enable --now kubelet

4. Bootstrap the cluster with kubeadm

1. Download the images each machine needs
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
   
chmod +x ./images.sh && ./images.sh
2. Initialize the master node
// Add the master hostname mapping on all machines; change the IP below to your own
echo "172.31.0.2  cluster-endpoint" >> /etc/hosts
​
​
​
// Initialize the master node
kubeadm init \
--apiserver-advertise-address=172.31.0.2 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16
​
// Make sure none of the network ranges above overlap
​

Save the output of the master node initialization; it will be needed later.

Your Kubernetes control-plane has initialized successfully!
​
To start using your cluster, you need to run the following as a regular user:
​
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
​
Alternatively, if you are the root user, you can run:
​
  export KUBECONFIG=/etc/kubernetes/admin.conf
​
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
​
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
​
  kubeadm join cluster-endpoint:6443 --token 5mixrl.cn2aiv0g49bfe8dr \
    --discovery-token-ca-cert-hash sha256:f3fe28a6571db886c2e2d417ca461496eef69661406de801afe6dbe273817f82 \
    --control-plane 
​
Then you can join any number of worker nodes by running the following on each as root:
​
kubeadm join cluster-endpoint:6443 --token 5mixrl.cn2aiv0g49bfe8dr \
    --discovery-token-ca-cert-hash sha256:f3fe28a6571db886c2e2d417ca461496eef69661406de801afe6dbe273817f82
// Run the following as prompted in the output above
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
// Install the network plugin (Calico)
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
​
​
// List all nodes in the cluster
kubectl get nodes
​
// Create resources in the cluster from a config file
kubectl apply -f xxxx.yaml
​
// Which applications are deployed in the cluster?
// kubectl get pods -A is roughly the k8s counterpart of docker ps
// A running application is called a container in Docker and a Pod in k8s
kubectl get pods -A
​
​
​
3. If the node join command has expired
// Use this command to regenerate the token and print a new join command
kubeadm token create --print-join-command

5. Deploy the Dashboard

// Install the web UI (dashboard)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
​
// Check whether the installation succeeded
kubectl get pod -A
​
// Change type: ClusterIP to type: NodePort so the dashboard can be reached through every node in the cluster
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
 
kubectl get svc -A |grep kubernetes-dashboard
// Find the port and open it in your security group
​
[root@master ~]# kubectl get svc -A |grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.96.4.178     <none>        8000/TCP                 8m15s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.96.155.180   <none>        443:30692/TCP            8m15s
​
https://139.198.27.123:30692
 
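If you prefer a non-interactive change, the same type switch can also be done with kubectl patch (an alternative to the kubectl edit command above):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
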
// Create an access account
// Create a new file dash.yaml and add the content below to it
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
​
// Apply the file
kubectl apply -f dash.yaml
​
// Get the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
​
​
eyJhbGciOiJSUzI1NiIsImtpZCI6IlhwNFp2TVdxUXYzT05LMGNneWJiQUxVeVF5Ulp2RjI5eFVxVmdvaFI0T3MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWRremg2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlNGVmMjNkZi1jYWU1LTQ2NmUtYTNjYS0yYjFlMzYzMWVmZTMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.Omx0M2i3SoUT2wDMhy50Kq92hfb4nJ1nNUqLOHYOemBuJVjRTuuEqArRqHAJtpmfe7RzO5_kigBdd6Tu2nMHvCJ8praK2BKwrx_In1tvv4wHZuWim8vzHh_IW8ywuiJfJiPWtVlOfQt03cmChj2QFAm-Zibi1KwS-w3laupavnS-O5_YOUn1b4WfLxzIGuyX5TPZs959WDUE-AGbkV_jIgjD3_b7K5S_SzxnW5a_gcGILQIU89MGawar15MefCCdzIynnNi-KbrmGPIBKXIKuFAq-Em7b0J8Hphe9jrEtqUF246D5xtFanu5yHJ-tp0UWkVGILYGFtnspFGWFi78Xw

2. k8s Hands-On Operations

1. Ways to create resources

  • Command line
  • YAML file

2. Namespace

Namespaces are mainly used to isolate resources.

// Method 1: command line
// Create a namespace
kubectl create ns hello   // creates a namespace named hello
// Delete a namespace
kubectl delete ns hello
​
// Method 2: YAML file (hello.yaml)
apiVersion: v1
kind: Namespace
metadata:
  name: hello
  
// Create the namespace
kubectl apply -f hello.yaml
// Delete the namespace
kubectl delete -f hello.yaml
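Once a namespace exists, resources are created in it with the -n flag; without -n, the default namespace is used. A quick illustration (assuming the hello namespace created above):

// Run a pod in the hello namespace
kubectl run mynginx --image=nginx -n hello
// List pods in that namespace
kubectl get pods -n hello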

3. Pod operations

A Pod can be loosely understood as a shell wrapped around one or more containers, but with far more capabilities.

// Create a pod: this creates a pod named mynginx from the nginx image
kubectl run mynginx --image=nginx
​
// A pod can also be created from YAML; apply the following file
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mynginx
  name: mynginx
#  namespace: default
spec:
  containers:
  - image: nginx
    name: mynginx
​
​
// List pods in the default namespace
kubectl get pod
​
// Get the pod's description (mynginx is the pod name)
kubectl describe pod mynginx
​
// k8s assigns an IP to every pod; it is shown in the pod's detailed output
kubectl get pod -o wide
[root@master ~]# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
mynginx   1/1     Running   0          2m31s   192.168.104.2   node2   <none>           <none>
​
// Access the application via IP and port; every node inside the cluster can reach it through this IP
curl 192.168.104.2:80
​
// Enter the pod
kubectl exec -it mynginx -- /bin/bash
// Exit
 exit
 
// Delete the pod
kubectl delete pod mynginx
// Or delete the pod via its YAML file (kubectl delete -f)

A single pod can run multiple applications; create it from a YAML file.

// Create a YAML file app.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: nginx
    name: nginx
  - image: tomcat:8.5.68
    name: tomcat
 
// Create the pod
kubectl apply -f app.yaml 
​
[root@master ~]# kubectl get pod -owide
NAME      READY   STATUS    RESTARTS   AGE     IP                NODE    NOMINATED NODE   READINESS GATES
myapp     2/2     Running   0          6m35s   192.168.166.130   node1   <none>           <none>
mynginx   1/1     Running   0          31m     192.168.104.2     node2   <none>           <none>
​
[root@master ~]# curl 192.168.166.130:8080
<!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1><hr class="line" /><p><b>Type</b> Status Report</p><p><b>Description</b> The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.</p><hr class="line" /><h3>Apache Tomcat/8.5.68</h3></body></html>
​
​
// Watch the pod status continuously (refreshes every second)
watch -n 1  kubectl get pod
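The containers in one pod share the same network namespace, so they can reach each other on localhost. A quick check (it assumes the myapp pod above is running and that curl exists in the nginx image; if not, curl the pod IP from a node as shown above):

// From the nginx container, Tomcat in the same pod answers on localhost:8080
kubectl exec -it myapp -c nginx -- curl -s http://localhost:8080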

4. Using Deployments

In practice you rarely operate on pods directly; you operate on Deployments instead.

A Deployment gives pods multi-replica, self-healing, and scaling capabilities.

// Create a deployment
kubectl create deployment mytomcat --image=tomcat:8.5.68
​
// If the corresponding pod is killed, it will self-heal (a new pod is created)
​
// Delete a deployment
kubectl delete deploy mynginx
​
// Multiple replicas: --replicas sets the replica count
[root@master ~]# kubectl create deployment mynginx --image=nginx --replicas=3
​
// Multiple replicas via a YAML file mynginx.yaml
 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx
        name: nginx
​
// Create the deployment from the file
kubectl apply -f mynginx.yaml
​
 
// Scale out / in
[root@master ~]# kubectl scale --replicas=5 deployment/my-dep
// Scaling can also be done from the dashboard UI
​
// Or edit replicas directly
kubectl edit deployment my-dep
​
// Rolling update to a new image version
kubectl set image deployment/my-dep nginx=nginx:1.16.1 --record
​
// Version rollback
// View revision history
kubectl rollout history deployment/my-dep
​
​
// View the details of a specific revision
kubectl rollout history deployment/my-dep --revision=2
​
// Roll back (to the previous revision)
kubectl rollout undo deployment/my-dep
​
// Roll back (to a specific revision)
kubectl rollout undo deployment/my-dep --to-revision=2
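To watch an update or a rollback finish, kubectl's rollout status subcommand can be used (a small addition to the commands above):

// Blocks until the rollout of my-dep has completed
kubectl rollout status deployment/my-dep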

5. Service

An abstraction for exposing a group of Pods as a network service.

# Expose the Deployment
kubectl expose deployment my-dep --port=8000 --target-port=80
​
# Select Pods by label
kubectl get pod -l app=my-dep
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  selector:
    app: my-dep
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80

ClusterIP

# Equivalent to omitting --type
kubectl expose deployment my-dep --port=8000 --target-port=80 --type=ClusterIP
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-dep
  type: ClusterIP
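A ClusterIP Service is only reachable from inside the cluster, either through its ClusterIP or through its DNS name <service>.<namespace>.svc. A quick check (replace the placeholders with your own values; the in-pod check assumes curl exists in the image):

// Find the ClusterIP assigned to the Service
kubectl get svc my-dep
// From any node in the cluster (replace <ClusterIP> with the value printed above)
curl <ClusterIP>:8000
// From inside a pod, the DNS name resolves via CoreDNS
kubectl exec -it mynginx -- curl -s http://my-dep.default.svc:8000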

NodePort

kubectl expose deployment my-dep --port=8000 --target-port=80 --type=NodePort
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-dep
  type: NodePort
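A NodePort Service additionally opens a port in the 30000-32767 range on every node, so it can be reached from outside the cluster. A rough check (the port shown is only an example; replace the placeholders with your own values):

// The PORT(S) column shows port:nodePort, e.g. 8000:31234/TCP
kubectl get svc my-dep
// Access the service from outside the cluster through any node's IP (remember to open the node port in your security group)
curl <nodeIP>:<nodePort>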

6. Ingress

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml
​
# Modify the image
vi deploy.yaml
# Change the value of image to the following:
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0
​
# Check the installation result
kubectl get pod,svc -n ingress-nginx
​
# Finally, don't forget to open the ports exposed by the svc in your security group
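The Ingress rules below route traffic to Services named hello-server and nginx-demo. A minimal sketch of one such backend, so the examples have something to forward to (the names match the rules below, but the nginx image is only an illustrative choice; hello-server can be created the same way with its own image):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx-demo
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    targetPort: 80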

Path rewriting

apiVersion: networking.k8s.io/v1
kind: Ingress  
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx(/|$)(.*)"  # 把请求会转给下面的服务,下面的服务一定要能处理这个路径,不能处理就是404
        backend:
          service:
            name: nginx-demo  ## with path rewriting, the /nginx prefix is stripped before the request reaches the service
            port:
              number: 8000

Rate limiting

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-limit-rate
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: "haha.atguigu.com"
    http:
      paths:
      - pathType: Exact
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
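With limit-rps: "1", requests above roughly one per second are rejected by ingress-nginx with a 503 response. A rough check (sketch only; replace <nodeIP> and <ingressNodePort> with the address and NodePort of the ingress-nginx-controller Service):

for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" -H "Host: haha.atguigu.com" http://<nodeIP>:<ingressNodePort>/
done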

7. PV & PVC

PV: Persistent Volume. Stores the data an application needs to persist in a specified location.

PVC: Persistent Volume Claim. Declares the specification of the persistent volume the application needs.

Static provisioning

# On the NFS master node
mkdir -p /nfs/data/01
mkdir -p /nfs/data/02
mkdir -p /nfs/data/03
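The PVs below store data on an NFS share, so the directories above must actually be exported over NFS. A minimal sketch (the export options and the allowed network are assumptions; adjust them to your environment):

# On the NFS master node: install the NFS server and export /nfs/data
yum install -y nfs-utils
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
systemctl enable rpcbind --now
systemctl enable nfs-server --now
exportfs -r

# On every worker node: install the NFS client so pods can mount the share
yum install -y nfs-utils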

Create the PVs

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m
spec:
  capacity:
    storage: 10M
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/01
    server: 172.31.0.4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/02
    server: 172.31.0.4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/03
    server: 172.31.0.4
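With the PVs above in place, an application asks for storage through a PVC, and a Pod mounts it by claim name. A minimal sketch (the 200Mi request, the nginx image, and the mount path are illustrative assumptions; the claim should bind to one of the nfs PVs above that is large enough):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: nginx-pvc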

Reference: www.yuque.com/leifengyang…


Reposted from juejin.im/post/7078481142432661534