Kubernetes (4): Data Storage and Security Authentication


1. Data Storage

As mentioned earlier, a container's life cycle can be very short: containers are created and destroyed frequently, and when a container is destroyed the data stored inside it is cleared as well. In some cases this is undesirable, so to persist container data Kubernetes introduces the concept of the Volume.

A Volume is a shared directory in a Pod that can be accessed by multiple containers. It is defined at the Pod level and then mounted to specific paths by the containers in that Pod. Kubernetes uses Volumes to implement data sharing between containers of the same Pod as well as persistent storage of data. The life cycle of a Volume is not tied to the life cycle of any single container in the Pod: when a container is terminated or restarted, the data in the Volume is not lost.

Kubernetes supports many Volume types; the more common ones are:

  • Simple storage: EmptyDir, HostPath, NFS
  • Advanced storage: PV, PVC
  • Configuration storage: ConfigMap, Secret

1.1 Basic Storage

1.1.1 EmptyDir

EmptyDir is the most basic Volume type: an EmptyDir is simply an empty directory on the host node.

An EmptyDir is created when the Pod is assigned to a Node. Its initial content is empty, and there is no need to specify a corresponding directory on the host, because Kubernetes allocates one automatically. When the Pod is destroyed, the data in the EmptyDir is permanently deleted as well (so this type is generally not used for persistence). Typical uses of EmptyDir are:

  • Temporary space, such as a scratch directory needed while an application is running but that does not need to be kept permanently
  • A directory through which one container obtains data from another container (a multi-container shared directory)

Next, we demonstrate EmptyDir with a file-sharing example between containers.

Prepare a Pod with two containers, nginx and busybox, and declare a Volume that is mounted into a directory of each container. The nginx container writes its logs to the Volume, and busybox reads the log content and prints it to the console with a command.


Create a volume-emptydir.yaml

apiVersion: v1
kind: Pod
metadata:
  name: volume-emptydir
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - containerPort: 80
    volumeMounts:  # mount logs-volume into the nginx container at /var/log/nginx
    - name: logs-volume
      mountPath: /var/log/nginx
  - name: busybox
    image: busybox:1.30
    command: ["/bin/sh","-c","tail -f /logs/access.log"] # initial command: continuously read the specified file
    volumeMounts:  # mount logs-volume into the busybox container at /logs
    - name: logs-volume
      mountPath: /logs
  volumes: # declare the volume; its name is logs-volume and its type is emptyDir
  - name: logs-volume
    emptyDir: {}
# create the Pod
[root@k8s-master01 ~]# kubectl create -f volume-emptydir.yaml
pod/volume-emptydir created

# check the Pod
[root@k8s-master01 ~]# kubectl get pods volume-emptydir -n dev -o wide
NAME                  READY   STATUS    RESTARTS   AGE      IP       NODE   ...... 
volume-emptydir       2/2     Running   0          97s   10.42.2.9   node1  ......

# access nginx via the Pod IP
[root@k8s-master01 ~]# curl 10.42.2.9
......

# view the standard output of the busybox container with kubectl logs
[root@k8s-master01 ~]# kubectl logs -f volume-emptydir -n dev -c busybox
10.42.1.0 - - [27/Jun/2021:15:08:54 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
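
If you also want to double-check the shared directory from the nginx side, an optional check could look like the following; this command is an addition to the original demo:

# list the shared log directory from inside the nginx container
[root@k8s-master01 ~]# kubectl exec volume-emptydir -n dev -c nginx -- ls /var/log/nginx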

1.1.2 HostPath

As mentioned in the previous section, data in an EmptyDir is not truly persisted; it is destroyed together with the Pod. If you simply want to persist data on the host, you can use HostPath.

HostPath mounts an actual directory of the Node host into the Pod for use by the containers. With this design, even if the Pod is destroyed, the data still exists on the Node host.


Create a volume-hostpath.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: volume-hostpath
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs-volume
      mountPath: /var/log/nginx
  - name: busybox
    image: busybox:1.30
    command: ["/bin/sh","-c","tail -f /logs/access.log"]
    volumeMounts:
    - name: logs-volume
      mountPath: /logs
  volumes:
  - name: logs-volume
    hostPath: 
      path: /root/logs
      type: DirectoryOrCreate  # directory type: use the directory if it exists; otherwise create it first, then use it
A note on the possible values of type:
    DirectoryOrCreate  use the directory if it exists; if not, create it first and then use it
    Directory          the directory must already exist
    FileOrCreate       use the file if it exists; if not, create it first and then use it
    File               the file must already exist
    Socket             the unix socket must already exist
    CharDevice         the character device must already exist
    BlockDevice        the block device must already exist
# create the Pod
[root@k8s-master01 ~]# kubectl create -f volume-hostpath.yaml
pod/volume-hostpath created

# check the Pod
[root@k8s-master01 ~]# kubectl get pods volume-hostpath -n dev -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP             NODE   ......
volume-hostpath       2/2     Running   0          16s   10.42.2.10     node1  ......

# access nginx
[root@k8s-master01 ~]# curl 10.42.2.10

# now you can inspect the stored files under /root/logs on the host
###  Note: the following commands must be run on the node where the Pod is scheduled (node1 in this example)
[root@node1 ~]# ls /root/logs/
access.log  error.log

# likewise, if you create a file in this directory, it will also be visible inside the container
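
A quick way to confirm this (not part of the original steps) is to create a file on the host and then list the mounted directory from the busybox container:

# on node1 (where the Pod runs), create a file in the host directory
[root@node1 ~]# echo hello > /root/logs/test.txt
# back on the master, list the mounted directory from inside the busybox container
[root@k8s-master01 ~]# kubectl exec volume-hostpath -n dev -c busybox -- ls /logs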

1.1.3 NFS

HostPath solves data persistence, but once a Node fails and the Pod is moved to another node, problems appear again. In that case a separate network storage system is needed; NFS and CIFS are commonly used.

NFS is a network file system. You set up an NFS server and connect the Pod's storage directly to it. This way, no matter which node the Pod is moved to, as long as the Node can reach the NFS server the data can be accessed successfully.


1) First, prepare the NFS server. For simplicity, the master node is used directly as the NFS server.

# install the nfs service on the nfs server
[root@nfs ~]# yum install nfs-utils -y

# prepare a shared directory
[root@nfs ~]# mkdir /root/data/nfs -pv

# expose the shared directory with read/write access to all hosts in the 192.168.5.0/24 network
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# more /etc/exports
/root/data/nfs     192.168.5.0/24(rw,no_root_squash)

# start the nfs service
[root@nfs ~]# systemctl restart nfs

2) Next, install nfs on each Node so that the nodes can mount NFS devices

# install nfs on the node as well; note it does not need to be started (only the client tooling is required)
[root@k8s-master01 ~]# yum install nfs-utils -y
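
Optionally, before creating the Pod you can verify from a node that the export is visible; showmount ships with nfs-utils, and the server address below is the one used later in the manifest:

# check the exports published by the NFS server
[root@node1 ~]# showmount -e 192.168.5.6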

3) Now write the Pod configuration file, volume-nfs.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: volume-nfs
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs-volume
      mountPath: /var/log/nginx
  - name: busybox
    image: busybox:1.30
    command: ["/bin/sh","-c","tail -f /logs/access.log"] 
    volumeMounts:
    - name: logs-volume
      mountPath: /logs
  volumes:
  - name: logs-volume
    nfs:
      server: 192.168.5.6  # address of the nfs server
      path: /root/data/nfs # shared directory path

4) Finally, run the pod and observe the results

# create the Pod
[root@k8s-master01 ~]# kubectl create -f volume-nfs.yaml
pod/volume-nfs created

# check the Pod
[root@k8s-master01 ~]# kubectl get pods volume-nfs -n dev
NAME                  READY   STATUS    RESTARTS   AGE
volume-nfs        2/2     Running   0          2m9s

# check the shared directory on the nfs server; the log files are already there
[root@k8s-master01 ~]# ls /root/data/nfs/
access.log  error.log

1.2 Advanced Storage

We have seen how to use NFS to provide storage, which requires the user to set up an NFS system and configure nfs in the YAML. Since Kubernetes supports many storage systems, it is unrealistic to expect users to master all of them. To hide the details of the underlying storage implementation and make things easier for users, Kubernetes introduces the PV and PVC resource objects.

A PV (PersistentVolume) is a persistent volume, an abstraction over the underlying shared storage. In general, a PV is created and configured by the Kubernetes administrator; it is tied to a specific underlying shared storage technology, and the connection to that storage is made through plugins.

A PVC (PersistentVolumeClaim) is a persistent volume claim, a declaration of the user's storage requirements. In other words, a PVC is a resource request issued by the user to the Kubernetes system.


After using PV and PVC, the work can be further subdivided:

  • Storage: maintained by storage engineers
  • PV: maintained by the Kubernetes administrator
  • PVC: maintained by Kubernetes users

1.2.1 PV

A PV is an abstraction of a storage resource; here is the resource manifest file:

apiVersion: v1  
kind: PersistentVolume
metadata:
  name: pv2
spec:
  nfs: # storage type, corresponding to the actual underlying storage
  capacity:  # storage capacity; currently only storage space can be set
    storage: 2Gi
  accessModes:  # access modes
  storageClassName: # storage class
  persistentVolumeReclaimPolicy: # reclaim policy

Description of key configuration parameters of PV:

  • Storage type

    The type of the actual underlying storage. Kubernetes supports many storage types, and each has its own configuration.

  • Storage capacity (capacity)

    Currently only the amount of storage space can be set (e.g. storage=1Gi), but settings for metrics such as IOPS and throughput may be added in the future.

  • Access Modes (accessModes)

    Describes the access rights of user applications to the storage resource. The following access modes are available:

    • ReadWriteOnce (RWO): read and write permissions, but can only be mounted by a single node
    • ReadOnlyMany (ROX): read-only permission, can be mounted by multiple nodes
    • ReadWriteMany (RWX): read and write permissions, can be mounted by multiple nodes

    Note that different underlying storage types may support different access modes.

  • Reclaim Policy (persistentVolumeReclaimPolicy)

    How the PV should be handled once it is no longer used. Three policies are currently supported:

    • Retain: keep the data; an administrator must clean it up manually
    • Recycle: wipe the data in the PV, equivalent to running rm -rf /thevolume/*
    • Delete: the backend storage behind the PV deletes the volume; this is common with cloud providers' storage services

    Note that different underlying storage types may support different reclaim policies.

  • Storage class (storageClassName)

    A PV can specify a storage class via the storageClassName parameter:

    • A PV with a specific category can only be bound to a PVC that has requested that category
    • A PV without a category can only be bound to a PVC that does not request any category
  • Status (status)

    Over its life cycle, a PV may be in one of 4 different phases:

    • Available (available): Indicates that it is available and has not been bound by any PVC
    • Bound (bound): Indicates that the PV has been bound by the PVC
    • Released: Indicates that the PVC has been deleted, but the resource has not yet been reclaimed by the cluster
    • Failed: Indicates that the automatic recycling of the PV failed

Experiment

We use NFS as the storage to demonstrate PV usage, creating 3 PVs that correspond to 3 exported paths on the NFS server.

  1. Prepare the NFS environment
# create the directories
[root@nfs ~]# mkdir /root/data/{pv1,pv2,pv3} -pv

# expose the exports
[root@nfs ~]# more /etc/exports
/root/data/pv1     192.168.5.0/24(rw,no_root_squash)
/root/data/pv2     192.168.5.0/24(rw,no_root_squash)
/root/data/pv3     192.168.5.0/24(rw,no_root_squash)

# restart the service
[root@nfs ~]#  systemctl restart nfs
  2. Create pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name:  pv1
spec:
  capacity: 
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /root/data/pv1
    server: 192.168.5.6

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name:  pv2
spec:
  capacity: 
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /root/data/pv2
    server: 192.168.5.6
    
---

apiVersion: v1
kind: PersistentVolume
metadata:
  name:  pv3
spec:
  capacity: 
    storage: 3Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /root/data/pv3
    server: 192.168.5.6
# create the PVs
[root@k8s-master01 ~]# kubectl create -f pv.yaml
persistentvolume/pv1 created
persistentvolume/pv2 created
persistentvolume/pv3 created

# check the PVs
[root@k8s-master01 ~]# kubectl get pv -o wide
NAME   CAPACITY   ACCESS MODES  RECLAIM POLICY  STATUS      AGE   VOLUMEMODE
pv1    1Gi        RWX            Retain        Available    10s   Filesystem
pv2    2Gi        RWX            Retain        Available    10s   Filesystem
pv3    3Gi        RWX            Retain        Available    9s    Filesystem

1.2.2 PVC

A PVC is a claim for storage resources (the PVC claims a PV, and the Pod then uses the PV). It declares requirements for storage space, access modes, and storage class. Here is the resource manifest file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
  namespace: dev
spec:
  accessModes: # access modes
  selector: # select PVs by label
  storageClassName: # storage class
  resources: # requested space
    requests:
      storage: 5Gi

Description of key configuration parameters of PVC:

  • Access Modes (accessModes)

    Describes the access permissions of user applications to storage resources

  • Selection criteria (selector)

    With a label selector, the PVC can filter the existing PVs in the system (see the sketch after this list)

  • Storage class (storageClassName)

    When defining a PVC, you can set the required back-end storage category. Only PVs with this class can be selected by the system.

  • Resource request (Resources)

    Describes a request for a storage resource
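
For illustration only (this claim is not used in the experiment below, and the label and storage class names here are made up), a PVC that narrows the candidate PVs by label and class might look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-selector-demo        # hypothetical name
  namespace: dev
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs-storage  # hypothetical class; only PVs of this class can bind
  selector:
    matchLabels:
      env: dev                   # only PVs carrying this label are candidates
  resources:
    requests:
      storage: 1Gi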

Experiment

  1. Create pvc.yaml to claim the PVs
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
  namespace: dev
spec:
  accessModes: 
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
  namespace: dev
spec:
  accessModes: 
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
  namespace: dev
spec:
  accessModes: 
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
# create the PVCs
[root@k8s-master01 ~]# kubectl create -f pvc.yaml
persistentvolumeclaim/pvc1 created
persistentvolumeclaim/pvc2 created
persistentvolumeclaim/pvc3 created

# check the PVCs
[root@k8s-master01 ~]# kubectl get pvc  -n dev -o wide
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
pvc1   Bound    pv1      1Gi        RWX                           15s   Filesystem
pvc2   Bound    pv2      2Gi        RWX                           15s   Filesystem
pvc3   Bound    pv3      3Gi        RWX                           15s   Filesystem

# check the PVs
[root@k8s-master01 ~]# kubectl get pv -o wide
NAME  CAPACITY ACCESS MODES  RECLAIM POLICY  STATUS    CLAIM       AGE     VOLUMEMODE
pv1    1Gi        RWX        Retain          Bound    dev/pvc1    3h37m    Filesystem
pv2    2Gi        RWX        Retain          Bound    dev/pvc2    3h37m    Filesystem
pv3    3Gi        RWX        Retain          Bound    dev/pvc3    3h37m    Filesystem   
  2. Create pods.yaml that uses the PVCs
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: dev
spec:
  containers:
  - name: busybox
    image: busybox:1.30
    command: ["/bin/sh","-c","while true;do echo pod1 >> /root/out.txt; sleep 10; done;"]
    volumeMounts:
    - name: volume
      mountPath: /root/
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: pvc1
        readOnly: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  namespace: dev
spec:
  containers:
  - name: busybox
    image: busybox:1.30
    command: ["/bin/sh","-c","while true;do echo pod2 >> /root/out.txt; sleep 10; done;"]
    volumeMounts:
    - name: volume
      mountPath: /root/
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: pvc2
        readOnly: false
# create the Pods
[root@k8s-master01 ~]# kubectl create -f pods.yaml
pod/pod1 created
pod/pod2 created

# check the Pods
[root@k8s-master01 ~]# kubectl get pods -n dev -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE   
pod1   1/1     Running   0          14s   10.244.1.69   node1   
pod2   1/1     Running   0          14s   10.244.1.70   node1  

# check the PVCs
[root@k8s-master01 ~]# kubectl get pvc -n dev -o wide
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES      AGE   VOLUMEMODE
pvc1   Bound    pv1      1Gi        RWX               94m   Filesystem
pvc2   Bound    pv2      2Gi        RWX               94m   Filesystem
pvc3   Bound    pv3      3Gi        RWX               94m   Filesystem

# check the PVs
[root@k8s-master01 ~]# kubectl get pv -n dev -o wide
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM       AGE     VOLUMEMODE
pv1    1Gi        RWX            Retain           Bound    dev/pvc1    5h11m   Filesystem
pv2    2Gi        RWX            Retain           Bound    dev/pvc2    5h11m   Filesystem
pv3    3Gi        RWX            Retain           Bound    dev/pvc3    5h11m   Filesystem

# check the files stored on the nfs server
[root@nfs ~]# more /root/data/pv1/out.txt
pod1
pod1
[root@nfs ~]# more /root/data/pv2/out.txt
pod2
pod2

1.2.3 Life cycle

A PVC and a PV are bound one-to-one, and the interaction between them follows this life cycle:

  • Resource provisioning : administrators manually create underlying storage and PVs

  • Resource binding : the user creates a PVC, and kubernetes is responsible for finding and binding the PV according to the declaration of the PVC

    After the user defines a PVC, the system selects a PV that satisfies the conditions from the existing PVs, based on the PVC's request for storage resources.

    • Once found, bind the PV to a user-defined PVC, and the user's application can use this PVC
    • If not found, the PVC will be in the Pending state indefinitely until the system administrator creates a PV that meets its requirements

    Once a PV is bound to a PVC, it will be exclusively occupied by this PVC and cannot be bound to other PVCs.

  • Resource usage : users can use pvc in the pod like a volume

    The Pod mounts the PVC, through its volume definition, to a path inside the container for use.

  • Resource release : the user deletes pvc to release pv

    When the storage resource is used up, the user can delete the PVC, and the PV bound to the PVC will be marked as "released", but it cannot be bound to other PVCs immediately. Data written through a previous PVC may still be left on the storage device, and the PV can only be used again after being cleared.

  • Resource recycling : kubernetes recycles resources according to the recycling policy set by pv

    For a PV, the administrator can set a reclaim policy that determines how the leftover data is handled after the bound PVC releases the resource. Only after the PV's storage space has been reclaimed can it be bound and used by a new PVC.
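
If needed, the reclaim policy of an existing PV can also be changed in place; a minimal sketch using the pv1 created earlier:

# switch pv1's reclaim policy from Retain to Delete
[root@k8s-master01 ~]# kubectl patch pv pv1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'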


1.3 Configuration storage

1.3.1 ConfigMap

ConfigMap is a special kind of storage volume whose main purpose is to store configuration information.

Create configmap.yaml with the following content:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  namespace: dev
data:
  info: |
    username:admin
    password:123456

Next, create the ConfigMap from this file:

# create the configmap
[root@k8s-master01 ~]# kubectl create -f configmap.yaml
configmap/configmap created

# view the configmap details
[root@k8s-master01 ~]# kubectl describe cm configmap -n dev
Name:         configmap
Namespace:    dev
Labels:       <none>
Annotations:  <none>

Data
====
info:
----
username:admin
password:123456

Events:  <none>

Next, create pod-configmap.yaml and mount the ConfigMap created above:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    volumeMounts: # mount the configmap to a directory
    - name: config
      mountPath: /configmap/config
  volumes: # reference the configmap
  - name: config
    configMap:
      name: configmap
# create the Pod
[root@k8s-master01 ~]# kubectl create -f pod-configmap.yaml
pod/pod-configmap created

# check the Pod
[root@k8s-master01 ~]# kubectl get pod pod-configmap -n dev
NAME            READY   STATUS    RESTARTS   AGE
pod-configmap   1/1     Running   0          6s

# enter the container
[root@k8s-master01 ~]# kubectl exec -it pod-configmap -n dev /bin/sh
# cd /configmap/config/
# ls
info
# more info
username:admin
password:123456

# the mapping succeeded: the configmap has been mapped into a directory
# key ---> file     value ---> file content
# if the content of the configmap is updated, the values in the container are updated dynamically as well
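
Besides mounting it as a volume, a ConfigMap key can also be injected as an environment variable. A minimal sketch using the configmap created above (the Pod name and the variable name INFO are just examples):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-env   # hypothetical name
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    env:
    - name: INFO                # environment variable visible in the container
      valueFrom:
        configMapKeyRef:
          name: configmap       # the ConfigMap created above
          key: info             # the key whose value is injected

Note that, unlike a volume mount, values injected as environment variables are not refreshed when the ConfigMap changes.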

1.3.2 Secret

Kubernetes also has an object very similar to ConfigMap, called Secret. It is mainly used to store sensitive information such as passwords, keys, and certificates.

  1. First encode the data using base64
[root@k8s-master01 ~]# echo -n 'admin' | base64 # prepare the username
YWRtaW4=
[root@k8s-master01 ~]# echo -n '123456' | base64 # prepare the password
MTIzNDU2
  2. Next, write secret.yaml and create the Secret
apiVersion: v1
kind: Secret
metadata:
  name: secret
  namespace: dev
type: Opaque
data:
  username: YWRtaW4=
  password: MTIzNDU2
# create the secret
[root@k8s-master01 ~]# kubectl create -f secret.yaml
secret/secret created

# view the secret details
[root@k8s-master01 ~]# kubectl describe secret secret -n dev
Name:         secret
Namespace:    dev
Labels:       <none>
Annotations:  <none>
Type:  Opaque
Data
====
password:  6 bytes
username:  5 bytes
  3. Create pod-secret.yaml and mount the Secret created above:
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    volumeMounts: # mount the secret to a directory
    - name: config
      mountPath: /secret/config
  volumes:
  - name: config
    secret:
      secretName: secret
# create the Pod
[root@k8s-master01 ~]# kubectl create -f pod-secret.yaml
pod/pod-secret created

# check the Pod
[root@k8s-master01 ~]# kubectl get pod pod-secret -n dev
NAME            READY   STATUS    RESTARTS   AGE
pod-secret      1/1     Running   0          2m28s

# enter the container and check the secret: the values have been decoded automatically
[root@k8s-master01 ~]# kubectl exec -it pod-secret /bin/sh -n dev
/ # ls /secret/config/
password  username
/ # more /secret/config/username
admin
/ # more /secret/config/password
123456

At this point we have used a Secret to store base64-encoded sensitive information.
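
As an aside, the same Secret can also be created without hand-encoding base64 by letting kubectl do the encoding; a sketch equivalent to the steps above:

# create the secret directly from literals (kubectl base64-encodes the values)
[root@k8s-master01 ~]# kubectl create secret generic secret --from-literal=username=admin --from-literal=password=123456 -n dev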

2. Security Authentication

2.1 Overview of Access Control

For a distributed cluster management tool like Kubernetes, ensuring cluster security is an important task. Security here essentially means authentication and authorization of Kubernetes' various clients.

Clients

In a Kubernetes cluster, there are usually two types of clients:

  • User Account : generally an account for an external user, managed independently of Kubernetes by some other service.
  • Service Account : an account managed by Kubernetes, used to give the service processes in a Pod an identity when they access Kubernetes.


Authentication, authorization and admission control

The ApiServer is the only entry point for accessing and managing resource objects. Every request to the ApiServer goes through the following three stages:

  • Authentication: identity authentication; only a correctly identified account can pass
  • Authorization: determines whether the user has permission to perform specific actions on the accessed resources
  • Admission Control: supplements the authorization mechanism to achieve more fine-grained access control


2.2 Authentication Management

The most critical point of Kubernetes cluster security is how to identify and authenticate clients. Three client authentication methods are provided:

  • HTTP Basic authentication: authentication through username + password

        With this method, the string "username:password" is BASE64-encoded and placed in the Authorization header of the HTTP request sent to the server. The server decodes it, extracts the username and password, and then authenticates the user.

  • HTTP Token authentication: identify legitimate users through a token

        This method uses a long, hard-to-forge string (a token) to identify the client. Each token corresponds to a username. When the client issues an API call it puts the token in the HTTP header; the API Server compares it with the tokens it holds and then authenticates the user.

  • HTTPS certificate authentication: mutual (two-way) digital certificate authentication based on a CA root certificate signature

        This is the most secure method, but also the most cumbersome to operate.


HTTPS authentication is roughly divided into three processes:

  1. Certificate application and issuance

      Both parties of the HTTPS communication apply to a CA for certificates; the CA issues the root certificate, the server certificate, and the private key to the applicant
    
  2. Two-way authentication between client and server

      1> The client sends a request to the server, and the server returns its certificate to the client.
         After receiving the certificate, the client decrypts it with the key, obtains the server's public key from the certificate,
         and uses the server's public key to verify the information in the certificate; if it matches, the server is trusted
      2> The client sends its own certificate to the server. After receiving it, the server decrypts the certificate with the key,
         obtains the client's public key from the certificate, and verifies the certificate information with that public key to confirm whether the client is legitimate
    
  3. Server and client communicate

      After the server and the client have negotiated an encryption scheme, the client generates a random key, encrypts it, and sends it to the server.
      Once the server has received this key, all subsequent communication between the two parties is encrypted with this random key
    

Note: Kubernetes allows multiple authentication methods to be configured at the same time; a request is authenticated as long as any one of them succeeds.

2.3 Authorization Management

Authorization happens after successful authentication: once authentication tells us who the requesting user is, Kubernetes determines, according to predefined authorization policies, whether that user has permission to access the resource. This process is called authorization.

Each request sent to the ApiServer carries user and resource information, such as the requesting user, the request path, and the request action. Authorization compares this information against the authorization policy; if it conforms, the request is considered authorized, otherwise an error is returned.

API Server currently supports the following authorization strategies:

  • AlwaysDeny: rejects all requests; generally used for testing
  • AlwaysAllow: allows all requests, equivalent to the cluster not requiring an authorization process (the Kubernetes default policy)
  • ABAC: attribute-based access control, which matches and controls user requests against user-configured authorization rules
  • Webhook: authorizes users by calling an external REST service
  • Node: a special-purpose mode for controlling requests made by kubelets
  • RBAC: role-based access control (the default option under a kubeadm installation)
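
Which strategies are active is controlled by the kube-apiserver --authorization-mode flag; a kubeadm-installed cluster typically runs with --authorization-mode=Node,RBAC.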

RBAC (Role-Based Access Control) essentially describes one thing: which permissions are granted to which subjects.

It involves the following concepts:

  • Subjects: Users, Groups, ServiceAccounts
  • Role: a set of operations (permissions) defined on resources
  • Binding: binds a defined role to a subject


RBAC introduces 4 top-level resource objects:

  • Role, ClusterRole: roles, used to specify a set of permissions
  • RoleBinding, ClusterRoleBinding: role bindings, used to assign a role (its permissions) to a subject

Role, ClusterRole

A role is a collection of permissions, and the permissions here are of the allow type (a whitelist).

# a Role can only grant access to resources within a namespace; the namespace must be specified
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: dev
  name: authorization-role
rules:
- apiGroups: [""]  # 支持的API组列表,"" 空字符串,表示核心API群
  resources: ["pods"] # 支持的资源对象列表
  verbs: ["get", "watch", "list"] # 允许的对资源对象的操作方法列表
# ClusterRole可以对集群范围内资源、跨namespaces的范围资源、非资源类型进行授权
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
 name: authorization-clusterrole
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

The parameters in rules deserve a more detailed explanation:

  • apiGroups: list of supported API groups

    "","apps", "autoscaling", "batch"
    
  • resources: list of supported resource objects

    "services", "endpoints", "pods","secrets","configmaps","crontabs","deployments","jobs",
    "nodes","rolebindings","clusterroles","daemonsets","replicasets","statefulsets",
    "horizontalpodautoscalers","replicationcontrollers","cronjobs"
    
  • verbs: list of operation methods on resource objects

    "get", "list", "watch", "create", "update", "patch", "delete", "exec"
    

RoleBinding, ClusterRoleBinding

A role binding binds a role to a target subject; the target can be a User, Group, or ServiceAccount.

# a RoleBinding binds a subject in the same namespace to a Role, so that the subject has the permissions defined by that Role
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: authorization-role-binding
  namespace: dev
subjects:
- kind: User
  name: heima
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: authorization-role
  apiGroup: rbac.authorization.k8s.io
# a ClusterRoleBinding binds a subject to a ClusterRole at the cluster level, across all namespaces, granting those permissions
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
 name: authorization-clusterrole-binding
subjects:
- kind: User
  name: heima
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: authorization-clusterrole
  apiGroup: rbac.authorization.k8s.io

RoleBinding references ClusterRole for authorization

A RoleBinding can reference a ClusterRole to grant, within the RoleBinding's own namespace, the permissions that the ClusterRole defines.

    A very common practice is for the cluster administrator to predefine a set of cluster-wide roles (ClusterRoles) and then reuse them across multiple namespaces. This greatly improves the efficiency of authorization management and keeps the basic authorization rules and experience consistent across namespaces.
# although authorization-clusterrole is a cluster-level role, because a RoleBinding is used,
# heima can only read resources in the dev namespace
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: authorization-role-binding-ns
  namespace: dev
subjects:
- kind: User
  name: heima
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: authorization-clusterrole
  apiGroup: rbac.authorization.k8s.io

Hands-on: create an account that can only manage Pod resources in the dev namespace

  1. Create an account
# 1) generate the private key
[root@k8s-master01 pki]# cd /etc/kubernetes/pki/
[root@k8s-master01 pki]# (umask 077;openssl genrsa -out devman.key 2048)

# 2) sign it with the apiserver's CA certificate
# 2-1) create the signing request; the requesting user is devman and the group is devgroup
[root@k8s-master01 pki]# openssl req -new -key devman.key -out devman.csr -subj "/CN=devman/O=devgroup"     
# 2-2) sign the certificate
[root@k8s-master01 pki]# openssl x509 -req -in devman.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out devman.crt -days 3650

# 3) set the cluster, user, and context information
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes --embed-certs=true --certificate-authority=/etc/kubernetes/pki/ca.crt --server=https://192.168.109.100:6443

[root@k8s-master01 pki]# kubectl config set-credentials devman --embed-certs=true --client-certificate=/etc/kubernetes/pki/devman.crt --client-key=/etc/kubernetes/pki/devman.key

[root@k8s-master01 pki]# kubectl config set-context devman@kubernetes --cluster=kubernetes --user=devman

# switch to the devman account
[root@k8s-master01 pki]# kubectl config use-context devman@kubernetes
Switched to context "devman@kubernetes".

# list the pods in dev: no permission
[root@k8s-master01 pki]# kubectl get pods -n dev
Error from server (Forbidden): pods is forbidden: User "devman" cannot list resource "pods" in API group "" in the namespace "dev"

# switch back to the admin account
[root@k8s-master01 pki]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".

  2. Create a Role and RoleBinding to authorize the devman user

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: dev
  name: dev-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
  
---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: authorization-role-binding
  namespace: dev
subjects:
- kind: User
  name: devman
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-role
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master01 pki]# kubectl create -f dev-role.yaml
role.rbac.authorization.k8s.io/dev-role created
rolebinding.rbac.authorization.k8s.io/authorization-role-binding created
  3. Switch accounts and verify again
# switch to the devman account
[root@k8s-master01 pki]# kubectl config use-context devman@kubernetes
Switched to context "devman@kubernetes".

# list the pods again
[root@k8s-master01 pki]# kubectl get pods -n dev
NAME                                 READY   STATUS             RESTARTS   AGE
nginx-deployment-66cb59b984-8wp2k    1/1     Running            0          4d1h
nginx-deployment-66cb59b984-dc46j    1/1     Running            0          4d1h
nginx-deployment-66cb59b984-thfck    1/1     Running            0          4d1h

# switch back to the admin account so the following material is not affected
[root@k8s-master01 pki]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".

2.4 Admission Control

After a request has passed authentication and authorization, the apiserver only processes it once it has also passed admission control.

Admission control is a configurable list of controllers; the admission controllers to run are selected on the Api-Server command line:

--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,
                      DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds
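
Note: on newer Kubernetes versions the --admission-control flag has been replaced by --enable-admission-plugins (and --disable-admission-plugins); the list of plugins is configured in the same way.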

Only when the request passes the checks of all admission controllers does the apiserver execute it; otherwise the request is rejected.

The currently configurable admission controllers are as follows:

  • AlwaysAdmit: allows all requests
  • AlwaysDeny: denies all requests; generally used for testing
  • AlwaysPullImages: always pulls the image before starting a container
  • DenyExecOnPrivileged: intercepts all requests to execute commands in a privileged container
  • ImagePolicyWebhook: allows a backend webhook program to perform the admission-controller function
  • ServiceAccount: implements the automation around ServiceAccounts
  • SecurityContextDeny: invalidates the SecurityContext definitions used in Pods
  • ResourceQuota: for resource quota management; observes all requests to ensure a namespace's quota is not exceeded
  • LimitRanger: for resource limit management within a namespace, ensuring Pods stay within resource limits
  • InitialResources: for Pods without resource requests and limits, sets them based on the image's historical resource usage
  • NamespaceLifecycle: rejects requests that try to create resource objects in a namespace that does not exist; when a namespace is deleted, the system deletes all objects in it
  • DefaultStorageClass: to enable dynamic provisioning of shared storage, tries to match a default StorageClass to PVCs that do not specify one, minimizing the backend storage details users need to know when requesting a PVC
  • DefaultTolerationSeconds: for Pods that do not set tolerations for the notready:NoExecute and unreachable:NoExecute taints, sets a default toleration time of 5 minutes
  • PodSecurityPolicy: when a Pod is created or modified, decides whether to admit it based on the Pod's security context and the available PodSecurityPolicies

3. Dashboard

So far everything in Kubernetes has been done with the command-line tool kubectl. To provide a richer user experience, Kubernetes also offers a web-based user interface, the Dashboard. With the Dashboard, users can deploy containerized applications, monitor application status, troubleshoot, and manage the various Kubernetes resources.

3.1 Deploy Dashboard

1) Download the YAML and deploy the Dashboard
# download the yaml
[root@k8s-master01 ~]# wget  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

# change the Service type of kubernetes-dashboard
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort  # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30009  # added
  selector:
    k8s-app: kubernetes-dashboard

# deploy
[root@k8s-master01 ~]# kubectl create -f recommended.yaml

# check the resources under the kubernetes-dashboard namespace
[root@k8s-master01 ~]# kubectl get pod,svc -n kubernetes-dashboard
NAME                                            READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-c79c65bb7-zwfvw   1/1     Running   0          111s
pod/kubernetes-dashboard-56484d4c5-z95z5        1/1     Running   0          111s

NAME                               TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)         AGE
service/dashboard-metrics-scraper  ClusterIP  10.96.89.218    <none>       8000/TCP        111s
service/kubernetes-dashboard       NodePort   10.104.178.171  <none>       443:30009/TCP   111s

2) Create an access account and get token

# create the account
[root@k8s-master01-1 ~]# kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard

# grant authorization
[root@k8s-master01-1 ~]# kubectl create clusterrolebinding dashboard-admin-rb --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin

# get the account token
[root@k8s-master01 ~]#  kubectl get secrets -n kubernetes-dashboard | grep dashboard-admin
dashboard-admin-token-xbqhh        kubernetes.io/service-account-token   3      2m35s

[root@k8s-master01 ~]# kubectl describe secrets dashboard-admin-token-xbqhh -n kubernetes-dashboard
Name:         dashboard-admin-token-xbqhh
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 95d84d80-be7a-4d10-a2e0-68f90222d039

Type:  kubernetes.io/service-account-token

Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImJrYkF4bW5XcDhWcmNGUGJtek5NODFuSXl1aWptMmU2M3o4LTY5a2FKS2cifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4teGJxaGgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOTVkODRkODAtYmU3YS00ZDEwLWEyZTAtNjhmOTAyMjJkMDM5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.NAl7e8ZfWWdDoPxkqzJzTB46sK9E8iuJYnUI9vnBaY3Jts7T1g1msjsBnbxzQSYgAG--cV0WYxjndzJY_UWCwaGPrQrt_GunxmOK9AUnzURqm55GR2RXIZtjsWVP2EBatsDgHRmuUbQvTFOvdJB4x3nXcYLN2opAaMqg3rnU2rr-A8zCrIuX_eca12wIp_QiuP3SF-tzpdLpsyRfegTJZl6YnSGyaVkC9id-cxZRb307qdCfXPfCHR_2rt5FVfxARgg_C0e3eFHaaYQO7CitxsnIoIXpOFNAR8aUrmopJyODQIPqBWUehb7FhlU1DCduHnIIXVC_UICZ-MKYewBDLw
ca.crt:     1025 bytes
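
Note: on Kubernetes v1.24 and later a token Secret is no longer created automatically for a ServiceAccount; in that case a token can be requested directly, for example with kubectl create token dashboard-admin -n kubernetes-dashboard.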

3) Access the Dashboard UI through a browser

Enter the above token on the login page


If the following page appears, the login succeeded.


3.2 Using DashBoard

This section uses a Deployment as an example to demonstrate the Dashboard.

Viewing

Select the namespace dev, then click Deployments to view all deployments in the dev namespace.


Scaling

Click Scale on a Deployment, specify the target number of replicas, and click OK.


Editing

Click Edit on a Deployment, modify the YAML file, and click OK.


View Pods

Click Pods to view the list of pods.


Operating on Pods

Select a Pod and you can view its logs, exec into it, edit it, or delete it.


The Dashboard provides most of kubectl's functionality, which will not be demonstrated further here.
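
For reference, the scale and edit operations shown above correspond to ordinary kubectl commands (the deployment name is the one from the earlier example):

# scale the deployment to 3 replicas
[root@k8s-master01 ~]# kubectl scale deployment nginx-deployment --replicas=3 -n dev
# edit the deployment's YAML
[root@k8s-master01 ~]# kubectl edit deployment nginx-deployment -n dev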


Reprinted from: blog.csdn.net/qq_33417321/article/details/122140060