Using PodSecurityPolicy with other controllers in Kubernetes

PodSecurityPolicy (PSP) is a cluster-level Pod security policy: it controls security-sensitive fields of Pods and their volumes, validating and, where configured, defaulting the Security Context of Pods admitted to the cluster.

An admission controller intercepts requests to kube-apiserver. The interception occurs before the requested object is persisted, but after the request has been authenticated and authorized. This lets us inspect the requested object and verify that its contents meet our requirements. Admission controllers are enabled by adding them to the --enable-admission-plugins parameter of kube-apiserver, so to use PSP we need to add it to that parameter, as follows:

--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PodSecurityPolicy
The other plugins in this list are the ones officially recommended by Kubernetes.

Then restart kube-apiserver:

# systemctl daemon-reload
# systemctl restart kube-apiserver
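
A quick sanity check (assuming the binary/systemd deployment used throughout this post) is to read the flag back from the running process; the printed list should contain PodSecurityPolicy:

# ps -ef | grep kube-apiserver | grep -o 'enable-admission-plugins=[^ ]*'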

At this point the PSP admission controller is running. Now let's try creating a Pod:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent

But when we run kubectl get pod, we find there is no Pod. Let's check the status of the Deployment:

# kubectl get deployments
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
nginx                  0/1     0            0           117s
# kubectl get replicasets
NAME                              DESIRED   CURRENT   READY   AGE
nginx-55fc968d9                   1         0         0       2m17s

We see that the ReplicaSet and the Deployment were created successfully, but the ReplicaSet did not create a Pod. This is because no ServiceAccount in the cluster is yet authorized to use a security policy, so Pod creation is rejected by the PSP admission controller. This is where ServiceAccounts come into play.

Under normal circumstances we do not create Pods directly, but through other controllers such as Deployment, StatefulSet, and so on. If we want to use PSP, we need to configure kube-controller-manager to use a separate ServiceAccount for each of the controllers it contains, which is enabled by the following flag:

--use-service-account-credentials=true
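
How this flag is added depends on your installation. For the systemd deployment assumed in this post, a minimal sketch is a drop-in unit (the unit name, drop-in path, and binary path are assumptions; carry your existing flags over into the redefined ExecStart):

# /etc/systemd/system/kube-controller-manager.service.d/10-psp.conf (hypothetical drop-in)
[Service]
# An empty ExecStart= clears the original command line; the second line redefines it.
ExecStart=
ExecStart=/usr/local/bin/kube-controller-manager --use-service-account-credentials=true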

Then restart the controller-manager:

# systemctl daemon-reload
# systemctl restart kube-controller-manager

Kubernetes then automatically generates a ServiceAccount for each controller, and RBAC can grant each of them access to specific policies:

# kubectl get serviceaccount -n kube-system | egrep -o '[A-Za-z0-9-]+-controller'
attachdetach-controller
certificate-controller
clusterrole-aggregation-controller
cronjob-controller
daemon-set-controller
deployment-controller
disruption-controller
endpoint-controller
expand-controller
job-controller
namespace-controller
node-controller
pv-protection-controller
pvc-protection-controller
replicaset-controller
replication-controller
resourcequota-controller
service-account-controller
service-controller
statefulset-controller
traefik-ingress-controller
ttl-controller

PSP gives us a declarative way to express what the users and ServiceAccounts running in our cluster are allowed to create; each policy controls security-sensitive aspects of a Pod such as privileged mode, host namespaces, volume types, and user/group IDs.
For the scenario above, we need two policies:
1. A default restrictive policy that ensures Pods cannot be created with privileged settings;
2. A permissive policy that allows privileged settings for certain Pods, for example Pods created in a specific namespace.

First, create a default policy:
psp-restrictive.yaml

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrictive
spec:
  privileged: false
  hostNetwork: false
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - 'configMap'
  - 'downwardAPI'
  - 'emptyDir'
  - 'persistentVolumeClaim'
  - 'secret'
  - 'projected'
  allowedCapabilities:
  - '*'

Then create this PSP object directly:

# kubectl apply -f psp-restrictive.yaml
podsecuritypolicy.policy/restrictive created
# kubectl get psp
NAME          PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
restrictive   false   *      RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            configMap,downwardAPI,emptyDir,persistentVolumeClaim,secret,projected

Second, create a permissive policy for Pods that need elevated permissions, such as kube-proxy, which requires hostNetwork access:
psp-permissive.yaml

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: permissive
spec:
  privileged: true
  hostNetwork: true
  hostIPC: true
  hostPID: true
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  hostPorts:
  - min: 0
    max: 65535
  volumes:
  - '*'

Then create this PSP object:

# kubectl apply -f psp-permissive.yaml
podsecuritypolicy.policy/permissive created
# kubectl get psp
NAME          PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
permissive    true           RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
restrictive   false   *      RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            configMap,downwardAPI,emptyDir,persistentVolumeClaim,secret,projected

But just deploying these two policies is not enough; we also need RBAC authorization, otherwise our Pods still cannot be created. RBAC determines which policies a ServiceAccount may use: a ClusterRoleBinding grants a ServiceAccount access to a policy cluster-wide (here, the restrictive one), while a RoleBinding grants access within a single namespace (here, the permissive one).

First create a ClusterRole that allows the use of the restrictive policy, and then create a ClusterRoleBinding that binds it to the ServiceAccounts of all controllers:
psp-restrictive-rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restrictive
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - restrictive
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-default
subjects:
- kind: Group
  name: system:serviceaccounts
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: psp-restrictive
  apiGroup: rbac.authorization.k8s.io

Then create the RBAC resource objects:

# kubectl apply -f psp-restrictive-rbac.yaml
clusterrole.rbac.authorization.k8s.io/psp-restrictive created
clusterrolebinding.rbac.authorization.k8s.io/psp-default created

Now we can see that the Pod from our earlier Deployment gets created:

# kubectl get pod
NAME                                    READY   STATUS    RESTARTS   AGE
nginx-55fc968d9-qn2vr                   1/1     Running   0          17m
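
Incidentally, the PSP admission plugin records which policy admitted a Pod in the kubernetes.io/psp annotation; for the Pod above it should read restrictive:

# kubectl get pod nginx-55fc968d9-qn2vr -o yaml | grep kubernetes.io/psp
    kubernetes.io/psp: restrictive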

But what if we add hostNetwork: true to the Deployment manifest? Let's see whether the Pod can still be created:
nginx-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
      hostNetwork: true

Then we create this Deployment again:

# kubectl apply -f nginx-deploy.yaml
deployment.apps/nginx created
# kubectl get pod
NAME                                    READY   STATUS    RESTARTS   AGE
# kubectl get deployments
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
nginx                  0/1     0            0           12s
# kubectl get replicasets
NAME                              DESIRED   CURRENT   READY   AGE
nginx-5cd65fd4c6                  1         0         0       18s

We can see that the Pod was not created. Describing the ReplicaSet reveals the following event:

# kubectl describe rs nginx-5cd65fd4c6
......
Events:
  Type     Reason        Age                   From                   Message
  ----     ------        ----                  ----                   -------
  Warning  FailedCreate  41s (x16 over 3m24s)  replicaset-controller  Error creating: pods "nginx-5cd65fd4c6-" is forbidden: unable to validate against any pod security policy: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used]

This tells us that hostNetwork is not allowed to be used.
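
We can also confirm that this is an authorization gap rather than a policy mistake: the replicaset-controller ServiceAccount is simply not yet allowed to use the permissive policy. A check with kubectl auth can-i (the expected answer at this point is no):

# kubectl auth can-i use podsecuritypolicy/permissive --as=system:serviceaccount:kube-system:replicaset-controller -n kube-system
no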

In some cases, though, we do need privileged settings within a certain namespace. For that we create a ClusterRole that allows the use of the permissive policy, but bind it with a RoleBinding to specific ServiceAccounts only, as follows:
psp-permissive-rbac.yaml

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp-permissive
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - permissive
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-permissive
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-permissive
subjects:
- kind: ServiceAccount
  name: daemon-set-controller
  namespace: kube-system
- kind: ServiceAccount
  name: replicaset-controller
  namespace: kube-system
- kind: ServiceAccount
  name: job-controller
  namespace: kube-system

The above grants the DaemonSet, ReplicaSet, and Job controllers the right to create privileged Pods in kube-system.
Then create the RBAC resource objects:

# kubectl apply -f psp-permissive-rbac.yaml
clusterrole.rbac.authorization.k8s.io/psp-permissive created
rolebinding.rbac.authorization.k8s.io/psp-permissive created
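
Before redeploying, the same check as above should now come back positive for the three bound controllers, for example:

# kubectl auth can-i use podsecuritypolicy/permissive --as=system:serviceaccount:kube-system:replicaset-controller -n kube-system
yes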

Now we define a test Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
      hostNetwork: true

Then create the resource object:

# kubectl apply -f nginx-deploy.yaml
deployment.apps/nginx created
# kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
......
nginx-5cd65fd4c6-7pn9z                 1/1     Running   0          4s

Then we can see that the Pod can be created normally.
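
The admitting policy is again recorded on the Pod; this time it should read permissive:

# kubectl get pod nginx-5cd65fd4c6-7pn9z -n kube-system -o yaml | grep kubernetes.io/psp
    kubernetes.io/psp: permissive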

In addition, there is a more special requirement: allowing only one particular application to use privileged settings within a namespace. For this kind of application we create a dedicated ServiceAccount and then bind it to the permissive policy with a RoleBinding, as follows:
(1) Create a ServiceAccount that may use the privileged policy:

# kubectl create serviceaccount specialsa
serviceaccount/specialsa created

(2) Create the RoleBinding:
specialsa-psp.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: specialsa-psp-permissive
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-permissive
subjects:
- kind: ServiceAccount
  name: specialsa
  namespace: default

Then create the RoleBinding object:

# kubectl apply -f specialsa-psp.yaml
rolebinding.rbac.authorization.k8s.io/specialsa-psp-permissive created
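
A quick check that the binding behaves as intended: specialsa may use the permissive policy, but only inside default, since this is a RoleBinding (expected answers: yes, then no):

# kubectl auth can-i use podsecuritypolicy/permissive --as=system:serviceaccount:default:specialsa -n default
yes
# kubectl auth can-i use podsecuritypolicy/permissive --as=system:serviceaccount:default:specialsa -n kube-system
no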

Then create a test deployment:
ng-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hostnetwork-deploy
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
      hostNetwork: true
      serviceAccountName: specialsa  # note: this SA carries the binding to the permissive policy

Then create the resource object:

# kubectl apply -f ng-deploy.yaml
deployment.apps/nginx-hostnetwork-deploy created
# kubectl get pod
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-hostnetwork-deploy-7b65cf7bbd-g5wl5   1/1     Running   0          2s

We can see that a privileged Pod can now be created in the default namespace.

Finish


Origin blog.51cto.com/15080014/2654485