Kubernetes PV/PVC in Practice

Reprinted from https://www.cnblogs.com/ericnie/p/7733281.html

 

The concepts of PV and PVC themselves are not explained here; PV and PVC were already used for the registry earlier. This time the goal is to put the WebLogic Server logs on external storage. The process is as follows:

First, create the directory on the target node where the logs will be written, for example /k8s/weblogic.

Add an NFS export for it to /etc/exports:

[root@k8s-node-1 weblogic]# cat /etc/exports
/k8s/test *(insecure,rw,async,no_root_squash)
/k8s/weblogic  *(insecure,rw,async,no_root_squash)

Restart NFS:

service nfs restart
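
The export table can usually also be reloaded without a full service restart (a minimal sketch; exact behavior may vary by distribution):

# re-export everything listed in /etc/exports
exportfs -ra
# show what is currently exported
exportfs -v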

Whether the NFS export works can be verified by mounting it manually:

mount -t nfs -o rw 192.168.0.103:/k8s/weblogic  /mnt/nfs
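
Another quick check is to list the exports from the client side (a sketch, assuming the showmount tool from nfs-utils is installed):

showmount -e 192.168.0.103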

 

Build a PV:

[root@k8s-master pv]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /k8s/weblogic
    server: 192.168.0.103

Then build a PVC:

[root@k8s-master pv]# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: weblogiclogs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
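Both objects are then created from the YAML files (a minimal sketch; on newer clusters kubectl apply -f works as well):

[root@k8s-master pv]# kubectl create -f pv.yaml
[root@k8s-master pv]# kubectl create -f pvc.yaml
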

Check whether the PV and PVC are bound with kubectl get pv:

[root@k8s-master pv]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                  REASON    AGE
pv0003    5Gi        RWO           Recycle         Bound     default/weblogiclogs             41m
pv01      20Gi       RWX           Recycle         Bound     default/myclaim2                 36d

There was a problem here: the CLAIM column of the newly created PV stayed empty the whole time. Looking at the PV and PVC configurations, there is no relationship between them by name; digging further, it turns out that binding is driven by matching the requested capacity and access modes. Because the examples were copied from a book, one side said 5Gi and the other 8Gi, so they could never match; after making the sizes consistent, the binding succeeded.
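
If relying on size matching feels too loose, the PVC can also be pinned to a specific PV by name (a sketch using the PV from above; spec.volumeName is a standard PVC field):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: weblogiclogs
spec:
  volumeName: pv0003    # bind explicitly to this PV instead of matching by size
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi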

Finally, the WebLogic RC and Service configuration:

[root@k8s-master pv]# cat rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: helloworld-service
spec:
  replicas: 1
  template:
    metadata:
      labels:
        weblogic-app: "helloworld"
        version: "0.1"
    spec:
      containers:
      - name: weblogichelloworld
        image: 1213-helloworld:v1
        volumeMounts:
        - mountPath: "/u01/oracle/user_projects/domains/base_domain/servers/AdminServer/logs"
          name: mypd
        ports:
        - containerPort: 7001
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: weblogiclogs
---
apiVersion: v1
kind: Service
metadata:
  name: helloworldsvc
  labels:
    weblogic-app: helloworld
spec:
  type: NodePort
  ports:
  - port: 7001
    protocol: TCP
    targetPort: 7001
    name: http
    nodePort: 30005
  selector:
    weblogic-app: helloworld
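The RC and Service are created the same way, after which the pod can be watched until it reaches Running (a sketch):

[root@k8s-master pv]# kubectl create -f rc.yaml
[root@k8s-master pv]# kubectl get pods -l weblogic-app=helloworld -w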

There was another problem here. The initial idea was to mount the PV over everything under servers/AdminServer, but the pod then failed to start, presumably because the empty NFS volume shadows the files WebLogic expects under that directory. After narrowing the mountPath down to just the logs directory, startup succeeded.

After that, a full screen of logs showed up in the PV.
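
This can be double-checked on the NFS server itself (a sketch; the actual log file names depend on the domain and server configuration, AdminServer.log is only the usual default):

[root@k8s-node-1 weblogic]# ls /k8s/weblogic
[root@k8s-node-1 weblogic]# tail -f /k8s/weblogic/AdminServer.log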

 

==========================================================================

It is worth pointing out that this PV/PVC approach is not a good way to store WebLogic logs. If the RC is scaled out to multiple WebLogic pods, multiple AdminServers would read and write the same directory and the same files (they are all AdminServers, so the paths are identical), which is bound to corrupt the log files; the setup above is therefore only a proof of concept. The officially recommended EFK stack is a better fit for log collection: it captures the application logs and also records the pod name. Which scenarios PV/PVC suits best is a topic for further discussion.
