How to use Init Containers in Kubernetes

A Pod can contain multiple containers in which applications run. A Pod can also have one or more Init Containers, which run before the application containers are started.

1. What is an Init Container?

An Init Container is a special container that, as the name suggests, performs initialization work. A Pod can define one or more of them; if there are multiple, they run one at a time in the order they are defined. Only after all Init Containers have run to completion is the main container started.

All containers in a Pod share data volumes and the network namespace, so data generated by an Init Container can be consumed by the main container. Init Containers are essentially the same as application containers, except for the following two points:

  1. Init Containers do not support lifecycle, livenessProbe, readinessProbe or startupProbe, because they must run to completion before the Pod can become ready; they are one-shot tasks that run once and then exit.

  2. Each Init Container must complete successfully before the next one is started.

If an Init Container fails, Kubernetes keeps restarting it until it succeeds. However, if the Pod's restartPolicy is Never, a failed Init Container is not restarted and the whole Pod is treated as failed.
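As a quick, hypothetical sketch of this behavior (the Pod name and the failing command are made up for illustration), the following Pod combines restartPolicy: Never with an Init Container that always fails, so the Pod ends up failed instead of being retried:

init-fail-demo.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never            # a failed Init Container marks the whole Pod as failed
  initContainers:
  - name: always-fails
    image: busybox:1.28
    command: ['sh', '-c', 'echo simulating a failed init step; exit 1']
  containers:
  - name: app
    image: busybox:1.28
    # never reached, because the Init Container above never succeeds
    command: ['sh', '-c', 'echo app started && sleep 3600']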

Pod life cycle:

As the life-cycle diagram above shows, Init Containers run independently of the main containers, but both belong to the same Pod life cycle.

2. Application scenarios

  • Wait for other services it depends on (such as a database or some backend service) to be up and running

  • Generate the configuration files the service needs from environment variables or configuration templates (see the sketch after this list)

  • Fetch the local configuration it needs from a remote database, or register the Pod itself in a central database

  • Download relevant dependency packages, or perform some pre-configuration operations on the system
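As a concrete illustration of the configuration-generation scenario above (and of the shared data volumes mentioned in section 1), here is a minimal, hypothetical sketch: the Init Container renders a tiny config file from an environment variable into an emptyDir volume, and the main container then reads it. The Pod name, paths and values are made up for illustration.

config-init-demo.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: config-init-demo
spec:
  initContainers:
  - name: render-config
    image: busybox:1.28
    env:
    - name: BACKEND_URL
      value: "http://myservice:80"    # example value
    # write a config file derived from the environment variable into the shared volume
    command: ['sh', '-c', 'echo "backend=$BACKEND_URL" > /work-dir/app.conf']
    volumeMounts:
    - name: workdir
      mountPath: /work-dir
  containers:
  - name: app
    image: busybox:1.28
    # the main container sees the file generated by the Init Container
    command: ['sh', '-c', 'cat /work-dir/app.conf && sleep 3600']
    volumeMounts:
    - name: workdir
      mountPath: /work-dir
  volumes:
  - name: workdir
    emptyDir: {}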

3. Simple example

Application containers are defined in the Pod's spec.containers, which is a required field, while Init Containers are defined in spec.initContainers, which is optional.

The following example defines a simple Pod with two Init Containers. The first waits for the myservice Service to become resolvable, and the second waits for mydb. Once both Init Containers complete, the Pod starts the application container from its spec section.

myapp.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app.kubernetes.io/name: MyApp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]

Create the Pod:

[root@localhost ~]# kubectl apply -f myapp.yaml
pod/myapp-pod created

Check the status (Init:0/2 means that neither of the two Init Containers has completed yet):

[root@localhost ~]# kubectl get -f myapp.yaml    
NAME        READY   STATUS     RESTARTS   AGE
myapp-pod   0/1     Init:0/2   0          8s

View the details:

[root@localhost ~]# kubectl describe -f myapp.yaml  
Name:         myapp-pod
Namespace:    default
[...]
Labels:       app.kubernetes.io/name=MyApp
Annotations:  <none>
Status:       Pending
[...]
Init Containers:
  init-myservice:
[...]
    State:          Running
[...]
  init-mydb:
[...]
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
[...]
Containers:
  myapp-container:
[...]
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
[...]
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  20s   default-scheduler  Successfully assigned default/myapp-pod to localhost.localdomain
  Normal  Pulling    17s   kubelet            Pulling image "busybox:1.28"
  Normal  Pulled     8s    kubelet            Successfully pulled image "busybox:1.28" in 9.30472043s
  Normal  Created    7s    kubelet            Created container init-myservice
  Normal  Started    6s    kubelet            Started container init-myservice

View the logs of the Init container in the Pod:

[root@localhost ~]# kubectl logs myapp-pod -c init-myservice   # inspect the first Init container
nslookup: can't resolve 'myservice.default.svc.cluster.local'
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
 
waiting for myservice
 
 
 
[root@localhost ~]# kubectl logs myapp-pod -c init-mydb     # inspect the second Init container
Error from server (BadRequest): container "init-mydb" in pod "myapp-pod" is waiting to start: PodInitializing

At this point, the init-mydb container is waiting for init-myservice to complete before it runs. The following manifest creates the two Services that the Init Containers are waiting for. services.yaml:

---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377

Create the Services:

[root@localhost ~]# kubectl apply -f services.yaml
service/myservice created
service/mydb created

Check the status again: the Pod has changed to Running.

[root@localhost ~]# kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          2m35s
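You can also keep a watch on the Pod and see its STATUS change as the Services come up; --watch is a standard kubectl flag:

[root@localhost ~]# kubectl get -f myapp.yaml --watch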

Describing the Pod again at this point shows that both init-myservice and init-mydb are now Terminated with reason Completed.

Init Containers:
  init-myservice:
[...]
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
[...]
  init-mydb:
[...]
    State:          Terminated
      Reason:       Completed
      Exit Code:    0

4. The new Sidecar feature

Kubernetes 1.28 ships a number of notable features, and one of the most interesting is native Sidecar support, currently in alpha. Until now, "sidecar" was only a multi-container design pattern: from Kubernetes' point of view a sidecar was just an ordinary container, and because its life cycle does not match the business container's, managing a sidecar's life cycle has always been a problem.

The new-style sidecar is declared in initContainers with restartPolicy set to Always. It starts during Pod initialization but keeps running alongside the application containers, and its restart behavior is the same as that of an ordinary container. This also makes sidecars practical for Jobs, because a running sidecar no longer prevents the Job from completing.
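For the Job case, a minimal hypothetical sketch might look like the following; the essential part is the sidecar declared under initContainers with restartPolicy: Always, which keeps shipping logs while the worker runs but does not block the Job from completing. The names and commands are made up for illustration.

job-sidecar.yaml:

apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-sidecar
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: alpine:latest
        # the actual Job workload; once it exits, the Job can complete
        command: ['sh', '-c', 'for i in $(seq 1 10); do echo working >> /opt/logs.txt; sleep 1; done']
        volumeMounts:
        - name: data
          mountPath: /opt
      initContainers:
      - name: logshipper              # native sidecar
        image: alpine:latest
        restartPolicy: Always         # marks this init container as a sidecar
        command: ['sh', '-c', 'tail -F /opt/logs.txt']
        volumeMounts:
        - name: data
          mountPath: /opt
      volumes:
      - name: data
        emptyDir: {}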

The following is an example of a Deployment with a sidecar: the logshipper sidecar container tails the log file to its stdout, while the main container simulates writing logs. sidecar.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: alpine:latest
          command: ['sh', '-c', 'while true; do echo "logging" >> /opt/logs.txt; sleep 1; done']
          volumeMounts:
            - name: data
              mountPath: /opt
      initContainers:
        - name: logshipper # the sidecar container
          image: alpine:latest
          restartPolicy: Always # restartPolicy must be Always to turn this init container into a sidecar
          command: ['sh', '-c', 'tail -F /opt/logs.txt'] # -F keeps retrying until the main container creates the file
          volumeMounts:
            - name: data
              mountPath: /opt
      volumes:
        - name: data
          emptyDir: {}

After deploying it to the K8s cluster, you can inspect the initContainers[*].restartPolicy field:

[root@localhost ~]# kubectl create -f sidecar.yaml
deployment.apps/myapp created
 
[root@localhost ~]# kubectl get po -l app=myapp -ojsonpath='{.items[0].spec.initContainers[0].restartPolicy}'
Always
 
[root@localhost ~]# kubectl get po  -l app=myapp 
NAME                    READY   STATUS    RESTARTS   AGE
myapp-215h3248d-p4z6   2/2     Running   0          1m5s

Both containers in the myapp Pod are Ready (2/2). Tailing the logs shows that the log sidecar keeps printing output:

[root@localhost ~]# kubectl logs -l app=myapp -c logshipper -f
logging
logging
