With CRI-O as the runtime, starting a container no longer depends on any Docker components, and the container's process tree is simpler. With CRI-O, the process tree for an nginx container looks like: root process (1) -> conmon -> nginx. conmon sits between crio and runc (the OCI runtime implementation) and monitors the container on behalf of crio after it starts; see the conmon project for details.
```
root     15586     1  0 16:49 ?  00:00:00 /usr/local/bin/conmon --syslog -c a4f089f6b251c6269e2f79c41cec0317f4a65729b6075c77bbf4337206050501 -n k8s_nginx-test_nginx-test-24cjg_default_55bbcfe7-d63c-468b-bbcc-35a8b6c71eb9
root     15609 15586  0 16:49 ?  00:00:00 nginx: master process nginx -g daemon off;
```
Install minikube
Install CRI-O (steps from the official documentation):
- Install the dependencies
```
yum install -y \
  btrfs-progs-devel \
  containers-common \
  device-mapper-devel \
  git \
  glib2-devel \
  glibc-devel \
  glibc-static \
  go \
  gpgme-devel \
  libassuan-devel \
  libgpg-error-devel \
  libseccomp-devel \
  libselinux-devel \
  pkgconfig \
  runc
```
- Build CRI-O. Build tags can be passed to `make` at compile time. Building current CRI-O requires Go 1.12.x.
```
git clone https://github.com/cri-o/cri-o # or your fork
cd cri-o
make
sudo make install
```
- Build conmon
```
git clone https://github.com/containers/conmon
cd conmon
make
sudo make install
```
CRI-O's default configuration file is /etc/crio/crio.conf; running `crio config --default > /etc/crio/crio.conf` generates it.
Set up the CNI network (steps from the official documentation)
```
git clone https://github.com/containernetworking/plugins
cd plugins
git checkout v0.8.1
./build_linux.sh # or build_windows.sh
sudo mkdir -p /opt/cni/bin
sudo cp bin/* /opt/cni/bin/
```
- After building the CNI plugins, copy the binaries from /opt/cni/bin into the directory set by crio.network.plugin_dir in /etc/crio/crio.conf (the default is /usr/libexec/cni), and place the CNI configuration files in the directory set by crio.network.network_dir.
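For reference, the relevant section of /etc/crio/crio.conf might look like the sketch below. Key names follow crio.conf(5), but exact defaults can differ between CRI-O versions, so treat the values as an illustration rather than the canonical configuration:

```toml
# Sketch of the network section in /etc/crio/crio.conf
[crio.network]
# Directory in which CRI-O looks for CNI configuration files
network_dir = "/etc/cni/net.d/"
# Directory in which CRI-O looks for CNI plugin binaries
plugin_dir = "/opt/cni/bin/"
```

With these values, the plugin binaries built in the previous step are picked up from /opt/cni/bin without further copying.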
Start CRI-O
- Run the following in the cri-o source directory to start CRI-O
```
sudo make install.systemd
sudo systemctl daemon-reload
sudo systemctl enable crio
sudo systemctl start crio
```
Use the crio-status command
- Run `crio-status config` to view the current crio configuration
Install the CRI-O command-line tool crictl
- crictl usage is similar to the docker CLI; see the official documentation for details
```
# go get github.com/kubernetes-sigs/cri-tools/cmd/crictl
# cp /root/go/bin/crictl /usr/local/bin
# crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.15.1-dev
RuntimeApiVersion:  v1alpha1
```
- By default, crictl reads the runtime-endpoint configuration from /etc/crictl.yaml
```
# cat /etc/crictl.yaml
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
```
Start minikube configured to use CRI-O
minikube start --container-runtime=cri-o --vm-driver=none
Basic crictl usage
After `minikube start` brings up the component pods, `crictl ps` shows the running containers; the last column is the POD ID. See crictl for details.
```
[root@iZj6cid8uez7g44i1t0k7tZ net.d]# crictl ps
CONTAINER       IMAGE                                                              CREATED             STATE     NAME                      ATTEMPT   POD ID
b69e8be1ef2b0   gcr.io/k8s-minikube/storage-provisioner@sha256:088daa9fcbccf04c3f415d77d5a6360d2803922190b675cb7fc88a9d2d91985a   About an hour ago   Running   storage-provisioner       0         282d5beebf847
dd57045952649   bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   About an hour ago   Running   coredns                   0         4f7a8f3cac5c4
a9df5247ede0f   bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   About an hour ago   Running   coredns                   0         6448effa2f7cd
dc1027c8d94c5   c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   About an hour ago   Running   kube-proxy                0         0436f736f2a4a
25cb103bc2e1e   k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616   About an hour ago   Running   kube-addon-manager        0         85ceee77c5c70
cf7378a82993d   301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   About an hour ago   Running   kube-scheduler            0         baf3c10a81831
60d9bcf7a4b83   06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   About an hour ago   Running   kube-controller-manager   0         877a92f202a5f
7a67b324cd8c7   b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   About an hour ago   Running   etcd                      0         74fe384e1645b
355ba11ac783f   b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   About an hour ago   Running   kube-apiserver            0         d112f1dc64113
```
Use `crictl inspect CONTAINER_ID` to view the details of a container, and `crictl inspectp POD_ID` to view the details of a pod. The sandboxId value printed by `crictl inspect CONTAINER_ID | grep sandboxId` is the POD_ID of the pod the container belongs to.
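This container-to-pod mapping can be sketched with plain shell. The JSON below is a hypothetical, trimmed sample of `crictl inspect` output (the real output is much larger, and the exact field name, sandboxID vs sandboxId, can vary between CRI-O versions):

```shell
# Write a trimmed, hypothetical sample of `crictl inspect CONTAINER_ID` output
cat > /tmp/inspect.json <<'EOF'
{
  "status": { "id": "b69e8be1ef2b0" },
  "info": { "sandboxID": "282d5beebf847" }
}
EOF

# Pull out the sandbox ID field and keep only its value; this value is the
# POD ID shown in the last column of `crictl ps`
pod_id=$(grep -o '"sandboxID": *"[^"]*"' /tmp/inspect.json | cut -d'"' -f4)
echo "$pod_id"
```

On a real host, replace the sample file with the live output, e.g. `crictl inspect CONTAINER_ID > /tmp/inspect.json`.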
Configure the flannel plug-in
Start minikube as follows
```
minikube start \
  --extra-config=controller-manager.allocate-node-cidrs=true \
  --extra-config=controller-manager.cluster-cidr=10.233.64.0/18 \
  --extra-config=kubelet.network-plugin=cni \
  --extra-config=kubelet.pod-cidr=10.233.64.0/18 \
  --network-plugin=cni \
  --container-runtime=cri-o \
  --vm-driver=none
```
Install the flannel plug-in per the official instructions and check whether coredns starts normally (coredns stays in the Pending state until CNI is up). Before running the command below, make sure /etc/cni/net.d/ is either empty (flannel generates its configuration automatically) or contains a correct configuration file; otherwise an error occurs.
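The directory check can be sketched as follows. A temporary directory stands in for /etc/cni/net.d/ here so the snippet is safe to try anywhere; on a real host, point `net_d` at /etc/cni/net.d/ instead:

```shell
# Stand-in for /etc/cni/net.d/ (use the real path on an actual host)
net_d=$(mktemp -d)

# An empty directory means flannel is free to generate its own config
if [ -z "$(ls -A "$net_d")" ]; then
    echo "empty: safe to apply the flannel manifest"
else
    echo "not empty, contains: $(ls -A "$net_d")"
fi
```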
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
If coredns reports the following error, the version field in the configuration file under /etc/cni/net.d/ is wrong. Referring to the official flannel configuration, change the cniVersion field to "0.3.1" and coredns will then start normally. Use `crictl inspectp POD_ID` to confirm the pod network parameters match the value passed to minikube via --extra-config=kubelet.pod-cidr.
```
cannot convert version ["" "0.1.0" "0.2.0"] to 0.4.0
```
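A working flannel configuration for /etc/cni/net.d/ might look like the sketch below. The structure follows the conflist shipped in the kube-flannel manifest, but the network name and delegate options are illustrative and may differ in your deployment:

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```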
Once everything starts normally, the flannel network configuration can be seen in /run/flannel/subnet.env:
```
# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.233.64.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
```
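Since subnet.env is a plain KEY=VALUE file, scripts can simply source it to pick up the subnet and MTU flannel chose. The sketch below writes a local copy with the values above so it is self-contained; on a real host, source /run/flannel/subnet.env directly:

```shell
# Local copy of subnet.env so the example runs anywhere
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.233.64.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

# Source the file and use the variables it defines
. /tmp/subnet.env
echo "pod subnet: $FLANNEL_SUBNET, mtu: $FLANNEL_MTU"
```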
Checking the local interfaces shows that the flannel interfaces were created successfully; pods created from this point on will use the flannel network
```
]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:16:3e:04:eb:0e brd ff:ff:ff:ff:ff:ff
3: mybridge: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 0a:be:69:1e:02:70 brd ff:ff:ff:ff:ff:ff
7: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/ether 06:66:cd:4f:d2:9a brd ff:ff:ff:ff:ff:ff
8: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 02:02:85:cf:25:dd brd ff:ff:ff:ff:ff:ff
301: veth1b2b30e0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default
    link/ether 8e:47:4b:b8:10:be brd ff:ff:ff:ff:ff:ff link-netnsid 0
302: veth2147d829@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default
    link/ether fa:3f:fe:5d:91:82 brd ff:ff:ff:ff:ff:ff link-netnsid 1
303: veth54baeef4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default
    link/ether 9a:99:0f:82:ff:34 brd ff:ff:ff:ff:ff:ff link-netnsid 2
```
TIPS:
- CRI-O has the following four configuration files
| File | Description |
| --- | --- |
| crio.conf(5) | CRI-O configuration file |
| policy.json(5) | Signature verification policy file(s) |
| registries.conf(5) | Registries configuration file |
| storage.conf(5) | Storage configuration file |
- If `minikube start` reports `sudo: crictl: command not found`, the fix is to put crictl in the /usr/bin directory (see the issue). The cause is that the path is not in secure_path in /etc/sudoers.
- If `minikube start` reports `[certs] certificate apiserver-kubelet-client is not signed by ca certificate ca: crypto/rsa: verification error`, the fix is to remove /var/lib/minikube/certs (see the issue).
References: