Master important k8s (Kubernetes) concepts by building MySQL (Part 1): Network and persistent volumes

The previous article, "Grasp k8s (Kubernetes) core concepts by example", explained the core concepts of k8s. With those, the overall skeleton is complete and stateless programs are fully covered, but that is not the whole picture. Applications divide into two kinds: stateless and stateful. Front-end and back-end programs are generally stateless, while a database is stateful: it needs its data persisted, so that even after a power failure the data is not lost. To build stateful applications we need a few additional k8s concepts. They are not part of the core, but they are still very important; there are three of them: persistent volumes, the network, and configuration parameters. With these basics you have full coverage and a solid start with k8s. We will get familiar with these concepts by building MySQL on k8s. A container itself is stateless: when something goes wrong it can be destroyed at any time, and the data stored in it is lost with it. MySQL needs a persistence layer so that its data still exists after the container is destroyed; k8s calls this a persistent volume.

Create and verify the MySQL image:

Before installing MySQL on k8s, first verify the MySQL image with Docker:

docker run --name test-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root -d mysql:5.7

"MYSQL_ROOT_PASSWORD=root" sets the password of the MySQL root user to "root" when the container is created. "test-mysql" is the container name. "mysql:5.7" pulls version 5.7 of the "mysql" image from the Docker registry. The latest version, 8.0, is not used here because it is not compatible with older clients and would require many changes. The image is based on a full Linux distribution, so it is fairly large, around 400 MB.

After the container is up, type "docker logs test-mysql" to view the log.

...
2019-10-03T06:18:50.439784Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.17'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server - GPL.
2019-10-03T06:18:50.446543Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/var/run/mysqld/mysqlx.sock' bind-address: '::' port: 33060

Check the container status.

vagrant@ubuntu-xenial:~$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                 NAMES
3b9c50420f5b        mysql:latest        "docker-entrypoint.s…"   11 minutes ago      Up 11 minutes       3306/tcp, 33060/tcp   test-mysql

To verify MySQL, install the MySQL client on the virtual machine.

sudo apt-get -y -f install mysql-client

When that finishes, type "docker inspect test-mysql" to find the container's IP address (or print it directly with "docker inspect -f '{{ .NetworkSettings.IPAddress }}' test-mysql"); in the output below, "172.17.0.2" is the container's IP address.

vagrant@ubuntu-xenial:~$ docker inspect test-mysql
...
 "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
...

Type "mysql -h 172.17.0.2 -P 3306 --protocol=tcp -u root -p" to log in to MySQL. "172.17.0.2" is the MySQL container's IP address, "3306" is the MySQL port opened when the container was created, "root" is the user name, and "-p" tells the client to prompt for a password. After typing the command, enter the password at the prompt, and you are connected to MySQL.

vagrant@ubuntu-xenial:~$ mysql -h 172.17.0.2 -P 3306 --protocol=tcp -u root -p
...
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.27 MySQL Community Server (GPL)
...

Installing MySQL on k8s

Installing MySQL on k8s has three parts: creating the deployment file, creating the service file, and installation and testing.

Deployment file

The following is the deployment configuration file. The previous article explained the file format in detail; all k8s configuration files share the same format. Everything above "template" is deployment configuration; from "template" downward is the Pod configuration, and from "containers" downward is the configuration of the containers inside the Pod. "env:" defines environment variables; here they set the database user name and password, which will be explained in detail later. The MySQL port is "3306".

apiVersion: apps/v1
kind: Deployment  # the type is Deployment
metadata:
  name: mysql-deployment  # the object's name
spec:
  selector:
    matchLabels:
      app: mysql # binds Pods whose label is "mysql"
  strategy:
    type: Recreate
  template:   # the Pod definition starts here
    metadata:
      labels:
        app: mysql  # the Pod's label, used to identify the Pod
    spec:
      containers: # the containers inside the Pod start here
        - image: mysql:5.7
          name: mysql-con
          imagePullPolicy: Never
          env:   # environment variables
            - name: MYSQL_ROOT_PASSWORD  # variable name
              value: root  # variable value
            - name: MYSQL_USER
              value: dbuser
            - name: MYSQL_PASSWORD
              value: dbuser
          args: ["--default-authentication-plugin=mysql_native_password"]
          ports:
            - containerPort: 3306 # the MySQL port
              name: mysql
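One note on the "env" section above: for a local experiment it is fine to put the passwords in the file in plain text, but k8s also provides Secrets for exactly this purpose. A minimal sketch, assuming a Secret named "mysql-secret" (the name and key are made up for illustration; values are base64-encoded, and "cm9vdA==" is "root"):

```yaml
# Hypothetical Secret holding the root password.
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  root-password: cm9vdA==
# In the container spec, the literal value would then be replaced by:
#   env:
#     - name: MYSQL_ROOT_PASSWORD
#       valueFrom:
#         secretKeyRef:
#           name: mysql-secret
#           key: root-password
```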

Service file

The following is the service configuration file. It is basically the same as the one explained in the previous article, so it is not explained again here.

apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  type: NodePort
  selector:
      app: mysql
  ports:
  - protocol: TCP
    nodePort: 30306
    port: 3306
    targetPort: 3306 

Installation and testing:

With the configuration files ready, we can create MySQL. Objects must be created in order, starting from the lowest-level one.

Create the deployment and the service:

kubectl apply -f mysql-deployment.yaml
kubectl apply -f mysql-service.yaml

View the services:

vagrant@ubuntu-xenial:~$ kubectl get service
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP          3h42m
mysql-service   NodePort    10.102.253.32   <none>        3306:30306/TCP   3h21m

"mysql-service" has two ports (PORT(S)): "3306" is the port inside k8s, and "30306" is the external port. Because "NodePort" opens an external port, MySQL can now be accessed on the virtual machine through port "30306".

vagrant@ubuntu-xenial:~$  mysql -h localhost -P 30306 --protocol=tcp -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.7.27 MySQL Community Server (GPL)
...
mysql>

The local virtual machine is now connected to k8s; the next step is to access MySQL from a graphical client on the host machine (the laptop). I set up a private network in Vagrant and gave the virtual machine the IP address "192.168.50.4"; use that address and port 30306 to access MySQL.


Network:

The network here has two layers of meaning. One is the k8s network: internal k8s services must be able to reach each other, and internal services must be reachable from outside the k8s cluster. The other is the network between the host (the laptop) and the virtual machine, that is, the host must be able to reach the virtual machine. Once both layers are connected, you can access MySQL inside the k8s cluster directly from the host.

k8s network:

The k8s network likewise has two aspects. One is inside the cluster: k8s has internal DNS, so services can be addressed by service name. The other is reaching services inside the cluster from outside it; there are four ways in total, described in detail in "Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?"

  • LoadBalancer: the load balancer is not managed by k8s; k8s forwards requests to internal services through an external load balancer. This method requires a load balancer, which cloud environments generally provide but a local environment does not. Minikube, however, ships a program that can simulate one: just type "minikube tunnel" and it will simulate a load balancer and forward requests. But when you use "LoadBalancer" this way (in a Minikube environment), the service IP and port are random on every run and cannot be controlled, which is inconvenient; in a real cloud environment this is not a problem.

Here is the service information; "EXTERNAL-IP" is "pending", which means the external network is not connected.

$ kubectl get service
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP      10.96.0.1        <none>        443/TCP        31d
nginx-service   LoadBalancer   10.104.228.212   <pending>     80:31999/TCP   45h

Here is the service information after running "minikube tunnel" (in another window); "EXTERNAL-IP" is now "10.104.228.212". The Minikube load balancer is working, and internal k8s services can now be reached from outside through that IP address. "80" is the internal k8s port, "31999" the external one.

$ kubectl get service
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
kubernetes      ClusterIP      10.96.0.1        <none>           443/TCP        31d
nginx-service   LoadBalancer   10.104.228.212   10.104.228.212   80:31999/TCP   45h 

This is a good method, but you cannot control the IP address and port, so I do not use it.

  • NodePort: this method opens a port on every Node; every request sent to this port is forwarded to a service. Its advantage is that you can specify a fixed port (the port range is limited to 30000-32767), so I do not have to change the MySQL port every time I access it from the laptop. If you do not specify one, the system assigns one at random. The disadvantages are that each port can serve only one service and the port range is limited, so it is not suitable for production. But in my setup this is not a problem, because I use Vagrant to fix the virtual machine's IP address; so it is the best choice here.
  • ClusterIP: this can only be addressed inside the k8s cluster.

  • Ingress: this is the recommended method and is generally used in production. The problem with LoadBalancer is that every service needs its own load balancer, which becomes a nuisance once there are many services; then you would use Ingress instead. Its drawback is that it is more complicated to configure. Minikube comes with an Nginx-based Ingress controller; simply run "minikube addons enable ingress" to turn it on. But since Ingress setup is more complex, I do not use it here.
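As an aside on the in-cluster DNS mentioned above: another Pod in the same namespace could reach MySQL simply through the service name. A sketch (the application container, image, and variable names are hypothetical):

```yaml
# A hypothetical application container reaching MySQL through k8s internal
# DNS: "mysql-service" resolves to the service's cluster IP. The fully
# qualified name would be mysql-service.default.svc.cluster.local.
spec:
  containers:
    - name: my-app           # hypothetical application container
      image: my-app:latest   # hypothetical image
      env:
        - name: DB_HOST
          value: mysql-service   # the Service name defined earlier
        - name: DB_PORT
          value: "3306"
```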

Virtual machine network:

This is about the host (the laptop) and the virtual machine reaching each other, mainly the host reaching the virtual machine. I use Vagrant, so this is configured in the Vagrant configuration file (Vagrantfile). There are two methods:

  • Forwarded port: it forwards requests sent to a specific port on the laptop to a specified port on the virtual machine, which is quite easy. But if you do not know the port in advance, or the port changes, it becomes a nuisance. The Vagrant configuration line is: config.vm.network "forwarded_port", guest: 3306, host: 3306, auto_correct: true
  • Private network: this is a very flexible way. It gives the host and the virtual machine each a fixed IP address, so access works in both directions and any port can be used; the only drawback is that you have to decide the IP address in advance. For details, see "Vagrant Reverse Port Forwarding?". The Vagrant configuration line is: config.vm.network "private_network", ip: "192.168.50.4"
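Put together, the relevant part of the Vagrantfile would look roughly like this (a sketch; the box name is an assumption, use whatever your setup has):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"   # assumed box name

  # Method 1: forward laptop port 3306 to VM port 3306
  # (auto_correct picks another host port on collision)
  config.vm.network "forwarded_port", guest: 3306, host: 3306, auto_correct: true

  # Method 2: give the VM a fixed private IP, reachable from the host
  config.vm.network "private_network", ip: "192.168.50.4"
end
```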

When configuring the private network, you also need to configure a "Host-only Adapter" for the virtual machine in VirtualBox on the laptop, as shown in the figure below.

[Figure: VirtualBox "Host-only Adapter" network setting]

But this causes the following error when Vagrant starts the virtual machine: "VBoxManage.exe: error: Failed to create the host-only adapter". This is a VirtualBox bug; you can download a small program that fixes it, see here. The program is four years old, and I initially worried about whether it would be compatible with the current version of VirtualBox, but it works well; it runs standalone and does not conflict with the current software. Just run the patch as administrator before starting the virtual machine. Another problem: with VirtualBox 5.x I could only choose "NAT" in the dialog above, not "Host-only Adapter"; after upgrading to 6.x, "Host-only Adapter" became selectable. When the virtual machine restarts, the setting automatically changes back to "NAT", but the private network still works.

Creating a persistent volume (PersistentVolume):

The k8s volume concepts include volumes and persistent volumes.

Volume (volume):

A volume is the k8s storage concept. It is attached to a Pod and cannot exist on its own, but it does not live at the container layer: if a container restarts, the volume is still there; if the Pod restarts, the volume is lost. If a Pod has multiple containers, those containers share the Pod's volumes. You can think of a volume as a directory that can hold all kinds of files. k8s supports many volume types, for example the local file system and various kinds of cloud storage.
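The lifecycle just described can be seen with the simplest volume type, emptyDir; a minimal sketch (the Pod name, container names, and commands are made up for illustration):

```yaml
# Two containers in one Pod sharing an emptyDir volume. The volume's data
# survives a container restart, but is deleted when the Pod is removed.
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo   # hypothetical demo Pod
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/msg; sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 5; cat /data/msg; sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      emptyDir: {}   # lives as long as the Pod, not the containers
```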

Persistent volume (PersistentVolume):

A persistent volume is a wrapper around a volume, meant to manage volumes better. Its lifecycle does not have to be bound to a Pod; it can exist independently of Pods.

Persistent volume claim (PersistentVolumeClaim):

A persistent volume claim is a request for a persistent volume; it can request a specific storage capacity and access mode, for example read-write or read-only. k8s allocates a persistent volume according to the claim, and if no suitable one exists, the system creates one automatically. A persistent volume claim is an abstraction over persistent volumes, like an interface in programming, which can have different concrete implementations (persistent volumes). For example, the storage systems of Alibaba Cloud and Huawei Cloud are different, so the persistent volumes they produce are not the same; a persistent volume is tied to a specific storage implementation. If you want to migrate a program from Alibaba Cloud to Huawei Cloud, how do you keep the configuration files compatible? You use the persistent volume claim as the interface, specifying only the storage capacity and access mode, and let Alibaba Cloud and Huawei Cloud each automatically generate persistent volumes that satisfy the interface. There is one limiting condition, though: the StorageClass of the persistent volume and the claim need to match, which makes the interface less flexible. More on this later.

Dynamic persistent volumes:

In this case you only create the persistent volume claim (no separate persistent volume is needed) and bind the claim to the deployment; the system automatically creates the persistent volume according to the claim. Below is the persistent volume claim configuration file. "storage: 1Gi" means the requested capacity is 1 GB.

Persistent volume claim configuration file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi # the persistent volume's capacity is 1 GB
Mounting the persistent volume claim in the deployment:

Below is the deployment configuration file with the persistent volume claim mounted. It uses the claim as the persistent volume and binds it to the Pod. Please read the comments on the persistent volume parts of the file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql-con
          imagePullPolicy: Never
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_USER
              value: dbuser
            - name: MYSQL_PASSWORD
              value: dbuser
          args: ["--default-authentication-plugin=mysql_native_password"]
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts: # mount the Pod's volume into the container
            - name: mysql-persistent-storage # name of the volume on the Pod; matches the name under "volumes"
              mountPath: /var/lib/mysql # the directory mounted in the container
      volumes:   # mount the persistent volume onto the Pod
        - name: mysql-persistent-storage # the persistent volume's name; matches the name under "volumeMounts"
          persistentVolumeClaim: 
            claimName: mysql-pv-claim  # the persistent volume claim's name

Note that only the mount directory inside the Pod is specified here; no directory on the virtual machine (the host) is given. The system assigns the host directory automatically; how to find it is shown later.

Running the deployment:

Type "kubectl apply -f mysql-volume.yaml" to create the persistent volume claim. While the claim is created, the system automatically creates the persistent volume.

Check the persistent volume claim:

vagrant@ubuntu-xenial:~/dockerimages/kubernetes/mysql$ kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    pvc-ac6c88d5-ef5a-4a5c-b499-59715a2d60fa   1Gi        RWO            standard       10m

View the persistent volume claim details:

vagrant@ubuntu-xenial:/mnt$ kubectl describe pvc mysql-pv-claim
Name:          mysql-pv-claim
Namespace:     default
StorageClass:  standard
Status:        Bound
Volume:        pvc-ac6c88d5-ef5a-4a5c-b499-59715a2d60fa
Labels:        app=mysql
...

Show the persistent volumes:

vagrant@ubuntu-xenial:/mnt$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
pvc-ac6c88d5-ef5a-4a5c-b499-59715a2d60fa   1Gi        RWO            Delete           Bound    default/mysql-pv-claim   standard                24h

Type "kubectl describe pv pvc-ac6c88d5-ef5a-4a5c-b499-59715a2d60fa" to show the persistent volume's details. Here you can see the persistent volume's location on the virtual machine: "Path: /tmp/hostpath-provisioner/pvc-ac6c88d5-ef5a-4a5c-b499-59715a2d60fa".

vagrant@ubuntu-xenial:/mnt$ kubectl describe pv pvc-ac6c88d5-ef5a-4a5c-b499-59715a2d60fa
Name:            pvc-ac6c88d5-ef5a-4a5c-b499-59715a2d60fa
Labels:          <none>
Annotations:     hostPathProvisionerIdentity: 19948fdf-e67f-11e9-8fbd-026a5b40726f
                 pv.kubernetes.io/provisioned-by: k8s.io/minikube-hostpath
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    standard
Status:          Bound
Claim:           default/mysql-pv-claim
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/hostpath-provisioner/pvc-ac6c88d5-ef5a-4a5c-b499-59715a2d60fa
    HostPathType:
Events:            <none>

Check the MySQL data directory:

vagrant@ubuntu-xenial:/tmp/hostpath-provisioner/pvc-ac6c88d5-ef5a-4a5c-b499-59715a2d60fa$ ls -al
total 188488
drwxrwxrwx 6  999 docker     4096 Oct  4 13:23 .
drwxr-xr-x 3 root root       4096 Oct  4 12:58 ..
-rw-r----- 1  999 docker       56 Oct  4 12:58 auto.cnf
-rw------- 1  999 docker     1679 Oct  4 12:59 ca-key.pem
-rw-r--r-- 1  999 docker     1107 Oct  4 12:59 ca.pem
-rw-r--r-- 1  999 docker     1107 Oct  4 12:59 client-cert.pem
-rw------- 1  999 docker     1679 Oct  4 12:59 client-key.pem
-rw-r----- 1  999 docker      668 Oct  4 13:21 ib_buffer_pool
-rw-r----- 1  999 docker 79691776 Oct  4 13:23 ibdata1
-rw-r----- 1  999 docker 50331648 Oct  4 13:23 ib_logfile0
-rw-r----- 1  999 docker 50331648 Oct  4 12:58 ib_logfile1
-rw-r----- 1  999 docker 12582912 Oct  4 13:24 ibtmp1
drwxr-x--- 2  999 docker     4096 Oct  4 12:58 mysql
drwxr-x--- 2  999 docker     4096 Oct  4 12:58 performance_schema
-rw------- 1  999 docker     1679 Oct  4 12:59 private_key.pem
-rw-r--r-- 1  999 docker      451 Oct  4 12:59 public_key.pem
-rw-r--r-- 1  999 docker     1107 Oct  4 12:59 server-cert.pem
-rw------- 1  999 docker     1675 Oct  4 12:59 server-key.pem
drwxr-x--- 2  999 docker     4096 Oct  4 13:18 service_config
drwxr-x--- 2  999 docker    12288 Oct  4 12:58 sys

Persistent volume reclaim policies:

When the persistent volume claim and the persistent volume are deleted, there are three reclaim modes:

  • Retain: when the claim is deleted, the persistent volume still exists, and you can recover the data on it manually.
  • Delete: the persistent volume is removed together with the claim, and the data in the underlying storage is deleted. With dynamic persistent volumes the default mode is Delete; of course, you can change the reclaim mode after the persistent volume is created.
  • Recycle: this mode is deprecated; Retain is recommended instead.
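Changing an existing volume's reclaim mode can be done with a small patch like the following (a sketch; substitute your own PV name in the command):

```yaml
# patch-retain.yaml — switch a dynamically created PV from Delete to Retain.
# Inline form (with a hypothetical PV name):
#   kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
spec:
  persistentVolumeReclaimPolicy: Retain
```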

Static persistent volumes:

One problem with dynamic persistent volumes is that the default reclaim mode is "Delete", so when the virtual machine restarts, the persistent volume is deleted. When you re-run the deployment, k8s creates a brand-new MySQL, and everything stored in the previous MySQL is lost, which is not what we want. You can manually change the reclaim mode to "Retain", but you would still have to recover the data in the persistent volume manually.
A solution is to create the persistent volume on the host, so that even if the virtual machine restarts or breaks, the new data in MySQL is not lost. On a cloud there would be a dedicated storage layer; locally, there are basically three ways:

  • Local: mounts storage from the host into the k8s cluster. For details, see "Volumes".
  • HostPath: also mounts storage from the host into the k8s cluster, but it has many limitations; for example, it only supports a single node (Node) and only the "ReadWriteOnce" mode. For details, see "hostPath as volume in kubernetes".
  • NFS: Network File System. This is the most flexible option, but you need to install an NFS server. For details, see "Kubernetes Volumes Guide".

I chose the relatively simple "Local" mode. With it, you must create the persistent volume separately; you cannot just create the claim and let the system create the volume automatically.

Below is the configuration file for "Local" mode, with the persistent volume and the persistent volume claim written in one file. When using "Local" mode you need the "nodeAffinity" section; the "minikube" in "values: - minikube" is the name of the k8s cluster's Node. Minikube supports only one Node, which acts as both the Master Node and the Worker Node.

Persistent volume and claim configuration file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: standard # the persistent volume's storage class; it must match the claim's
  local:
    path: /home/vagrant/database/mysql # the directory on the host
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube # the Node's name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  # storageClassName:  # the storage class is commented out here
  resources:
    requests:
      storage: 1Gi # 1 GB

If you do not know the Node name, view it with the following command:

vagrant@ubuntu-xenial:/$ kubectl get node
NAME       STATUS   ROLES    AGE    VERSION
minikube   Ready    master   6d3h   v1.15.2

After switching to static persistent volumes, only the persistent volume configuration file changed; the deployment and service files stayed the same. Re-create the persistent volume and re-run the deployment; once this succeeds, the new data in MySQL survives even a restart of the virtual machine.

Note the usage of storageClassName. k8s requires that the storageClassName of the persistent volume and of the claim match before the volume is allocated to the claim. Our claim here does not specify a storageClassName, so the system uses the default storageClass.

Check whether a default storageClass is installed:

vagrant@ubuntu-xenial:/$ kubectl get sc
NAME                 PROVISIONER                AGE
standard (default)   k8s.io/minikube-hostpath   6d3h
vagrant@ubuntu-xenial:/$

View the default storageClass details:

vagrant@ubuntu-xenial:/$ kubectl describe sc
Name:                  standard
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           k8s.io/minikube-hostpath
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

As you can see, Minikube installed a default storageClass named "standard". The persistent volume claim above does not specify a storageClass, so the system matches it using the default one; the storageClassName of the persistent volume above happens to be "standard", so the two match. For details, see "Dynamic Provisioning and Storage Classes in Kubernetes".
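To make the match explicit rather than rely on the default, the commented-out storageClassName line in the claim could be filled in; this is equivalent to what the defaulting does on this cluster (a sketch):

```yaml
# Explicitly pin the claim to the same storage class as the PV,
# instead of relying on the cluster's default storageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # must equal the PV's storageClassName
  resources:
    requests:
      storage: 1Gi
```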

Pitfalls:

  1. Hyper-V vs. VirtualBox

VirtualBox and Hyper-V are not compatible; you can only use one of them (you can switch between the two, of course, but it is too much trouble). I installed VirtualBox on Windows, and it ran normally. Using Vagrant, I then installed an "ubuntu" Linux. But when starting Minikube with "minikube start --vm-driver=virtualbox", the system displayed "This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory". Following advice from the internet I tried to enable "VT-X/AMD-v" in the BIOS, but my BIOS has no such option, and the other methods I tried did not work either. Since VirtualBox was already installed, Hyper-V could not be used, so the only remaining option was "minikube start --vm-driver=none". Fortunately, this method works well.

With "minikube start --vm-driver=virtualbox", Minikube first builds a virtual machine and then runs inside it. With "minikube start --vm-driver=none", Minikube runs directly on the host. My version of Windows does not support k8s directly, so I run a Linux virtual machine on top of Windows, managed by Vagrant. With "--vm-driver=virtualbox" that would mean installing a virtual machine inside a virtual machine. With "--vm-driver=none", Minikube ostensibly runs on the host, but that "host" is in fact already the Linux virtual machine running on Windows.

  2. Logging in to the k8s cluster

When Minikube is started with "minikube start --vm-driver=none", you cannot use "minikube ssh" to log in to the k8s cluster; since there is no Minikube virtual machine and everything runs directly on the host, there is no need for "minikube ssh". You can still log in to a Pod, with a command such as:
"kubectl exec -ti mysql-deployment-56c9cf5857-fffth -- /bin/bash", where "mysql-deployment-56c9cf5857-fffth" is the Pod's name.

  3. Creating a PV or PVC with a duplicate name

When the original PV or PVC still exists and you create a new one with the same name, you get the following error:
The PersistentVolumeClaim "mysql-pv-claim" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
At that point, delete the original PV or PVC and create the new one again.

Please continue with the next article: "Master important k8s (Kubernetes) concepts by building MySQL (Part 2): Parameter configuration".

References:

  1. Grasp k8s (Kubernetes) core concepts with practical examples
  2. Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?
  3. Vagrant reverse port forwarding?
  4. hostPath as volume in kubernetes
  5. Volumes
  6. Kubernetes Volumes Guide
  7. Dynamic Provisioning and Storage Classes in Kubernetes

This article is published to multiple platforms via OpenWrite.


Origin www.cnblogs.com/code-craftsman/p/11661959.html