Cloud native Istio installation and use


1 Kubernetes cluster environment


Istio's control plane can be installed on different platforms, such as Kubernetes, Mesos, and plain virtual machines.
This course explains how to install Istio into a Kubernetes cluster (Istio 1.0.6 requires Kubernetes 1.11 or later).
You can build the Istio environment locally or on a public cloud, or directly use a public cloud's managed service that already integrates Istio.

Many tools can set up a Kubernetes cluster locally; for example, both Minikube and kubeadm can do it. Here kubeadm is used to install the Kubernetes cluster.
Kubeadm is a tool that provides the kubeadm init and kubeadm join commands as a best-practice fast path for creating a Kubernetes cluster.
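
For reference, a minimal sketch of those two commands as they would be used on these machines (the token and hash placeholders are printed by kubeadm init; the pod CIDR is just a commonly used value for Calico and is an assumption, not part of the original course):

# On the master (192.168.187.138): initialize the control plane
kubeadm init --kubernetes-version=1.14.0 \
  --apiserver-advertise-address=192.168.187.138 \
  --pod-network-cidr=192.168.0.0/16
# On the worker (192.168.187.137): join the cluster using the token and
# CA cert hash printed by the init command above (placeholders here)
kubeadm join 192.168.187.138:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>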

Prepare the machines

Two CentOS 7 virtual machines with the following addresses:

192.168.187.137

192.168.187.138

Prepare your own CentOS 7 virtual machines accordingly.

The machines must be able to ping each other, i.e. they must sit on the same network.

The Kubernetes documentation recommends a minimum of 2 CPU cores and 2 GB of RAM per machine (at least 2 cores and 3 GB are recommended here).

Docker environment

Install Docker on every machine; version 18.09.0 is used here.

# Check the Docker version
  docker --version

Modify the hosts file

(1) On 192.168.187.138 (the master role), open the hosts file

# Open the hosts file
vi /etc/hosts
# Map 192.168.187.138 to the master's hostname, represented by m
192.168.187.138 m
# Map 192.168.187.137 to the worker's hostname, represented by w1
192.168.187.137 w1

(2) On 192.168.187.137 (the worker role), open the hosts file

# Open the hosts file
vi /etc/hosts
# Map 192.168.187.138 to the master's hostname, represented by m
192.168.187.138 m
# Map 192.168.187.137 to the worker's hostname, represented by w1
192.168.187.137 w1

(3) Use ping to test

ping m

ping w1

kubeadm installation version

The Kubernetes version installed here is 1.14.0.
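
As a hedged sketch of pinning that version on CentOS 7 (assuming the Kubernetes yum repository has already been configured on each node):

# Install the pinned 1.14.0 packages on every node (master and worker)
yum install -y kubelet-1.14.0 kubeadm-1.14.0 kubectl-1.14.0
# Enable kubelet so it starts on boot; kubeadm manages its configuration
systemctl enable kubelet && systemctl start kubelet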

Kubernetes cluster network plugin: Calico

calico network plugin: https://docs.projectcalico.org/v3.9/getting-started/kubernetes/

Calico is also installed from the master node.

Calico provides secure network connectivity for containers and virtual machine workloads.
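
On the master, the plugin is typically applied with a single manifest; the exact URL below follows the v3.9 documentation linked above and may need adjusting, so treat it as an assumption:

# Apply the Calico manifest on the master node
kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml
# Watch the Calico pods come up in kube-system
kubectl get pods -n kube-system -w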

Verify the Kubernetes installation

1) Check the cluster information on the master node

Command: kubectl get nodes

2) Monitor the status of the w1 node: kubectl get nodes -w

Watch until the w1 node reaches the Ready state

3) Query pod command: kubectl get pods -n kube-system

Note: there are many ways to install a Kubernetes cluster; use whichever approach you are familiar with. The above simply describes the Kubernetes cluster environment used in this course.

2 Install Istio

Download and extract the installation package from the Istio release page https://github.com/istio/istio/releases/tag/1.0.6 (the relatively stable 1.0.6 release is used here and placed on the master; the Linux package istio-1.0.6-linux.tar.gz is taken as the example).

1. Extract the archive: tar -xzf istio-1.0.6-linux.tar.gz

2. Enter the Istio directory: cd istio-1.0.6/

Istio installation directory layout:

| File/Folder | Description |
| --- | --- |
| bin | Client tools for interacting with the Istio APIs |
| install | Istio installation scripts and files for the Consul and Kubernetes platforms; on Kubernetes these are split into YAML resource files and Helm installation files |
| istio.VERSION | Configuration file containing version-related environment variables |
| samples | Application examples used in the official documentation, such as bookinfo and helloworld; they help readers understand Istio's features and how to interact with its components |
| tools | Scripts and tools for performance testing and for testing on a local machine |

There are several ways to install Istio:

  • Use istio-demo.yaml in the install/kubernetes folder to install;
  • Use the Helm template to render the Istio YAML installation file for installation;
  • Use Helm and Tiller to install.

This course installs Istio using istio-demo.yaml in the install/kubernetes folder.

2.1 Rapid deployment of Istio

Introduction to Kubernetes CRDs

Resources such as Deployment and Service are types that Kubernetes supports out of the box. Beyond these, Kubernetes also supports extending its resource model; put simply, you can define your own resource types. Without CRD support, some of Istio's resource types cannot be created.

# Path of crds.yaml:
istio-1.0.6/install/kubernetes/helm/istio/templates/crds.yaml
# Apply it
kubectl apply -f crds.yaml
# Count the CRDs (CRDs are cluster-scoped, so the -n flag has no real effect here)
kubectl get crd -n istio-system | wc -l

The Kubernetes platform already has systematic support for the important building blocks of distributed service deployment, and its built-in resources cover most deployment and management needs. In specific business environments, however, there may be special requirements the platform does not cover; these can be modeled as Kubernetes extension resources, and Kubernetes' CRD (CustomResourceDefinition) mechanism provides a lightweight way to define them.
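
To make the mechanism concrete, here is a minimal, purely illustrative CRD (a hypothetical Demo type, not one of Istio's), using the apiextensions.k8s.io/v1beta1 API available on Kubernetes 1.14:

# Register a custom resource type called Demo (illustrative only)
cat <<EOF | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: demos.example.com   # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
EOF
# The new type can now be listed like any built-in resource
kubectl get demos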

Execute the installation command

(1) Create resources according to istio-1.0.6/install/kubernetes/istio-demo.yaml

kubectl apply -f istio-demo.yaml
# A large number of resources are created, all of them in the istio-system namespace

(2) View the core component resources

kubectl get pods -n istio-system 
kubectl get svc -n istio-system 


You can see that 3 of the pods are Completed and all of the others should be Running. Completed means the pod was created by a Kubernetes Job resource and its task has finished.

You can also see that components such as citadel, pilot and the sidecar injector are present, along with others like the ingress gateway and the monitoring add-ons.
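
If you want to confirm that the Completed pods really come from Jobs, the Job objects can be listed directly (the exact resource names vary by release):

# List the one-off Jobs created by istio-demo.yaml
kubectl get jobs -n istio-system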

2.2 Review of Kubernetes components and their use

A review of the Kubernetes components involved in this course.

2.2.1 Deployment

Once a Kubernetes cluster is running, you can deploy containerized applications on it. To do so, you create a Kubernetes Deployment configuration.
The Deployment tells Kubernetes how to create and update instances of your application.
After a Deployment is created, the Kubernetes master schedules the application instances onto the nodes in the cluster.

Create nginx_deployment.yaml file

apiVersion: apps/v1 ## API version
kind: Deployment ## the Kubernetes resource type is Deployment
metadata: ## the value of the metadata key is a map
  name: nginx-deployment ## resource name: nginx-deployment
  labels: ## labels attached to the newly created pods
    app: nginx ## one label with key=app, value=nginx
spec: # what follows is essentially the ReplicaSet configuration
  replicas: 3 ## 3 replicas, i.e. 3 pods
  selector: ## matches pods that carry the same label
    matchLabels: ## look for the matching label: key=app, value=nginx
      app: nginx
  template: # pod template
    metadata:
      labels: ## labels attached to the newly created pods
        app: nginx
    spec:
      containers:  ## container definitions
      - name: nginx ## container name
        image: nginx:1.7.9 ## image
        ports:
        - containerPort: 80 ## container port

(1) Execute resource file commands

kubectl apply -f nginx_deployment.yaml

(2) View pods

kubectl get pods
# View pod details
kubectl get pods -o wide

(3) View the deployment command

kubectl get deployment

(4) View deployment details command

kubectl get deployment -o wide

2.2.2 Labels and Selectors

As the name suggests, labels are used to tag resources.

labels

When there are many resources, you can use labels to classify resources

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
 
 # This defines a pod named nginx-pod with one label: key=app, value=nginx.
 # Pods that share the same label can be handed over to a selector to manage.

selectors

To operate on the Kubernetes resources that carry a given label, you use a selector: the selector matches resources by that specific label.

apiVersion: apps/v1
kind: Deployment
metadata: 
  name: nginx-deployment
  labels:  # defines a label with key=app, value=nginx
    app: nginx
spec:
  replicas: 3
  selector:             # the selector matches pods that carry the same label
    matchLabels:
      app: nginx         

Command to view a pod's labels:

kubectl get pods --show-labels
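
Labels become useful once you filter by them; for example, selecting only the pods that carry app=nginx from the Deployment above:

# -l (--selector) filters resources by label; only the nginx pods are returned
kubectl get pods -l app=nginx
# The same selector syntax works for other resource types as well
kubectl get deployment -l app=nginx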

2.2.3 Namespace

Namespaces isolate different resources, such as Pods, Services and Deployments. A namespace can be specified with `-n` when running a command; if none is specified, the default namespace is used: default.

List the current namespaces: kubectl get namespaces (or the short form kubectl get ns)

View the pods in the kube-system namespace: kubectl get pods -n kube-system

(1) Create your own namespace

my-namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
    name: myns

(2) Execute the command: kubectl apply -f my-namespace.yaml

(3) View commands

kubectl get ns

Delete a namespace

kubectl delete namespace <namespace-name>

Note:
Deleting a namespace automatically deletes all resources that belong to it.
The default and kube-system namespaces cannot be deleted.

2.2.4 Service

Access within the cluster (ClusterIP)

Pods can already talk to each other inside the cluster, but pods are not stable: when they are managed by a Deployment they may be scaled up or down at any time, and their IP addresses change. What is needed is a fixed IP reachable from inside the cluster. As described earlier in the architecture overview, pods that are identical or related can be labeled and grouped into a Service. The Service has a fixed IP, so no matter how pods are created and destroyed, they can always be reached through the Service's IP.

Kubernetes solves this with the Service: a Service gets an IP that does not change, and internally it load-balances traffic across the different pods that carry the matching label.

(1) Create whoami-deployment.yaml file

apiVersion: apps/v1 ## API version
kind: Deployment ## the resource type is Deployment
metadata: ## the value of the metadata key is a map
  name: whoami-deployment ## resource name
  labels: ## labels attached to the newly created pods
    app: whoami ## key=app, value=whoami
spec: ## describes the desired state of the object
  replicas: 3 ## 3 replicas, i.e. 3 pods
  selector: ## matches pods that carry the same label
    matchLabels: ## look for the matching label
      app: whoami
  template: ## the template is the definition of the Pod object
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami ## container name; the image follows below
        image: jwilder/whoami
        ports:
        - containerPort: 8000 ## container port

jwilder/whoami is an image that can be pulled from Docker Hub; it is a publicly available demo image.

(2) Execute the command

kubectl apply -f whoami-deployment.yaml

(3) View the details

kubectl get pods -o wide

(4) Normal access from inside the cluster

curl any of the pod IPs from the previous step on port 8000, for example: curl 192.168.221.80:8000, curl 192.168.14.6:8000 or curl 192.168.14.7:8000

(5) Test: delete one of the pods and check whether the regenerated pod's IP has changed

kubectl delete pod  whoami-deployment-678b64444d-jdv49

The IP address of the newly created replacement pod has changed.

(6) Service debut

Query the existing Service resources

kubectl get svc

(7) Create your own Service

Create it with: kubectl expose deployment <deployment-name>
For example: kubectl expose deployment whoami-deployment

(8) Query the Services again, and you will find a whoami-deployment Service with the IP 10.107.4.74

[root@m k8s]# kubectl get svc
NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
kubernetes          ClusterIP   10.96.0.1     <none>        443/TCP    2d12h
whoami-deployment   ClusterIP   10.107.4.74   <none>        8000/TCP   3s

(9) Access service: curl 10.107.4.74:8000

Try a few more times and you will see that the Service load-balances each request onto one of the pods.
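
A quick way to see this distribution is to hit the ClusterIP in a loop (10.107.4.74 is the IP from the output above and will differ in your cluster):

# Each response prints a different container hostname, showing that the
# Service load-balances requests across its backing pods
for i in $(seq 1 5); do curl -s 10.107.4.74:8000; done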

(10) View service

kubectl describe svc <service-name>
For example: kubectl describe svc whoami-deployment
[root@m k8s]# kubectl describe svc whoami-deployment
Name:              whoami-deployment
Namespace:         default
Labels:            app=whoami
Annotations:       <none>
Selector:          app=whoami
Type:              ClusterIP
IP:                10.107.4.74
Port:              <unset>  8000/TCP
TargetPort:        8000/TCP
Endpoints:         192.168.190.86:8000,192.168.190.87:8000,192.168.190.89:8000
Session Affinity:  None
Events:            <none>
# In short, the Service has Endpoints attached underneath it

(11) Scale the Deployment up to 5 replicas

kubectl scale deployment whoami-deployment --replicas=5

(12) Delete the Service

kubectl delete service <service-name>
kubectl delete service whoami-deployment

Summary: the Service exists precisely because pods are unstable, and what was discussed above is the ClusterIP type of Service.

Accessing pods in the cluster from outside (NodePort)

NodePort is also a type of Service; it exposes the Service through a port on the nodes.

Put bluntly, since the outside world can reach the IPs of the cluster's physical machines, NodePort opens the same port on every physical machine in the cluster, for example 32008.


Operations

(1) Delete the previous service first

kubectl delete svc whoami-deployment

(2) Check the command again

kubectl get svc
# You will find that whoami-deployment has been deleted

(3) View pod commands

kubectl get pods 

(4) Create a service of type NodePort

kubectl expose deployment whoami-deployment --type=NodePort

View: kubectl get svc


A NodePort is generated: port 8000 of the Service is mapped to port 31504 on the host.

Note that the port 31504 mentioned above is actually opened on every physical machine in the cluster.

lsof -i tcp:31504
netstat -ntlp|grep 31504

The browser accesses through the IP of the physical machine

http://192.168.187.137:31504/
curl 192.168.187.137:31504/

Although NodePort satisfies the need for external access to pods, it occupies a port on every physical host.

delete resources

kubectl delete -f whoami-deployment.yaml
kubectl delete svc whoami-deployment

2.2.5 Ingress

We saw earlier that external access to pods can be achieved with a NodePort Service, but it occupies a port on every physical host, so it is not an ideal approach.

delete resources

# Delete the pods
kubectl delete -f whoami-deployment.yaml
# Delete the service
kubectl delete svc whoami-deployment

Next, to satisfy the requirement of accessing the cluster from outside, use an Ingress to expose whoami.
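
One assumption worth stating explicitly: an Ingress resource is only a set of rules and has no effect unless an Ingress controller (for example ingress-nginx) is already running in the cluster. A hedged sketch of installing one, with a manifest URL from the ingress-nginx releases of that era that may need adjusting for your cluster:

# Deploy the nginx ingress controller (URL is an assumption; check the
# ingress-nginx project for the manifest matching your cluster version)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.1/deploy/static/mandatory.yaml
# Verify the controller pod is running
kubectl get pods -n ingress-nginx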

(1) Create whoami-service.yaml file

Create pods and services

apiVersion: apps/v1 ## API version
kind: Deployment ## the resource type is Deployment
metadata: ## the value of the metadata key is a map
  name: whoami-deployment ## resource name
  labels: ## labels attached to the newly created pods
    app: whoami ## key=app, value=whoami
spec: ## describes the desired state of the object
  replicas: 3 ## 3 replicas, i.e. 3 pods
  selector: ## matches pods that carry the same label
    matchLabels: ## look for the matching label
      app: whoami
  template: ## the template is the definition of the Pod object
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami ## container name; the image follows below
        image: jwilder/whoami
        ports:
        - containerPort: 8000 ## container port
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8000
  selector:
    app: whoami

(2) Execute resource commands

kubectl apply -f whoami-service.yaml

(3) Create whoami-ingress.yaml file

apiVersion: extensions/v1beta1
kind: Ingress # resource type
metadata:
  name: whoami-ingress # resource name
spec:
  rules: # routing rules
  - host: whoami.qy.com  # the host name used to access the service
    http:
      paths:
      - path: / # path rule; / matches every path
        backend:
          serviceName: whoami-service  # forward requests to this Service (the one created above)
          servicePort: 80 # the Service's port

(4) Execute the command:

kubectl apply -f  whoami-ingress.yaml

(5) View ingress resources:

kubectl get ingress

(6) View ingress resource details:

kubectl describe ingress whoami-ingress

(7) Modify the Windows hosts file to add a DNS entry

192.168.187.137 whoami.qy.com

(8) Open the browser and visit whoami.qy.com

Process summary


The browser sends a request to the Ingress, which forwards it to the corresponding Service according to the configured rules. Since the Service selects the pods, the request is finally delivered to the application running in a pod.

Summary

Forwarding requests through an Ingress is more flexible and does not occupy ports on the physical machines, so this is the recommended way to route external requests into the cluster.

2.3 Preliminary experience with Istio

In Docker, services are deployed as containers; in Kubernetes, services are deployed as pods. So where does Istio's sidecar show up?

Conjecture: besides the container the business requires, will there be an additional sidecar container in the pod?

Verify the conjecture

(1) Prepare a resource first-istio.yaml

apiVersion: apps/v1 ## API version
kind: Deployment ## the resource type is Deployment
metadata:
  name: first-istio
spec:
  selector:
    matchLabels:
      app: first-istio
  replicas: 1
  template:
    metadata:
      labels:
        app: first-istio
    spec:
      containers:
      - name: first-istio ## container name; the image follows below
        image: registry.cn-hangzhou.aliyuncs.com/sixupiaofei/spring-docker-demo:1.0
        ports:
        - containerPort: 8080 ## container port
---
apiVersion: v1
kind: Service ## the resource type is Service
metadata:
  name: first-istio ## resource name: first-istio
spec:
  ports:
  - port: 80 ## expose port 80
    protocol: TCP ## TCP protocol
    targetPort: 8080 ## forward to container port 8080
  selector:
    app: first-istio ## match the label, i.e. find the right pods
  type: ClusterIP ## Service type: ClusterIP

Create a folder named istio and put first-istio.yaml in it. If the resource is created in the normal way, the pod will contain only its own application container and no sidecar.

# Apply it; you will find that only one container is running
kubectl apply -f first-istio.yaml
# View the first-istio Service
kubectl get svc
# View the pod's detailed information
kubectl describe pod first-istio-8655f4dcc6-dpkzh
# Delete the resources
kubectl delete -f first-istio.yaml

View pod commands

kubectl get pods


Thinking: how do we get a sidecar added to the pod?

There are two ways: manual injection and automatic injection.

2.4 Manual injection

(1) Delete the above resources, recreate them, and use manual sidecar injection

istioctl kube-inject -f first-istio.yaml | kubectl apply -f -

**Note:** the istioctl command requires the Istio bin directory to be added to PATH in /etc/profile first

  • vim /etc/profile

  • Add the Istio installation directory configuration

export ISTIO_HOME=/home/tools/istio-1.0.6
export PATH=$PATH:$ISTIO_HOME/bin
  • Reload the profile file
source /etc/profile
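
After reloading the profile, a quick sanity check that the client is on the PATH and can reach the cluster:

# Should print the istioctl client version and, once installed, the
# istio-system control-plane version (1.0.6 in this setup)
istioctl version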

(2) View the number of pods

kubectl get pods # note the number of containers in the pod; you will find it has changed to 2


(3) View service

kubectl get svc

Think about it:

There is only one container in the YAML file, so why are there two after applying it?

The guess is that the extra one is the sidecar, so describe the pod and see what the two containers are.

# View the pod's details
kubectl describe pod first-istio-75d4dfcbff-qhmxj


Besides the application container there is an additional proxy container; a bold guess is that this proxy is the sidecar.

Scrolling further up in the describe output gives the answer we need: the extra container is the injected sidecar proxy.

View the content of the pod's YAML:

kubectl get pod first-istio-75d4dfcbff-qhmxj -o yaml

Summary

This YAML is no longer the original one we wrote: it also defines a proxy container whose image was prepared in advance. In other words, Istio injects the proxy by rewriting the YAML.
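
Instead of scrolling through the full YAML, the container names can also be pulled out directly with jsonpath (the pod name below is from this run and will differ in yours):

# Prints the containers in the injected pod, e.g. "first-istio istio-proxy"
kubectl get pod first-istio-75d4dfcbff-qhmxj -o jsonpath='{.spec.containers[*].name}'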

(4) Delete resources

istioctl kube-inject -f first-istio.yaml | kubectl delete -f -

**Thinking:** do I have to type such a long command every time I want a sidecar injected? Is there a simpler way to create one directly?

2.5 Automatic sidecar injection

Automatic injection is tied to a namespace, so first create a namespace and enable automatic injection on it; any resource subsequently created in that namespace will then have the sidecar injected automatically.

(1) Create a namespace

kubectl create namespace my-istio-ns

(2) Turn on automatic injection for the namespace

kubectl label namespace my-istio-ns istio-injection=enabled
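
You can verify which namespaces have injection enabled by showing the label as a column:

# The ISTIO-INJECTION column shows "enabled" for my-istio-ns
kubectl get namespace -L istio-injection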

(3) Create a resource and specify a namespace

# Check whether any resources already exist in the my-istio-ns namespace
kubectl get pods -n my-istio-ns
# Create the resources in the my-istio-ns namespace
kubectl apply -f first-istio.yaml -n my-istio-ns

(4) View resources

kubectl get pods -n my-istio-ns

(5) View resource details

kubectl describe pod <pod-name> -n my-istio-ns


Again, there is an extra proxy container besides the application container.

(6) View service

kubectl get svc -n my-istio-ns

(7) Delete resources

kubectl delete -f first-istio.yaml -n my-istio-ns

By now it should be clear how Istio injects the sidecar.

Sidecar injection summary:

Whether injection is manual or automatic, the principle is the same: a proxy container is added to the YAML, and that proxy container is the sidecar. Automatic injection is the recommended way to do this.

Origin blog.csdn.net/ZGL_cyy/article/details/130470528