OpenShift 4 Istio-Tutorial (2): Deploying the Three Microservices

This OpenShift Service Mesh tutorial series is based on Red Hat's publicly available book Introducing Istio Service Mesh for Microservices, and I have verified all of the steps in an OpenShift 4.2.x environment. If you are comfortable reading English or want more background on the scenarios involved, you can download the book via the link above and read it at your own pace.

The scenario in this series uses three Java-based microservices: Customer, Preference, and Recommendation. Their call chain is Customer ⇒ Preference ⇒ Recommendation, and Recommendation exists in multiple versions. This chapter deploys the three microservices and makes them reachable from outside the cluster.

  1. First, clone the code used in this tutorial to your local machine. Since I have modified part of the code, it is recommended not to use the upstream project.
$ git clone https://github.com/liuxiaoyu-git/istio-tutorial.git
$ cd istio-tutorial
  2. Next, create an OpenShift project named tutorial and add the privileged SCC to the project's default Service Account.
$ oc new-project tutorial
$ oc adm policy add-scc-to-user privileged -z default -n tutorial
  3. Taking the Customer microservice as an example, look at the customer/kubernetes/Deployment.yml file. It defines how the Customer microservice is deployed: the container image is “quay.io/rhdevelopers/istio-tutorial-customer:v1.1”, and the annotation “sidecar.istio.io/inject” is set to “true” so that the Istio sidecar is injected into the microservice's Pod automatically.
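     A trimmed sketch of the fields referred to above is shown here. It is only a sketch based on the image, labels, annotation, and port described in this article; the actual file in the repository also defines probes, resource limits, and environment variables.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customer
      version: v1
  template:
    metadata:
      labels:
        app: customer
        version: v1
      annotations:
        sidecar.istio.io/inject: "true"   # have Istio inject the Envoy sidecar automatically
    spec:
      containers:
      - name: customer
        image: quay.io/rhdevelopers/istio-tutorial-customer:v1.1
        ports:
        - containerPort: 8080             # the application port of the microservice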
  4. Run the following commands to deploy the Customer, Preference, and Recommendation microservices and create their corresponding Services.
$ oc apply -f customer/kubernetes/Deployment.yml -n tutorial
$ oc apply -f customer/kubernetes/Service.yml 
$ oc apply -f preference/kubernetes/Deployment.yml -n tutorial
$ oc apply -f preference/kubernetes/Service.yml 
$ oc apply -f recommendation/kubernetes/Deployment.yml -n tutorial
$ oc apply -f recommendation/kubernetes/Service.yml 
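     Before moving on, it does no harm to confirm that the three Deployments and Services now exist in the project:
$ oc get deployment -n tutorial
$ oc get svc -n tutorial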
  5. Check the status of the Pods running the microservices. Once the deployment completes, three Pods should be running in the tutorial project, and each Pod contains two containers: one runs the microservice and the other runs the Istio sidecar.
$ oc get pod -n tutorial
NAME                                 READY   STATUS    RESTARTS   AGE
customer-77dc47d7f8-szhd5            2/2     Running   0          32h
preference-v1-55476494cf-xm4dq       2/2     Running   0          32h
recommendation-v1-67976848-4l4s7     2/2     Running   0          32h

Note: If each Pod shows only one container at this point, it is usually because the current project "tutorial" has not been added to the members list of the ServiceMeshMemberRoll managed by the OpenShift Service Mesh Operator.
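You can check the member roll and, if needed, add the project with something like the following. This is only a sketch: it assumes the control plane project is istio-system and the ServiceMeshMemberRoll is named default, which are the defaults used in this series.
# list the projects currently enrolled in the mesh
$ oc get servicemeshmemberroll default -n istio-system -o jsonpath='{.spec.members}'
# add tutorial to the member roll (this replaces the whole members list, so include any other projects you need)
$ oc patch servicemeshmemberroll default -n istio-system --type merge -p '{"spec":{"members":["tutorial"]}}'
# existing Pods must be recreated before the sidecar shows up
$ oc delete pod --all -n tutorial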
  6. Run the following commands to see which containers run inside the Customer Pod: the customer container runs the microservice itself, while the istio-proxy container runs the Istio sidecar.

$ oc get pods -o jsonpath="{.items[*].spec.containers[*].name}" -l app=customer
customer istio-proxy
$ oc describe pod customer-77dc47d7f8-hbxcn 
...
Containers:
  customer:
    Container ID:   cri-o://bb459fef3e4080f703d83c61ff88c56c2ee2c5c424bab6071e2cd0f3a149b7a6
    Image:          quay.io/rhdevelopers/istio-tutorial-customer:v1.1
    Image ID:       quay.io/rhdevelopers/istio-tutorial-customer@sha256:d1b0054dc21406b6b5fc172e8ffd35cc4f447550e26cbafdc8f6a1f7d9184661
    Ports:          8080/TCP, 8778/TCP, 9779/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP
    State:          Running
      Started:      Sun, 12 Jan 2020 18:36:23 +0800
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Sun, 12 Jan 2020 14:32:19 +0800
      Finished:     Sun, 12 Jan 2020 18:36:22 +0800
    Ready:          True
    Restart Count:  2
    Limits:
      cpu:     500m
      memory:  40Mi
    Requests:
      cpu:      200m
      memory:   20Mi
    Liveness:   exec [curl localhost:8080/health/live] delay=5s timeout=1s period=4s #success=1 #failure=3
    Readiness:  exec [curl localhost:8080/health/ready] delay=6s timeout=1s period=5s #success=1 #failure=3
    Environment:
      JAVA_OPTIONS:  -Xms15m -Xmx15m -Xmn15m
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qt9vl (ro)
  istio-proxy:
    Container ID:  cri-o://41803682d3d2d6828e4077a3e6e3e338d886025dfa030fc7d7f02229cca88ad6
    Image:         registry.redhat.io/openshift-service-mesh/proxyv2-rhel8:1.0.3
    Image ID:      registry.redhat.io/openshift-service-mesh/proxyv2-rhel8@sha256:7f01dec612f36a48cd548a81f8f47a54b9f1b1c76366e40aefb56abe39cf167e
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --configPath
      /etc/istio/proxy
      --binaryPath
      /usr/local/bin/envoy
      --serviceCluster
      customer.$(POD_NAMESPACE)
      --drainDuration
      45s
      --parentShutdownDuration
      1m0s
      --discoveryAddress
      istio-pilot.istio-system:15010
      --zipkinAddress
      zipkin.istio-system:9411
      --connectTimeout
      10s
      --proxyAdminPort
      15000
      --concurrency
      2
      --controlPlaneAuthPolicy
      NONE
      --statusPort
      15020
      --applicationPorts
      8080,8778,9779
    State:          Running
      Started:      Sun, 12 Jan 2020 14:25:44 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:      100m
      memory:   128Mi
    Readiness:  http-get http://:15020/healthz/ready delay=1s timeout=1s period=2s #success=1 #failure=30
    Environment:
      POD_NAME:                      customer-77dc47d7f8-hbxcn (v1:metadata.name)
      POD_NAMESPACE:                 tutorial (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      ISTIO_META_POD_NAME:           customer-77dc47d7f8-hbxcn (v1:metadata.name)
      ISTIO_META_CONFIG_NAMESPACE:   tutorial (v1:metadata.namespace)
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_METAJSON_ANNOTATIONS:    {"openshift.io/scc":"restricted","sidecar.istio.io/inject":"true"}
      ISTIO_METAJSON_LABELS:         {"app":"customer","pod-template-hash":"77dc47d7f8","version":"v1"}
    Mounts:
      /etc/certs/ from istio-certs (ro)
      /etc/istio/proxy from istio-envoy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qt9vl (ro)
...
  7. To access the Customer microservice from outside the cluster, you also need to create Gateway and VirtualService objects. Look at the customer/kubernetes/Gateway.yml file, which defines a Gateway (gw) and a VirtualService (vs): the VirtualService named customer-vs references the Gateway named customer-gw. The customer-gw Gateway listens on port 80, and when the customer-vs VirtualService receives a request whose path matches its URI rule, it rewrites the path to "/" and routes the request to the Service named customer, which listens on port 8080.
    The VirtualService object:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService                    
metadata:                               
  name: customer-vs     # VirtualService name                  
spec:                                   
  hosts:                                
  - "*"                                 
  gateways:                             
  - customer-gw        # gateway name           
  http:                                 
  - match:                              
    - uri:                              
        exact: /customer-ms                        
    rewrite:
      uri: /                # rewrite the matched URI to "/" before forwarding
    route:                              
    - destination:                      
        host: customer        # destination Service; either the short name or the fully qualified Service name can be used
        port:                           
          number: 8080        # service port          

The Gateway object:

apiVersion: networking.istio.io/v1alpha3                 
kind: Gateway                                            
metadata:                                                
  name: customer-gw                                
spec:                                                    
  selector:                                              
    istio: ingressgateway # the istio=ingressgateway label selects the ingress gateway deployed by the ServiceMeshControlPlane
  servers:                                               
  - port:                                                
      number: 80                                         
      name: http                                         
      protocol: HTTP                                     
    hosts:                                               
    - "*"                                                
  8. Run the following command to create the Gateway and VirtualService objects, then check their status.
    Note: "istio-io" refers to all of the Istio networking-related objects, including Gateway, VirtualService, and DestinationRule.
$ oc apply -f customer/kubernetes/Gateway.yml -n tutorial
virtualservice.networking.istio.io/customer-vs created
gateway.networking.istio.io/customer-gw created
$ oc get istio-io
NAME                                             GATEWAYS        HOSTS   AGE
virtualservice.networking.istio.io/customer-vs   [customer-gw]   [*]     82
NAME                                      AGE
gateway.networking.istio.io/customer-gw   93m
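If you want to see exactly which resource kinds the networking API group provides on your cluster, you can also list them directly; the output depends on the Service Mesh version installed:
$ oc api-resources --api-group=networking.istio.io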
  9. Send requests through the Gateway entry point, which is bound to the route named istio-ingressgateway. The responses show that the customer microservice calls preference and then recommendation in turn. The call counter increases with every request, and "67976848-4l4s7" is part of the name of the Pod running the recommendation microservice.
$ export INGRESS_GATEWAY=$(oc get route istio-ingressgateway -n istio-system -o 'jsonpath={.spec.host}')
$ ./scripts/run.sh $INGRESS_GATEWAY/customer
customer => preference => recommendation v1 from '67976848-4l4s7': 1
customer => preference => recommendation v1 from '67976848-4l4s7': 2
customer => preference => recommendation v1 from '67976848-4l4s7': 3
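The helper script is essentially a loop that keeps curling the URL it is given, so if you prefer not to use it, a plain shell loop (a rough equivalent, not the script itself) produces the same kind of output:
$ while true; do curl $INGRESS_GATEWAY/customer; sleep 1; done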

With that, the three microservices are deployed in the OpenShift 4 Service Mesh environment and can already be reached from outside the cluster.
