If we need a dedicated set of routers that expose routes only for internal applications, we can use route sharding (route partitioning). In OpenShift 4.3, route sharding has been enhanced to support namespace-based sharding in addition to sharding based on route labels.
Let's try it out in practice.
1. Create an internal router
First, label the nodes to form groups, such as infra and infra1.
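A node can be given the infra1 role with a standard label command; a minimal sketch, using one of the node names from the output below:

oc label node ip-10-0-152-254.us-east-2.compute.internal node-role.kubernetes.io/infra1=""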
[root@clientvm 0 ~]# oc get nodes
NAME                                         STATUS   ROLES           AGE   VERSION
ip-10-0-138-140.us-east-2.compute.internal   Ready    master          14d   v1.16.2
ip-10-0-141-38.us-east-2.compute.internal    Ready    infra,worker    14d   v1.16.2
ip-10-0-144-175.us-east-2.compute.internal   Ready    master          14d   v1.16.2
ip-10-0-152-254.us-east-2.compute.internal   Ready    infra1,worker   14d   v1.16.2
ip-10-0-165-83.us-east-2.compute.internal    Ready    infra,worker    14d   v1.16.2
ip-10-0-172-187.us-east-2.compute.internal   Ready    master          14d   v1.16.2
After installation, OpenShift 4 comes with a default Ingress Controller. You can view the default router with the following command:
[root@clientvm 0 ~]# oc get ingresscontroller -n openshift-ingress-operator default -o yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: "2020-02-17T14:05:36Z"
  finalizers:
  - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
  generation: 2
  name: default
  namespace: openshift-ingress-operator
  resourceVersion: "286852"
  selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
  uid: 91cb30a9-518e-11ea-9402-02390bbc2fc6
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
  replicas: 2
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: "2020-02-17T14:05:36Z"
    reason: Valid
    status: "True"
    type: Admitted
  - lastTransitionTime: "2020-02-18T03:20:38Z"
    status: "True"
    type: Available
  - lastTransitionTime: "2020-02-17T14:05:40Z"
    message: The endpoint publishing strategy supports a managed load balancer
    reason: WantedByEndpointPublishingStrategy
    status: "True"
    type: LoadBalancerManaged
  - lastTransitionTime: "2020-02-17T14:05:43Z"
    message: The LoadBalancer service is provisioned
    reason: LoadBalancerProvisioned
    status: "True"
    type: LoadBalancerReady
  - lastTransitionTime: "2020-02-17T14:05:40Z"
    message: DNS management is supported and zones are specified in the cluster DNS config.
    reason: Normal
    status: "True"
    type: DNSManaged
  - lastTransitionTime: "2020-02-17T14:05:47Z"
    message: The record is provisioned in all reported zones.
    reason: NoFailedZones
    status: "True"
    type: DNSReady
  - lastTransitionTime: "2020-02-18T03:20:38Z"
    status: "False"
    type: Degraded
  - lastTransitionTime: "2020-02-18T03:20:38Z"
    message: The deployment has Available status condition set to True
    reason: DeploymentAvailable
    status: "False"
    type: DeploymentDegraded
  domain: apps.cluster-6277.sandbox140.opentlc.com
  endpointPublishingStrategy:
    loadBalancer:
      scope: External
    type: LoadBalancerService
  observedGeneration: 2
  selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
  tlsProfile:
    ciphers:
    - TLS_AES_128_GCM_SHA256
    - TLS_AES_256_GCM_SHA384
    - TLS_CHACHA20_POLY1305_SHA256
    - ECDHE-ECDSA-AES128-GCM-SHA256
    - ECDHE-RSA-AES128-GCM-SHA256
    - ECDHE-ECDSA-AES256-GCM-SHA384
    - ECDHE-RSA-AES256-GCM-SHA384
    - ECDHE-ECDSA-CHACHA20-POLY1305
    - ECDHE-RSA-CHACHA20-POLY1305
    - DHE-RSA-AES128-GCM-SHA256
    - DHE-RSA-AES256-GCM-SHA384
    minTLSVersion: VersionTLS12
Now we create an internal router. Note the routeSelector, which admits only routes carrying the label type: internal:
[root@clientvm 0 ~]# cat router-internal.yaml
apiVersion: v1
items:
- apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: internal
    namespace: openshift-ingress-operator
  spec:
    replicas: 1
    domain: internalapps.cluster-6277.sandbox140.opentlc.com
    endpointPublishingStrategy:
      type: LoadBalancerService
    nodePlacement:
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/infra1: ""
    routeSelector:
      matchLabels:
        type: internal
  status: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
oc create -f router-internal.yaml
After creation, verify the result:
[root@clientvm 0 ~]# oc get ingresscontroller -n openshift-ingress-operator
NAME       AGE
default    14d
internal   23m
[root@clientvm 0 ~]# oc get svc -n openshift-ingress
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)                      AGE
router-default             LoadBalancer   172.30.147.15    a92dd1252518e11ea940202390bbc2fc-1093196650.us-east-2.elb.amazonaws.com   80:31681/TCP,443:31998/TCP   14d
router-internal            LoadBalancer   172.30.234.210   af3b1fc6df9f44e69b656426ba1497dc-1902918297.us-east-2.elb.amazonaws.com   80:31499/TCP,443:32125/TCP   23m
router-internal-default    ClusterIP      172.30.205.36    <none>                                                                     80/TCP,443/TCP,1936/TCP      14d
router-internal-internal   ClusterIP      172.30.187.205   <none>                                                                     80/TCP,443/TCP,1936/TCP      23m
It is worth noting that this cluster was built on the AWS public cloud, so the routers are exposed through a LoadBalancerService.
If you are building in your own on-premises environment, you should not need the endpointPublishingStrategy section shown above.
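For reference, a bare-metal cluster would typically use the HostNetwork publishing strategy instead; a minimal sketch of the relevant spec fragment, not applied in this lab:

# sketch only: a common bare-metal alternative, not used in this lab
endpointPublishingStrategy:
  type: HostNetwork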
View the router pods:
[root@clientvm 0 ~]# oc get pod -n openshift-ingress -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP            NODE                                         NOMINATED NODE   READINESS GATES
router-default-6784d69459-db5rt    1/1     Running   0          14d   10.129.2.15   ip-10-0-141-38.us-east-2.compute.internal    <none>           <none>
router-default-6784d69459-xrtgc    1/1     Running   0          14d   10.131.0.4    ip-10-0-165-83.us-east-2.compute.internal    <none>           <none>
router-internal-6c896bb666-mckr4   1/1     Running   0          26m   10.128.2.82   ip-10-0-152-254.us-east-2.compute.internal   <none>           <none>
2. Modify the application route
Note the host URL under the internalapps domain and the type: internal label specified on the route:
[root@clientvm 0 ~]# oc get route tomcat -o yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    openshift.io/host.generated: "true"
  creationTimestamp: "2020-03-03T08:08:18Z"
  labels:
    app: tomcat
    app.kubernetes.io/component: tomcat
    app.kubernetes.io/instance: tomcat
    app.kubernetes.io/name: ""
    app.kubernetes.io/part-of: tomcat-app
    app.openshift.io/runtime: ""
    type: internal
  name: tomcat
  namespace: myproject
  resourceVersion: "5811320"
  selfLink: /apis/route.openshift.io/v1/namespaces/myproject/routes/tomcat
  uid: a94f136b-3292-4d8d-981c-923bf5d8a3a0
spec:
  host: tomcat-myproject.internalapps.cluster-6277.sandbox140.opentlc.com
  port:
    targetPort: 8080-tcp
  to:
    kind: Service
    name: tomcat
    weight: 100
  wildcardPolicy: None
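For reference, the label and the host could also be set from the command line; a sketch using the names from this lab (note that changing the host of an existing route additionally requires the routes/custom-host permission):

oc label route tomcat type=internal -n myproject
oc patch route tomcat -n myproject --type=merge -p '{"spec":{"host":"tomcat-myproject.internalapps.cluster-6277.sandbox140.opentlc.com"}}'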
From the describe output, we can see that the route is exposed on both the default router and the internal router.
[root@clientvm 0 ~]# oc describe route tomcat
Name:                   tomcat
Namespace:              myproject
Created:                22 minutes ago
Labels:                 app=tomcat
                        app.kubernetes.io/component=tomcat
                        app.kubernetes.io/instance=tomcat
                        app.kubernetes.io/name=
                        app.kubernetes.io/part-of=tomcat-app
                        app.openshift.io/runtime=
                        type=internal
Annotations:            openshift.io/host.generated=true
Requested Host:         tomcat-myproject.internalapps.cluster-6277.sandbox140.opentlc.com
                          exposed on router default (host apps.cluster-6277.sandbox140.opentlc.com) 22 minutes ago
                          exposed on router internal (host internalapps.cluster-6277.sandbox140.opentlc.com) 20 minutes ago
Path:                   <none>
TLS Termination:        <none>
Insecure Policy:        <none>
Endpoint Port:          8080-tcp

Service:        tomcat
Weight:         100 (100%)
Endpoints:      10.128.2.76:8080
The route is also exposed on the default router because the default IngressController does not set a routeSelector. If you want the route exposed only on the internal router, you must modify the default IngressController and add a routeSelector label of its own. The side effect is that every route created from then on must carry a label so that it is matched by the intended group of routers.
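For example, the default router could be restricted with a merge patch like the following sketch; here type=external is a hypothetical label, not part of this lab:

# "type: external" is a hypothetical label chosen for illustration
oc patch ingresscontroller default -n openshift-ingress-operator --type=merge -p '{"spec":{"routeSelector":{"matchLabels":{"type":"external"}}}}'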
In a public cloud environment, we can now access http://tomcat-myproject.internalapps.cluster-6277.sandbox140.opentlc.com/ directly.
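A quick check from the command line:

curl -I http://tomcat-myproject.internalapps.cluster-6277.sandbox140.opentlc.com/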
3. Namespace-based route sharding
The sharding above is based on route labels. For namespace-based sharding, we continue by modifying the router-internal.yaml file, replacing the routeSelector with a namespaceSelector:
[root@clientvm 0 ~]# cat router-internal.yaml
apiVersion: v1
items:
- apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: internal
    namespace: openshift-ingress-operator
  spec:
    replicas: 1
    domain: internalapps.cluster-6277.sandbox140.opentlc.com
    endpointPublishingStrategy:
      type: LoadBalancerService
    namespaceSelector:
      matchLabels:
        environment: app
    nodePlacement:
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/infra1: ""
  status: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Then we label the project:
oc label ns myproject environment=app
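Verify that the label was applied:

oc get ns myproject --show-labels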
Modify the tomcat route to remove the type: internal label, then test again.
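The label can be removed with the trailing-dash syntax; a sketch:

oc label route tomcat type- -n myproject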
It is worth noting that if an IngressController sets both a namespaceSelector and a routeSelector, both conditions must be satisfied for a route to be admitted by that router.
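A sketch of a spec fragment combining both selectors; a route would then have to carry the type: internal label and live in a namespace labeled environment: app to be admitted:

# sketch: both selectors on one IngressController; both must match
spec:
  namespaceSelector:
    matchLabels:
      environment: app
  routeSelector:
    matchLabels:
      type: internal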