1 Kubectl
Kubectl is the command-line interface for running commands against a Kubernetes cluster; it talks to the cluster's API server.
Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/
Basic Commands (Beginner):
  create         Create a resource from a file or from stdin.
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image on the cluster
  set            Set specific features on objects

Basic Commands (Intermediate):
  explain        Documentation of resources
  get            Display one or many resources
  edit           Edit a resource on the server
  delete         Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout        Manage the rollout of a resource
  scale          Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
  certificate    Modify certificate resources.
  cluster-info   Display cluster info
  top            Display Resource (CPU/Memory/Storage) usage.
  cordon         Mark node as unschedulable
  uncordon       Mark node as schedulable
  drain          Drain node in preparation for maintenance
  taint          Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe       Show details of a specific resource or group of resources
  logs           Print the logs for a container in a pod
  attach         Attach to a running container
  exec           Execute a command in a container
  port-forward   Forward one or more local ports to a pod
  proxy          Run a proxy to the Kubernetes API server
  cp             Copy files and directories to and from containers.
  auth           Inspect authorization

Advanced Commands:
  diff           Diff live version against would-be applied version
  apply          Apply a configuration to a resource by filename or stdin
  patch          Update field(s) of a resource using strategic merge patch
  replace        Replace a resource by filename or stdin
  wait           Experimental: Wait for a specific condition on one or many resources.
  convert        Convert config files between different API versions

Settings Commands:
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  completion     Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-resources  Print the supported API resources on the server
  api-versions   Print the supported API versions on the server, in the form of "group/version"
  config         Modify kubeconfig files
  plugin         Provides utilities for interacting with plugins.
  version        Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
Get Node Info:
kubectl describe node k8smaster
[root@k8smaster ~]# kubectl describe node k8smaster
Name:               k8smaster
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8smaster
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"76:80:68:34:94:6c"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 172.16.0.11
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 02 Jan 2019 13:30:57 +0100
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:30:51 +0100   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:30:51 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:30:51 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:30:51 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:57:40 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.16.0.11
  Hostname:    k8smaster
Capacity:
 cpu:                2
 ephemeral-storage:  17394Mi
 hugepages-2Mi:      0
 memory:             3861508Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  16415037823
 hugepages-2Mi:      0
 memory:             3759108Ki
 pods:               110
System Info:
 Machine ID:                 8d2b3fec09894a6eb6e69d45ce7a9996
 System UUID:                34014D56-A1A0-0F33-A35B-56A3947191DF
 Boot ID:                    e0019567-d852-4000-991c-51e2a1061863
 Kernel Version:             3.10.0-957.1.3.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.9.0
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
PodCIDR:                     10.244.0.0/24
Non-terminated Pods:         (8 in total)
  Namespace    Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                                 ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-78fcdf6894-5v9g9             100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     23h
  kube-system  coredns-78fcdf6894-lpwfw             100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     23h
  kube-system  etcd-k8smaster                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22h
  kube-system  kube-apiserver-k8smaster             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22h
  kube-system  kube-controller-manager-k8smaster    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22h
  kube-system  kube-flannel-ds-amd64-n5j7l          100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      22h
  kube-system  kube-proxy-rjssr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23h
  kube-system  kube-scheduler-k8smaster             100m (5%)    0 (0%)       0 (0%)           0 (0%)         22h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (42%)  100m (5%)
  memory             190Mi (5%)  390Mi (10%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From                   Message
  ----    ------                   ---                    ----                   -------
  Normal  Starting                 4m58s                  kubelet, k8smaster     Starting kubelet.
  Normal  NodeAllocatableEnforced  4m58s                  kubelet, k8smaster     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     4m57s (x5 over 4m58s)  kubelet, k8smaster     Node k8smaster status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientDisk    4m55s (x6 over 4m58s)  kubelet, k8smaster     Node k8smaster status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  4m55s (x6 over 4m58s)  kubelet, k8smaster     Node k8smaster status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m55s (x6 over 4m58s)  kubelet, k8smaster     Node k8smaster status is now: NodeHasNoDiskPressure
  Normal  Starting                 4m21s                  kube-proxy, k8smaster  Starting kube-proxy.
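A couple of other standard ways to look at node state (plain commands, output not recorded here):
kubectl get nodes -o wide
kubectl get node k8smaster -o yaml    # full node object, including spec.podCIDR and status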
2 Deploy a new Pod / Create a Deployment object
Deploy a new pod that can be accessed inside the cluster. The image is pulled from Docker Hub; you could also pull it from a local Docker registry.
###################################################################
# Create and run a particular image, possibly replicated.         #
# Creates a deployment or job to manage the created container(s). #
###################################################################
You need a controller object such as a Deployment (or a ReplicationController/ReplicaSet) that keeps the desired number of replicas (pods) alive.
kubectl run nginx --image=nginx:1.14-alpine --port 80 --replicas=5
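Note: on newer Kubernetes versions "kubectl run" no longer creates a Deployment and the --replicas flag was removed; a rough equivalent of the command above (a sketch, not used in this walkthrough) would be:
kubectl create deployment nginx --image=nginx:1.14-alpine
kubectl scale deployment nginx --replicas=5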
[root@k8smaster ~]# kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     5         5         5            5           2h
[root@k8smaster ~]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-79976cbb47-sg2t9   1/1       Running   1          2h        10.244.1.4   k8snode1
nginx-79976cbb47-tl5r7   1/1       Running   1          2h        10.244.2.7   k8snode2
nginx-79976cbb47-vkzww   1/1       Running   1          2h        10.244.2.6   k8snode2
nginx-79976cbb47-wvvtq   1/1       Running   1          2h        10.244.1.5   k8snode1
nginx-79976cbb47-x4wjt   1/1       Running   1          2h        10.244.2.5   k8snode2
[root@k8smaster ~]# curl 10.244.1.4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
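You can also reach one pod from inside another pod with kubectl exec (a sketch; the pod names and IP come from the listing above, and the alpine-based image ships busybox wget):
kubectl exec nginx-79976cbb47-sg2t9 -- wget -qO- http://10.244.1.5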
Use ifconfig on the master to check the pod network interfaces (cni0, flannel.1) and their addresses:
[root@k8smaster ~]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::3cd7:71ff:fee7:b4d  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:00:01  txqueuelen 1000  (Ethernet)
        RX packets 2778  bytes 180669 (176.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2841  bytes 1052175 (1.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:8b:97:c3:4b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.11  netmask 255.255.255.0  broadcast 172.16.0.255
        inet6 fe80::20c:29ff:fe71:91df  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:71:91:df  txqueuelen 1000  (Ethernet)
        RX packets 6403  bytes 688725 (672.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7824  bytes 7876155 (7.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::7480:68ff:fe34:946c  prefixlen 64  scopeid 0x20<link>
        ether 76:80:68:34:94:6c  txqueuelen 0  (Ethernet)
        RX packets 5  bytes 1118 (1.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 446 (446.0 B)
        TX errors 0  dropped 8  overruns 0  carrier 0  collisions 0
...
We can see that each node gets a /24 slice of the 10.244.0.0/16 pod network (10.244.0.0/24 on the master, 10.244.1.0/24 on k8snode1, 10.244.2.0/24 on k8snode2), and pod IPs are assigned from those ranges.
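To confirm which /24 a given node was assigned, you can read its podCIDR straight from the node object (standard kubectl; the node name is from this cluster):
kubectl get node k8snode1 -o jsonpath='{.spec.podCIDR}'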
3 Expose a new Service for internal usage / Create a Service object
You need the Service object because the pods created by the Deployment can be killed, rescheduled, and scaled up and down, so you cannot rely on their IP addresses; they are not persistent. A Service gives those pods a single, stable virtual IP.
kubectl expose assigns a stable IP to the Deployment "nginx" (replicas=5), which was created with "kubectl run nginx" above.
Possible resources include (case insensitive):
  pod (po), service (svc), replicationcontroller (rc), deployment (deploy), replicaset (rs)

Examples:
  # Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000.
  kubectl expose rc nginx --port=80 --target-port=8000
--port -> the Service port, i.e. the port clients connect to
--target-port -> the container port the traffic is forwarded to
Now we simulate creating a new service named "nginx-service" by adding the flag "--dry-run=true":
kubectl expose deployment nginx --name=nginx-service --port=80 --target-port=80 --protocol=TCP --dry-run=true
deployment nginx -> this Deployment already exists; it was created with the "kubectl run" command above (kubectl run nginx --image=nginx:1.14-alpine --port 80 --replicas=5), and the image is already cached in the local Docker registry on the nodes.
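To see what the dry run would generate without creating anything, add -o yaml to the same command:
kubectl expose deployment nginx --name=nginx-service --port=80 --target-port=80 --protocol=TCP --dry-run=true -o yaml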
If the referenced Deployment does not exist yet (here we try "nginx-deploy"), kubectl shows the error message below, because Kubernetes does not know any Deployment called nginx-deploy:
[root@k8smaster ~]# kubectl expose deployment nginx-deploy --name=mynginx --port=80 --target-port=80 --protocol=TCP --dry-run=true
Error from server (NotFound): deployments.extensions "nginx-deploy" not found
But the nginx:1.14-alpine image is already there -> you can verify it on node1 and node2 with "docker images", and the Deployment with "kubectl get deployment":
[root@k8snode1 ~]# docker images
REPOSITORY                    TAG             IMAGE ID       CREATED         SIZE
nginx                         1.14-alpine     c5b6f731fbc0   13 days ago     17.7MB
k8s.gcr.io/kube-proxy-amd64   v1.11.1         d5c25579d0ff   5 months ago    97.8MB
quay.io/coreos/flannel        v0.10.0-amd64   f0fad859c909   11 months ago   44.6MB
k8s.gcr.io/pause              3.1             da86e6ba6ca1   12 months ago   742kB
Now we create a real k8s service for deployment "nginx":
[root@k8smaster ~]# kubectl get deployment -o wide --show-labels
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES              SELECTOR    LABELS
nginx     5         5         5            5           1m        nginx        nginx:1.14-alpine   run=nginx   run=nginx
[root@k8smaster ~]# kubectl get pods -o wide --show-labels
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE       LABELS
nginx-79976cbb47-2xrhk   1/1       Running   0          1m        10.244.1.7    k8snode1   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-8dqnk   1/1       Running   0          1m        10.244.2.10   k8snode2   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-gprlc   1/1       Running   0          1m        10.244.2.9    k8snode2   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-p247g   1/1       Running   0          1m        10.244.2.8    k8snode2   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-ppbqv   1/1       Running   0          1m        10.244.1.6    k8snode1   pod-template-hash=3553276603,run=nginx
[root@k8smaster ~]# kubectl expose deployment nginx --name=nginx-service --port=80 --target-port=80 --protocol=TCP
service/nginx-service exposed
[root@k8smaster ~]# kubectl get svc -o wide --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR LABELS
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d <none> component=apiserver,provider=kubernetes
nginx-service ClusterIP 10.109.139.168 <none> 80/TCP 40s run=nginx run=nginx
[root@k8smaster ~]# kubectl describe svc nginx
Name: nginx-service
Namespace: default
Labels: run=nginx
Annotations: <none>
Selector: run=nginx
Type: ClusterIP
IP: 10.109.139.168
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.6:80,10.244.1.7:80,10.244.2.10:80 + 2 more...
Session Affinity: None
Events: <none>
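The "Endpoints:" line above comes from the Endpoints object that Kubernetes keeps in sync with the pods matching the Service selector; you can list it directly to see all five pod IPs:
kubectl get endpoints nginx-service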
[root@k8smaster ~]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.0.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::3cd7:71ff:fee7:b4d prefixlen 64 scopeid 0x20<link>
ether 0a:58:0a:f4:00:01 txqueuelen 1000 (Ethernet)
RX packets 22161 bytes 1423430 (1.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 22547 bytes 8296119 (7.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:8b:97:c3:4b txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.0.11 netmask 255.255.255.0 broadcast 172.16.0.255
inet6 fe80::20c:29ff:fe71:91df prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:71:91:df txqueuelen 1000 (Ethernet)
RX packets 41975 bytes 5478270 (5.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 53008 bytes 54186252 (51.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.0.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::7480:68ff:fe34:946c prefixlen 64 scopeid 0x20<link>
ether 76:80:68:34:94:6c txqueuelen 0 (Ethernet)
RX packets 10 bytes 2236 (2.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 14 bytes 894 (894.0 B)
TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0
...
Remember the cluster initialization from the last chapter:
kubeadm init --ignore-preflight-errors all --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
We defined the service IP range with --service-cidr=10.96.0.0/12.
That's why the ClusterIPs come from that range:
[root@k8smaster ~]# kubectl get svc --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d component=apiserver,provider=kubernetes
nginx-service ClusterIP 10.109.139.168 <none> 80/TCP 7m run=nginx
* We didn't specify an external IP, which is why EXTERNAL-IP shows <none>.
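Because the ClusterIP is a virtual IP programmed by kube-proxy on every node, you can verify it from any cluster node (the address is the one assigned above; yours will differ):
curl http://10.109.139.168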
4 Scale a Deployment object up/down
The whole point of a Deployment is to make scaling easy; that is a large part of why you use Kubernetes in the first place.
You can do it with the "kubectl scale" command:
[root@k8smaster ~]# kubectl get deployment -o wide --show-labels
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES              SELECTOR    LABELS
nginx     5         5         5            5           14m       nginx        nginx:1.14-alpine   run=nginx   run=nginx
[root@k8smaster ~]# kubectl scale --replicas=3 deployment nginx
deployment.extensions/nginx scaled
[root@k8smaster ~]# kubectl get deployment -o wide --show-labels
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES              SELECTOR    LABELS
nginx     3         3         3            3           15m       nginx        nginx:1.14-alpine   run=nginx   run=nginx
[root@k8smaster ~]# kubectl get pods -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE LABELS
nginx-79976cbb47-8dqnk 1/1 Running 0 18m 10.244.2.10 k8snode2 pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-p247g 1/1 Running 0 18m 10.244.2.8 k8snode2 pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-ppbqv 1/1 Running 0 18m 10.244.1.6 k8snode1 pod-template-hash=3553276603,run=nginx
We just scaled the replicas down from 5 to 3.
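Instead of scaling by hand, you could also attach a HorizontalPodAutoscaler to the Deployment (a sketch, not run here; it needs a metrics source such as metrics-server or heapster to act on):
kubectl autoscale deployment nginx --min=3 --max=10 --cpu-percent=80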
5 Rolling Update
First set a new image on the Deployment, then follow the rolling update with "kubectl rollout status". (Because the image reference used below cannot be pulled, the rollout will never finish.)
[root@k8smaster ~]# kubectl set image deployment nginx nginx=nginx:1.15-alpine/nginx:v2
deployment.extensions/nginx image updated
[root@k8smaster ~]# kubectl rollout status deployment nginx
Waiting for deployment "nginx" rollout to finish: 1 out of 3 new replicas have been updated...
You can use the describe command to check whether a pod was updated:
[root@k8smaster ~]# kubectl describe pod nginx-79976cbb47-8dqnk
Name:               nginx-79976cbb47-8dqnk
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8snode2/172.16.0.13
Start Time:         Thu, 03 Jan 2019 14:34:16 +0100
Labels:             pod-template-hash=3553276603
                    run=nginx
Annotations:        <none>
Status:             Running
IP:                 10.244.2.10
Controlled By:      ReplicaSet/nginx-79976cbb47
Containers:
  nginx:
    Container ID:   docker://151150f6350d891c6504f5edb17b03da6b213d6ad207188301ce3eab6ff5264a
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:e3f77f7f4a6bb5e7820e013fa60b96602b34f5704e796cfd94b561ae73adcf96
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 03 Jan 2019 14:34:17 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rxs5t (ro)
Conditions:
  Type           Status
  Initialized    True
........
See "Image: nginx:1.14-alpine" -> the image is still the old one, so my update was not successful.
You can also roll back to the previous version with:
kubectl rollout undo deployment nginx
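To see which revisions exist before undoing, or to jump to a specific one (the revision number here is only an example):
kubectl rollout history deployment nginx
kubectl rollout undo deployment nginx --to-revision=1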
6 Iptables dump
[root@k8smaster ~]# iptables -vnL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
   27  1748 KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
  167 10148 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
 6328  383K KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
 1037 62220 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
 6529  395K KUBE-POSTROUTING  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes postrouting rules */
    0     0 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0
 1716  103K RETURN     all  --  *      *       10.244.0.0/16        10.244.0.0/16
    0     0 MASQUERADE  all  --  *      *       10.244.0.0/16       !224.0.0.0/4
    0     0 RETURN     all  --  *      *      !10.244.0.0/16        10.244.0.0/24
    0     0 MASQUERADE  all  --  *      *      !10.244.0.0/16        10.244.0.0/16

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  docker0 *      0.0.0.0/0            0.0.0.0/0

Chain KUBE-MARK-DROP (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x8000

Chain KUBE-MARK-MASQ (12 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x4000

Chain KUBE-NODEPORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-POSTROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000

Chain KUBE-SEP-23Y66C2VAJ3WDEMI (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       172.16.0.11          0.0.0.0/0            /* default/kubernetes:https */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ tcp to:172.16.0.11:6443

Chain KUBE-SEP-CGXZZGLWTRRVTMXB (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.1.6           0.0.0.0/0            /* default/nginx-service: */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-service: */ tcp to:10.244.1.6:80

Chain KUBE-SEP-DA57TZEG5V5IUCZP (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.2.10          0.0.0.0/0            /* default/nginx-service: */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-service: */ tcp to:10.244.2.10:80

Chain KUBE-SEP-L4GNRLZIRHIXQE24 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.2.8           0.0.0.0/0            /* default/nginx-service: */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-service: */ tcp to:10.244.2.8:80

Chain KUBE-SEP-LBMQNJ35ID4UIQ2A (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.9           0.0.0.0/0            /* kube-system/kube-dns:dns */
    0     0 DNAT       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */ udp to:10.244.0.9:53

Chain KUBE-SEP-S7MPVVC7MGYVFSF3 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.9           0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */ tcp to:10.244.0.9:53

Chain KUBE-SEP-SISP6ORRA37L3ZYK (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.8           0.0.0.0/0            /* kube-system/kube-dns:dns */
    0     0 DNAT       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */ udp to:10.244.0.8:53

Chain KUBE-SEP-XRFUWCXKVCLGWYQC (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.8           0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */ tcp to:10.244.0.8:53

Chain KUBE-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.96.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:443
    0     0 KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  *      *       0.0.0.0/0            10.96.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:443
    0     0 KUBE-MARK-MASQ  udp  --  *      *      !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
    0     0 KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  *      *       0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
    0     0 KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  *      *       0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.109.139.168       /* default/nginx-service: cluster IP */ tcp dpt:80
    0     0 KUBE-SVC-GKN7Y2BSGW4NJTYL  tcp  --  *      *       0.0.0.0/0            10.109.139.168       /* default/nginx-service: cluster IP */ tcp dpt:80
   15   900 KUBE-NODEPORTS  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-SEP-XRFUWCXKVCLGWYQC  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-S7MPVVC7MGYVFSF3  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */

Chain KUBE-SVC-GKN7Y2BSGW4NJTYL (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-SEP-CGXZZGLWTRRVTMXB  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-service: */ statistic mode random probability 0.33332999982
    0     0 KUBE-SEP-DA57TZEG5V5IUCZP  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-service: */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-L4GNRLZIRHIXQE24  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-service: */

Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-SEP-23Y66C2VAJ3WDEMI  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */

Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-SEP-SISP6ORRA37L3ZYK  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-LBMQNJ35ID4UIQ2A  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */
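Reading the dump: the KUBE-SERVICES chain matches traffic to the ClusterIP 10.109.139.168:80 and jumps to KUBE-SVC-GKN7Y2BSGW4NJTYL, which uses the statistic match to pick one of the three KUBE-SEP-* chains (probability 1/3, then 1/2 of the rest, then the remainder); each KUBE-SEP-* chain DNATs to one pod IP on port 80. You can inspect just that service chain with (chain name taken from the dump above; it is regenerated by kube-proxy and will differ on your cluster):
iptables -t nat -nL KUBE-SVC-GKN7Y2BSGW4NJTYL -v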
7 Access from outside
You can also expose your service to hosts outside the cluster by changing the Service type from ClusterIP to NodePort:
kubectl edit svc nginx-service
[root@k8smaster ~]# kubectl edit svc nginx-service
# change the type field from ClusterIP to NodePort -> save and close the editor
service/nginx-service edited
[root@k8smaster ~]# kubectl get svc nginx-service
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.109.139.168   <none>        80:30200/TCP   39m
We can see that PORT(S) changed to 80:30200/TCP, which means we can use {node ip address}:30200 to reach the pods through this service (nginx-service),
for example from a client laptop using my master node IP 172.16.0.11:
bai@bai ~ curl http://172.16.0.11:30200
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* With this Kubernetes Service we also get load balancing across the pods behind it.
It works!
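Instead of editing the Service afterwards, you could also create it as a NodePort Service right away, or switch the type with a patch (a sketch; the node port is picked by Kubernetes from the 30000-32767 range unless you pin it):
kubectl expose deployment nginx --name=nginx-service --port=80 --target-port=80 --type=NodePort
kubectl patch svc nginx-service -p '{"spec":{"type":"NodePort"}}'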