Deployment and DaemonSet are both API objects for deploying online services, but they use different strategies: Deployment creates any number of instances, while DaemonSet creates one instance per node.
1. Service concept
Service is the load-balancing mechanism inside the cluster, used to solve the key problem of service discovery.
In a Kubernetes cluster, the life cycle of a Pod is relatively "short-lived". Although Deployment and DaemonSet can keep the overall number of Pods stable, it is inevitable that Pods will be destroyed and recreated during operation, so the set of Pods is in constant flux.
This kind of "dynamic stability" is fatal to the currently popular microservice architecture. Just imagine: if the IP addresses of the backend Pods keep changing, how should a client access them? If this problem is not handled well, then however well Deployment and DaemonSet manage the Pods, it is of little value.
In fact, this problem is not difficult. The industry has long had a solution for such "unstable" backend services: load balancing. Typical implementations include LVS and Nginx. They insert a "middle layer" between the frontend and the backend, shielding the frontend from backend changes and providing a stable service.
However, LVS and Nginx are, after all, not cloud-native technologies, so Kubernetes follows the same idea and defines a new API object: Service.
The working principle of Service is similar to that of LVS and Nginx: Kubernetes assigns it a static IP address, and it automatically manages and maintains the dynamically changing set of Pods behind it. When a client accesses the Service, traffic is forwarded to one of the backend Pods according to some strategy.
The technology used here is iptables. The kube-proxy component automatically maintains the iptables rules, so clients no longer need to care about the specific addresses of the Pods. As long as they access the Service's fixed IP address, the request is forwarded by the iptables rules to one of the Pods the Service manages. This is a typical load-balancing architecture.
However, Service does not only use iptables to implement load balancing; it has two other implementation techniques: userspace, with poorer performance, and ipvs, with better performance. These are low-level details that we don't need to pay attention to here.
2. Use YAML to describe Service
We can still use the command kubectl api-resources to view its basic information: its abbreviation is svc, and its apiVersion is v1. Note that this means that, like Pod, it belongs to the core objects of Kubernetes and is not tied to business applications, unlike Job and Deployment.
$ kubectl api-resources | grep services
services svc v1 true Service
So it is easy to write the YAML file header of a Service:
apiVersion: v1
kind: Service
metadata:
  name: xxx-svc
We can also use another command, kubectl expose, to automatically create a Service YAML template for us; perhaps Kubernetes considers "expose" to better express the meaning of "exposing" a service address. Because services in Kubernetes are provided by Pods, and Pods can be deployed with Deployment/DaemonSet objects, kubectl expose supports creating a Service from various objects, including Pod, Deployment, and DaemonSet.
When using the kubectl expose command, you also need the parameters --port and --target-port to specify the mapped port and the container port respectively, while the Service's own IP address and the backend Pods' IP addresses can be determined automatically. The usage is very similar to Docker's -p command-line parameter, just a little more verbose.
For example, to generate a Service for the previous ngx-dep object, the command is written like this:
export out="--dry-run=client -o yaml"
kubectl expose deploy ngx-dep --port=80 --target-port=80 $out
The resulting Service YAML looks something like this:
apiVersion: v1
kind: Service
metadata:
  name: ngx-svc
spec:
  selector:
    app: ngx-dep
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
There are only two key fields in spec: selector and ports.
selector has the same function as in Deployment/DaemonSet: it filters out the Pods to be proxied. Because we specified that we want to proxy the Deployment ngx-dep, Kubernetes automatically filled in that Deployment's label for us, selecting all the Pods deployed by that object. The labeling mechanism of Kubernetes is simple, yet powerful and effective, and it makes it easy to associate a Service with a Deployment's Pods.
The three fields inside ports represent the external port, the internal port, and the protocol used. Here both the internal and external ports are 80, and the protocol is TCP.
To let you see clearly the relationship between the Service and the Pods it references, I drew these two YAML objects in the picture below; the key thing to notice is how selector and targetPort relate to the Pods:
3. Using Service in K8s
First, we create a ConfigMap that defines an Nginx configuration fragment. It makes Nginx output basic information such as the server address, host name, and the requested URI. The content of ngx-conf.yml is as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ngx-conf
data:
  default.conf: |
    server {
      listen 80;
      location / {
        default_type text/plain;
        return 200
          'srv : $server_addr:$server_port\nhost: $hostname\nuri : $request_method $host $request_uri\ndate: $time_iso8601\n';
      }
    }
Create the ConfigMap object:
kubectl apply -f ngx-conf.yml
Then, in the Deployment's Pod template, we define the storage volume in volumes and use volumeMounts to load the configuration file into the Nginx container. The content of ngx-dep.yml is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ngx-dep
  template:
    metadata:
      labels:
        app: ngx-dep
    spec:
      volumes:
      - name: ngx-conf-vol
        configMap:
          name: ngx-conf
      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: ngx-conf-vol
Create the Deployment object:
kubectl apply -f ngx-dep.yml
Check whether the deployed ConfigMap and Deployment objects are normal:
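One way to check both at once, assuming the object names from the YAML above:

```shell
# List the ConfigMap and the Deployment created above
kubectl get configmap ngx-conf
kubectl get deployment ngx-dep
```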
As before, we want to generate a Service for this ngx-dep object, so the command is written as follows:
export out="--dry-run=client -o yaml"
kubectl expose deploy ngx-dep --port=80 --target-port=80 $out
The generated Service YAML looks roughly like this; we save it as svc.yml:
apiVersion: v1
kind: Service
metadata:
  name: ngx-svc
spec:
  selector:
    app: ngx-dep
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
In this way, we can create the Service object with kubectl apply:
kubectl apply -f svc.yml
After creation, you can view it with kubectl get:
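For reference, assuming the Service name from svc.yml:

```shell
# Show the Service's type, cluster IP, and ports
kubectl get svc ngx-svc
```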
You can see that Kubernetes automatically assigned an IP address, "10.108.141.157", to the Service object, and this address segment is independent of the Pod address segment (the 10.10.xx.xx from the previous lesson). Moreover, the IP address of a Service object has another characteristic: it is a "virtual address" with no real entity behind it, used only to forward traffic.
To see which backend Pods a Service proxies, you can use the kubectl describe command:
kubectl describe svc ngx-svc
The screenshot shows that the Service object manages two endpoints, namely "10.10.1.2:80" and "10.10.1.3:80". A preliminary judgment says this is consistent with the definitions of the Service and the Deployment. So, are these two IP addresses the actual addresses of the Nginx Pods?
Let's take a look with kubectl get pod, adding the parameter -o wide:
kubectl get pod -o wide
Comparing the Pod addresses with the Service information, we can verify that the Service indeed proxies the two dynamic Pod IP addresses with one static IP address.
4. Test the load balancing effect of Service
Because the IP addresses of the Service and the Pods are all on the cluster's internal network segments, we need to use kubectl exec to get inside a Pod (or log in to a cluster node with ssh), and then use tools such as curl to access the Service:
kubectl exec -ti ngx-dep-6796688696-v97ns -- sh
Inside the Pod, use curl to access the Service's IP address. You will see that the Service forwards the request to a backend Pod, and the output shows which Pod responded, which means the Service has indeed completed the load-balancing task for the Pods.
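A quick way to observe the balancing in action; the Service IP below is the one from this text, so substitute your own:

```shell
# Inside the Pod: each response's "host" line shows which backend answered
for i in 1 2 3 4; do
  curl -s http://10.108.141.157 | grep host
done
```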
Let's try deleting a Pod to see whether the Service will update its backend Pod information and realize automatic service discovery:
Since the Pods are managed by the Deployment object, they are automatically rebuilt after deletion, and controller-manager monitors Pod changes for the Service, so it immediately updates the IP addresses it proxies. From the screenshot, you can see that the IP address "10.10.1.2" has disappeared, replaced with a new "10.10.1.4", which is the newly created Pod.
You can also try using ping to test the Service's IP address:
You will find that ping does not work, because the Service's IP address is "virtual" and used only to forward traffic; there is nothing behind it to send a response packet, so ping fails.
5. Use the Service as a domain name
The IP address of a Service object is static and stable, which is really important in microservices, but an IP address in numeric form is still inconvenient to use. This is where the Kubernetes DNS plugin comes in handy: it can create easy-to-write, easy-to-remember domain names for Services, making them easier to use.
The shorthand for namespace is ns. You can use the command kubectl get ns to see which namespaces exist in the current cluster, that is, which groups the API objects are divided into:
Kubernetes has a default namespace called default, and objects live in this namespace unless explicitly specified otherwise. The other namespaces have their own purposes; for example, kube-system contains the Pods of core components such as apiserver and etcd.
Because DNS is a hierarchical structure, in order to avoid conflicts among too many domain names, Kubernetes takes the namespace as part of the domain name, reducing the possibility of duplicate names.
The full form of a Service object's domain name is "object-name.namespace.svc.cluster.local", but in many cases the latter part can be omitted: "object-name.namespace" or even just "object-name" is enough, in which case the namespace where the calling object lives (here, default) is used by default.
Now let's test the use of DNS domain names. As before, first use kubectl exec to enter a Pod, and then use curl to access domain names such as ngx-svc and ngx-svc.default:
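A sketch of those requests, run from inside the Pod, using the names from this text:

```shell
# All three forms resolve to the same Service
curl ngx-svc
curl ngx-svc.default
curl ngx-svc.default.svc.cluster.local
```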
As you can see, we no longer need to care about the Service object's IP address at all; we only need to know its name, and we can access the backend service via DNS.
By the way, Kubernetes also assigns a domain name to each Pod, in the form "ip-address.namespace.pod.cluster.local", but you need to change the dots in the IP address to dashes. For example, the address 10.10.1.87 corresponds to the domain name 10-10-1-87.default.pod.
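The dot-to-dash conversion can be sketched in plain shell; the IP below is the one from the text:

```shell
# Build a Pod's DNS name from its IP: dots become dashes
ip="10.10.1.87"
host=$(echo "$ip" | tr . -)
echo "${host}.default.pod.cluster.local"
# prints 10-10-1-87.default.pod.cluster.local
```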
6. Let the Service expose the service to the outside world
Because Service is a load-balancing technology, it can not only manage services inside the Kubernetes cluster, but also take on the important task of exposing services outside the cluster.
The Service object has a key field, type, which indicates what type of load balancing the Service is. The usage we have seen so far is load balancing for Pods within the cluster, so the value of this field is the default, ClusterIP, and the Service's static IP address can only be accessed from inside the cluster.
In addition to ClusterIP, Service supports three other types: ExternalName, LoadBalancer, and NodePort. However, the first two are generally provided by cloud service providers and are not available in our experimental environment, so we will focus on NodePort.
If we add the parameter --type=NodePort to kubectl expose, or add the field type: NodePort in the YAML, then in addition to load balancing the backend Pods, the Service will open an independent port on every node in the cluster to provide the service externally, which is where the name NodePort comes from.
First, delete the previously created Service:
kubectl delete svc ngx-svc
Then modify the Service's YAML file to add the type field:
apiVersion: v1
...
spec:
  ...
  type: NodePort
Then create the object and check its status:
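Assuming the same file and Service name as before, that amounts to:

```shell
kubectl apply -f svc.yml
kubectl get svc ngx-svc
```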
You will see that the TYPE has changed from the earlier ClusterIP to NodePort, and the port information in the PORT(S) column is also different: in addition to port 80 used inside the cluster, there is an additional port, 31356, which is the dedicated mapped port Kubernetes created for the Service on the nodes.
Because this port number belongs to the nodes, it can be accessed directly from outside. So now, from outside the cluster, we can use the IP address of any node to access the Service and the backend services it proxies, without logging in to a cluster node or entering a Pod.
For example, my current server is 172.16.19.54. On this host, use curl to access the two Kubernetes cluster nodes 10.116.62.162 and 10.116.62.54, and you can get the response data of the Nginx Pods:
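A sketch of those requests, using the node IPs and the node port from this text:

```shell
# Access the NodePort on either cluster node from outside the cluster
curl 10.116.62.162:31356
curl 10.116.62.54:31356
```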
I also drew the corresponding relationship among NodePort, Service, and Deployment; after looking at it you should be able to better understand its working principle:
Image source: https://time.geekbang.org/column/article/536829
However, NodePort also has some disadvantages:
- The number of ports is limited. To avoid port conflicts, Kubernetes by default only assigns ports randomly in the range 30000~32767. That is just over 2000 ports, none of which are standard port numbers, which is not enough for systems with a large number of business applications.
- It opens a port on every node and then uses kube-proxy to route to the real backend Service. For large clusters with many compute nodes, this brings some network communication cost, which is not particularly economical.
- It requires exposing the nodes' IP addresses to the outside, which is often not feasible. For security, a reverse proxy usually has to be set up outside the cluster, which increases the complexity of the solution.
Despite these shortcomings, NodePort is still a simple and easy way for Kubernetes to provide services externally, and until better options appear we will keep using it.
7. Summary
- The life cycle of a Pod is very short; Pods are constantly created and destroyed, so Service is needed to achieve load balancing. Kubernetes assigns the Service a fixed IP address, which shields the frontend from backend Pod changes.
- The Service object uses the same selector field as Deployment and DaemonSet to select the backend Pods to proxy; this is a loosely coupled relationship.
- Based on the DNS plugin, we can access a Service by domain name, which is more convenient than a static IP address.
- A namespace is the way Kubernetes isolates objects and realizes logical grouping; a Service's domain name includes its namespace.
- The default Service type is ClusterIP, which can only be accessed within the cluster. If it is changed to NodePort, a random port is opened on each node so that the outside world can also access internal services.
If the Service object's mapped port is the same as the target port, for example both are 80, then when using the kubectl expose command you can omit --target-port and only use --port, which is more convenient when creating a template.
In fact, a Service does not manage Pods directly, but rather Endpoint objects that represent their IP addresses. However, we generally don't use Endpoints directly unless we are troubleshooting.
The default NodePort range, 30000~32767, can also be changed via apiserver configuration, but this increases the risk of node port conflicts.
Problem: After the Service proxies the Pods, we use kubectl exec to enter a Pod and then use curl to access the Service. But why do we enter a Pod, when it is the Service that proxies the Pods? Logically, shouldn't we be "entering" the Service?
Answer: Because the Service's domain name and IP address only exist inside Kubernetes and cannot be reached from outside the cluster, we have to run curl from inside a Pod. The Service is a virtual address with nothing behind it to "enter"; only Pods are real entities.