One, preamble: this article covers an important Kubernetes concept, one I consider core to running microservices on a k8s cluster.
The concept: Service
A Service defines a logical set of Pods and a policy for accessing them, abstracting away the real backing workloads. It provides a single entry point for access, together with a proxy and discovery mechanism, so users do not need to know which Pods are running behind it. You simply pool the Pods that run the same workload into one Service, and the k8s cluster automatically assigns that Service an IP that is unique within the cluster and a port number (the port is defined in your own YAML file). A Service thus defines how to reach a group of Pods, much like the relationship between a fixed IP address and its corresponding DNS name.
A Service is, in effect, what microservice architectures call a "microservice": each microservice load-balances across multiple backend business Pods, while a controller (such as a Deployment) keeps the Pod count stable to ensure service reliability and redundancy. The final system is composed of independent microservice units, each providing a different business capability, communicating over TCP/IP (via cluster-unique IPs assigned by k8s). This forms a powerful and flexible elastic service network, with strong distributed capabilities, flexible scalability, and fault tolerance.
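As a quick illustration of the "fixed IP / DNS name" idea (the service name nginx-svc and namespace default here are assumptions, matching the example later in this article), a Pod can reach another service through its stable DNS name instead of any Pod IP:

```shell
# From inside any Pod in the cluster, call the Service by its DNS name;
# the cluster DNS (kube-dns/CoreDNS) resolves it to the Service's cluster IP,
# which stays fixed even as the backing Pods come and go.
curl http://nginx-svc.default.svc.cluster.local:8080
```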
Now that we understand the concept of a Service, let's look at how it is actually implemented on the backend.
Requirements: one problem is that my traffic is now spread across multiple Pods, so if a Pod dies, is my business broken? Some will say a dead Pod is no problem: k8s's own Deployment controller mechanism dynamically creates and destroys Pods to keep the application stable overall. But that raises another problem: each Pod's IP is generated dynamically, so after a restart the IP our external clients rely on will have changed. Don't worry, let's solve this problem.
This is exactly the problem a Service solves.
Two, Service is a core Kubernetes concept. By creating a Service, you can provide a single entry address for a group of container applications with the same function, and requests are load-balanced across the backend container applications. As for how the load is distributed, we don't have to worry; k8s handles it on its own.
Simply put, a Service groups all of its Pods together and exposes a fixed IP to the outside. Which Pods are pooled is determined by Label selectors, introduced below. Suppose one Pod dies: the replica controller generates a new Pod, whose IP will certainly be different, but we don't care what any Pod's IP is. We only need to know that the Service IP has not changed, because the new Pod is automatically added back into the Service. Services then communicate with each other through their cluster-unique Service IPs.
All of the above is how the so-called "microservices" pattern is realized; an example will make it concrete.
Consider a simple architecture: nginx as a reverse proxy in front of eight backend tomcat instances. How do we build this in a k8s cluster? I just run 8 Pods, each running a tomcat container, and a Service pools these 8 Pods. The replica controller dynamically keeps the Pod count at 8: if fewer than 8 exist, it creates more up to 8; any excess beyond 8 is automatically deleted back down to 8. We access the Service IP, and kube-proxy automatically load-balances across the 8 backend Pods. To access this service from inside the cluster, all you need is the one Service IP and port number. By the same token, a redis cluster can be built this way too: each Service corresponds to one microservice, and the Services communicate flexibly with one another.
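A sketch of what that tomcat tier might look like (the names tomcat-deployment, tomcat-svc and the label app: tomcat are my own placeholders, not from the original article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
spec:
  replicas: 8            # the controller keeps exactly 8 tomcat Pods running
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
spec:
  selector:
    app: tomcat          # pools every Pod carrying this label
  ports:
  - protocol: TCP
    port: 8080           # Service port; kube-proxy balances across the 8 Pods
    targetPort: 8080
```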
Enough talk; let's demonstrate with a practical operation.
Three, create a replica-controlled resource named nginx-deployment that runs two Pods; each Pod runs one nginx container with container port 80 open.
[root@master yaml]# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx   # which Pods the Service pools is decided by this Label; this "nginx" value is used later when we create the Service
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
kubectl create -f nginx.yaml
deployment.apps/nginx-deployment created
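To verify the Deployment came up, you can check it and its Pods (the exact names, IPs, and node columns in the output will differ in your cluster):

```shell
kubectl get deployment nginx-deployment
kubectl get pods -l app=nginx -o wide   # -o wide shows each Pod's (dynamic) IP
```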
Next, create a Service that pools the two nginx Pods we just created:
[root@master yaml]# cat nginx-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8080       # the port this resource (svc) exposes
    targetPort: 80
The selector picks the Pods previously labeled nginx as the objects the Service pools; the last part maps the Service's port 8080 to the Pods' port 80.
kubectl apply -f nginx-svc.yml
service/nginx-svc created
Once created, nginx-svc is assigned a cluster-ip, and through that IP you can reach the backend nginx.
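For example (the cluster IP is allocated by your cluster, so the value you see will differ; substitute it into the curl command):

```shell
kubectl get svc nginx-svc        # note the CLUSTER-IP column
curl http://<cluster-ip>:8080    # from any node or Pod inside the cluster
```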
How is this achieved? The answer is iptables NAT and port translation; you can research it further on your own.
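If you're curious, you can inspect the NAT rules kube-proxy programs on a node (assuming the default iptables proxy mode; the exact chain names and output vary by kube-proxy version):

```shell
# Dump the nat table and filter for our Service; the matching
# KUBE-SVC-* / KUBE-SEP-* chains implement the load balancing and DNAT.
iptables-save -t nat | grep nginx-svc
```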
At this point someone will say: it still can't be reached from outside the cluster! Don't worry; next we set up external access. In a real production environment, access to a Service may come from two sources: programs (Pods) inside the Kubernetes cluster, and clients outside the Kubernetes cluster. To cover these scenarios, a Kubernetes Service has the following three types:
1. ClusterIP: provides a cluster-internal virtual IP (on a different subnet from Pods) for communication between Pods inside the cluster.
2. NodePort: opens a port on every Node, the same port on each Node; programs outside the Kubernetes cluster can access the Service via <NodeIP>:NodePort.
3. LoadBalancer: uses the Cloud Provider's load balancer to expose the service externally; the Cloud Provider directs the load balancer's traffic to the Service.
In this article I focus on the second approach, NodePort. Modify the nginx-svc.yml file, i.e. the Service file we created earlier. Observant readers will have noticed that the listing above already includes NodePort, because my environment was already configured that way, so I won't repeat the screenshot. The configuration is simple: just add type: NodePort, then re-create nginx-svc with the same command as before, and let's look at the result.
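Once applied, this is roughly how you would find and use the NodePort (the node port value k8s allocates, by default from the 30000-32767 range, will differ in your cluster; substitute your actual node IP and port):

```shell
kubectl apply -f nginx-svc.yml
kubectl get svc nginx-svc                # PORT(S) column shows 8080:<node-port>/TCP
curl http://<any-node-ip>:<node-port>    # now reachable from outside the cluster
```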