k8s Application Practice: Service and Ingress

1. Background

        To simplify cluster deployment, the company's k8s clusters are deployed with kubeadm, divided along online and offline lines. There are three main reasons for choosing kubeadm: 1. deployment is quick and easy; 2. it is easy to add nodes to the cluster; 3. it reduces deployment difficulty.


2. System environment


System version: CentOS Linux release 7.5.1804
Kernel version: 3.10.0-862.14.4.el7.x86_64
etcd: etcdctl version 3.3.11





3. Related component versions


Component name / Version / Remark


k8s.gcr.io/kube-apiserver  v1.13.4

kube-apiserver is one of the most important core components of Kubernetes. It mainly provides:

  • the REST API for cluster management, including authentication and authorization, data validation, and cluster state changes;

  • the data hub through which the other components interact and communicate (other modules query or modify data only through the API server, and only the API server operates etcd directly).


k8s.gcr.io/kube-controller-manager  v1.13.4

The Controller Manager, made up of kube-controller-manager and cloud-controller-manager, is the brain of Kubernetes: it monitors the state of the whole cluster through the apiserver and keeps the cluster in its expected running state.


k8s.gcr.io/kube-scheduler  v1.13.4

kube-scheduler is responsible for assigning Pods to nodes in the cluster. It watches kube-apiserver for Pods that have not yet been assigned a node, then assigns nodes to those Pods according to the scheduling policy (by updating the Pod's NodeName field).

The scheduler needs to take many factors into account:

  • fair scheduling

  • efficient use of resources

  • QoS

  • affinity and anti-affinity

  • data locality

  • inter-workload interference

  • deadlines


k8s.gcr.io/kube-proxy  v1.13.4

Every machine runs a kube-proxy service, which watches the API server for changes to Services and Endpoints and configures load balancing for Services via iptables (TCP and UDP only).

kube-proxy can run directly on the physical machine, as a static Pod, or as a DaemonSet.


k8s.gcr.io/pause  3.1

Kubernetes attaches a pause container (gcr.io/google_containers/pause) to every Pod. It only holds the Pod's network information; the business containers share that network by joining the pause container's network namespace. The container is created with the Pod and deleted with the Pod, hence the name "pause".

This container holds the Pod's namespaces.


k8s.gcr.io/coredns  1.2.6

DNS is one of the core functions of Kubernetes, provided as an essential cluster add-on by kube-dns or CoreDNS, which supplies naming services.


weaveworks/weave  2.5.2

Weave Net is a multi-host container networking solution with a decentralized control plane: the wRouters on each host establish a full mesh of TCP connections and synchronize control information via the Gossip protocol. This removes the need for a centralized K/V store and reduces deployment complexity to some extent.



4. Application practice


1. Verify that the cluster is running normally, as shown below:


(screenshot)



2. Check the Deployment configuration:



apiVersion: extensions/v1beta1 
kind: Deployment 
metadata: 
  name: nginx-dp
spec: 
  selector:
    matchLabels:
      app: nginx-dp
  replicas: 1
  template: 
    metadata: 
      labels: 
        app: nginx-dp 
    spec: 
      containers: 
        - name: nginx 
          image: nginx:alpine 
          ports: 
            - containerPort: 80
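Assuming the manifest above is saved as nginx-dp.yaml (the file name is illustrative), the Deployment can be applied and verified with commands along these lines:

```shell
# Apply the Deployment manifest
kubectl apply -f nginx-dp.yaml

# Confirm the Deployment is available and its Pod is running,
# carrying the app=nginx-dp label that the Service will select on
kubectl get deployment nginx-dp
kubectl get pods -l app=nginx-dp -o wide
```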


3. Check the Service configuration:


apiVersion: v1
kind: Service
metadata:
  name: nginx-dp-cpf
spec:
  type: NodePort
  ports:
  - nodePort: 30001
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx-dp
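Once this Service is applied, it should be reachable on port 30001 of any cluster node. A quick check might look like this (the manifest file name and node IP are placeholders):

```shell
# Apply the Service manifest
kubectl apply -f nginx-dp-svc.yaml

# Confirm the Service got a ClusterIP and the NodePort 30001
kubectl get svc nginx-dp-cpf

# From outside the cluster, hit the NodePort on any node's IP
curl -I http://<node-ip>:30001/
```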


4. Check the generated Endpoints:


(screenshot)


5. Check the Service rules:

(screenshot)


6. Check the iptables firewall rules generated by kube-proxy (normal rules):

(screenshot)
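The iptables rules that kube-proxy generates for this Service can also be inspected directly on a node, for example:

```shell
# Dump the NAT rules that mention this Service
iptables-save -t nat | grep nginx-dp-cpf

# In a healthy setup the KUBE-NODEPORTS chain contains entries
# forwarding port 30001 on to the Pod endpoints
iptables -t nat -L KUBE-NODEPORTS -n | grep 30001
```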


Note:

      1. If a firewall rule like the following appears:

        -A KUBE-EXTERNAL-SERVICES -p tcp -m comment --comment "default/nginx-dp-cpf: has no endpoints" -m addrtype --dst-type LOCAL -m tcp --dport 30001 -j REJECT --reject-with icmp-port-unreachable

      2. Troubleshoot as follows:

           (1). net.ipv4.ip_forward = 1  # enable the kernel's IP forwarding

           (2). iptables -P FORWARD ACCEPT  # allow traffic through the iptables FORWARD chain

           (3). iptables -P OUTPUT ACCEPT  # allow traffic through the iptables OUTPUT chain

           (4). Check that the Deployment labels and the Service selector are correctly associated.

           (5). kubectl get endpoints --show-labels  # the endpoints must match the firewall rules; if <none> appears, check the labels and the firewall rules.
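Steps (1)-(3) above can be applied on the affected node like this (requires root; persisting ip_forward across reboots via /etc/sysctl.d is a separate step):

```shell
# (1) Enable kernel IP forwarding
sysctl -w net.ipv4.ip_forward=1

# (2)(3) Set the default policy of the FORWARD and OUTPUT chains to ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

# (4)(5) Verify the Service selector actually matches running Pods:
# if ENDPOINTS shows <none>, the labels do not line up
kubectl get endpoints nginx-dp-cpf --show-labels
kubectl get pods -l app=nginx-dp
```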



7. Functional test (run inside the k8s cluster):


(screenshot)
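One way to run such an in-cluster test is from a throwaway Pod, curling the Service by its cluster DNS name (the test Pod name and image are illustrative):

```shell
# Start a temporary Pod, install curl, and hit the Service's DNS name;
# the Pod is removed automatically when the command exits
kubectl run curl-test --rm -it --restart=Never --image=alpine -- \
  sh -c 'apk add --no-cache curl >/dev/null && curl -sI http://nginx-dp-cpf.default.svc.cluster.local/'
```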







    


Origin blog.51cto.com/breaklinux/2442979