Docker and Kubernetes (k8s): From Beginner to Proficient

0. Docker


Docker is open-source software and an open platform for developing, shipping, and running applications. Docker allows users to separate applications from the underlying infrastructure and package them into smaller units (containers), thereby increasing the speed of software delivery. [1]

Docker containers are similar to virtual machines, but they differ in principle: containers virtualize at the operating-system layer, while virtual machines virtualize hardware, so containers are more portable and use server resources more efficiently. A container is better understood as a standardized unit of software; because containers are standardized, they can be deployed anywhere regardless of differences in infrastructure. In addition, Docker provides stronger isolation and compatibility for containers. [2]

Docker uses resource-isolation mechanisms in the Linux kernel, such as cgroups and kernel namespaces, to create independent containers. These can run within a single Linux instance, avoiding the overhead of starting a virtual machine [3]. The kernel's namespace support isolates an application's view of its operating environment, including process trees, networking, user IDs, and mounted filesystems, while the kernel's cgroups provide resource limiting for CPU, memory, block I/O, and the network. Starting with version 0.9, Docker includes the libcontainer library as its own way to directly use the virtualization facilities provided by the Linux kernel, in addition to using abstracted virtualization interfaces via libvirt, LXC, and systemd-nspawn.

According to industry analyst firm 451 Research: "Docker is a tool that can package an application and its dependencies in a virtual container that can run on any Linux server. This helps enable flexibility and portability, allowing an application to run anywhere, whether on a public cloud server, a private cloud server, a single machine, etc." [4]

Docker is a set of platform-as-a-service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. [6] The service has both free and premium tiers. The software that hosts the containers is called Docker Engine. [7] It was first released in 2013 and is developed by Docker, Inc.

0.1 Docker Engine

Docker Engine is a client-server application consisting mainly of these parts: the Docker daemon, the Docker Engine API, and the Docker client. [6]

The Docker daemon, also known as dockerd, is a persistent background process that manages containers. The daemon listens for requests sent via the Docker Engine API. [7]
The Docker Engine API is the API used to interact with the Docker daemon. It is a RESTful API, so it can be called not only by Docker clients but also by command-line tools like wget and curl. [8]
The Docker client, also known as docker, is the primary way most users interact with Docker. The user sends commands to the daemon through the client; the commands use the Docker Engine API. [9]

0.2 Docker registry

A Docker registry is used to store Docker images. Docker Hub is a public registry that anyone can use, and Docker looks for images there by default. [6]

In addition, users can run their own private registry. Users of Docker Datacenter (DDC) can use the included Docker Trusted Registry (DTR) directly.

0.3 Objects

Docker objects include Images, Containers, Networks, Volumes, Plugins, and so on. [6]

  • A container is a runnable instance of an image. Containers can be manipulated via the API or the CLI (command line). [6]
  • An image is a read-only template with instructions for creating a container. [6] Images are built in layers, and the file that defines these layers is called a Dockerfile. [10]
  • Services allow users to scale containers across multiple Docker daemons, which together form a swarm of managers and workers that cooperate. [6]
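As a hypothetical illustration of how a Dockerfile defines image layers (the base image, package, and file names here are assumptions, not from the original):

```dockerfile
# Each instruction below produces one read-only image layer.
# Base layer: the official Ubuntu image.
FROM ubuntu:20.04
# Layer 2: install curl on top of the base layer.
RUN apt-get update && apt-get -y install curl
# Layer 3: copy an application script into the image.
COPY app.sh /usr/local/bin/app.sh
# Default command (recorded in image metadata, not a filesystem layer).
CMD ["app.sh"]
```

Building this with `docker build -t myapp .` would produce an image whose layers correspond to these instructions; a container started from it adds a thin writable layer on top.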

1. Initial Kubernetes

Kubernetes began at Google in 2014 as the open-source counterpart of Borg, Google's internal container orchestration system; Kubernetes version 1.0 was released in 2015.
Kubernetes (often abbreviated K8s) is an open-source system for automatically deploying, scaling, and managing containerized applications. [3] The system was designed by Google and donated to the Cloud Native Computing Foundation (part of the Linux Foundation).

It aims to provide "a platform for automated deployment, scaling, and running of application containers across clusters of hosts". [4] It supports a range of container tools, including Docker, etc.

It groups the containers that make up an application into logical units for easy management and discovery. Kubernetes builds on Google's 15 years of experience running production workloads and incorporates the best ideas and practices from the community.
https://github.com/kubernetes/kubernetes

1.1 Cloud Native Computing Foundation (CNCF)

The Cloud Native Computing Foundation (CNCF) is a Linux Foundation project founded in 2015 to help advance container technology [1] and align the tech industry around its development.

It was launched alongside the open-source container cluster manager Kubernetes 1.0, which Google contributed to the Linux Foundation as a seed technology. Founding members include Google, CoreOS, Mesosphere, Red Hat, Twitter, Huawei, Intel, Cisco, IBM, Docker, Univa, and VMware. [2][3] Today, the CNCF is supported by more than 450 members. A certification program for CNCF-managed technologies was announced at the inaugural CloudNativeDay in Toronto in August 2016. [4]
https://www.cncf.io/

Exam Verification
https://training.linuxfoundation.cn/certificates/1


2. Design

Kubernetes is structurally designed as a set of building blocks that collectively provide mechanisms for deploying, maintaining, and scaling applications. The components that make up Kubernetes are designed to be loosely coupled and extensible so that they can serve a wide variety of workloads. Extensibility is largely provided by the Kubernetes API, which is used by internal components as well as by extensions and containers running on Kubernetes. [18]

3. Pod

The basic unit of scheduling in Kubernetes is called a "pod". This abstraction allows higher-level abstractions to be layered on top of containerized components. A pod generally contains one or more containers that are guaranteed to be co-located on the same host and can share resources. [18] Each pod in Kubernetes is assigned an IP address that is unique within the cluster, which allows applications to use ports without conflicts. [19] A pod can define a volume, such as a local disk directory or a network disk, and expose it to the containers within the pod. [20] Pods can be managed manually through the Kubernetes API or delegated to a controller for automatic management.
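A minimal sketch of a pod manifest with two containers sharing a volume (the names, images, and paths below are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar        # pod name (illustrative)
spec:
  volumes:
    - name: shared-logs         # volume visible to both containers
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.21
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-collector       # sidecar reading the same files
      image: busybox:1.35
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```

Both containers are scheduled onto the same host, share the pod's IP address, and see the same files through the shared volume; the manifest would be applied with `kubectl apply -f pod.yaml`.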

4. Build

Kubernetes follows a master-slave architecture. Its components can be divided into those that manage an individual node and those that are part of the control plane. [18][26]

The Kubernetes master is the primary control unit of the cluster; it manages the cluster's workloads and directs communication across the system. The Kubernetes control plane consists of several processes, each of which can run on a single master node or on multiple master nodes for high-availability clusters [26]. The components of the Kubernetes control plane are as follows:

4.1 etcd

etcd is a persistent, lightweight, distributed key-value data store developed by CoreOS that reliably stores the cluster's configuration data, representing the overall state of the cluster at any given point in time. Other components watch this store for changes and bring themselves into the corresponding state. [26]

4.2 API Server

The API server is a key component, providing Kubernetes' internal and external interface via the Kubernetes API using JSON over HTTP. [18][27] The API server processes and validates REST requests and updates the state of API objects in etcd, allowing clients to configure workloads and containers across worker nodes.

4.3 Scheduler

The scheduler is a pluggable component that chooses which node an unscheduled pod (the basic entity managed by the scheduler) should run on, based on resource availability. The scheduler tracks resource utilization on each node to ensure that workloads are not scheduled in excess of available resources. To do this, the scheduler must know the resource requirements, resource availability, and various other user-supplied constraints and policy directives, such as quality of service, affinity/anti-affinity requirements, data locality, etc. Essentially, the scheduler's role is to match resource "supply" with workload "demand" to maintain a stable and reliable system. [28]
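The user-supplied constraints the scheduler consumes are expressed in the pod spec; a sketch, with illustrative values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
    - name: app
      image: nginx:1.21
      resources:
        requests:            # the "demand" the scheduler matches against node supply
          cpu: "500m"        # half a CPU core
          memory: "256Mi"
        limits:              # hard caps enforced at runtime via cgroups
          cpu: "1"
          memory: "512Mi"
  nodeSelector:
    disktype: ssd            # placement constraint: only nodes labeled disktype=ssd
```

The scheduler will only place this pod on a node whose unreserved capacity covers the requests and whose labels satisfy the nodeSelector.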

4.4 Controller Manager

The controller manager is the process that runs the core Kubernetes controllers, including the DaemonSet controller and the replication controller, among others. These controllers communicate with the API server to create, update, and delete the resources they manage (pods, service endpoints, etc.) as needed. [27]
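As a sketch of what such a controller reconciles, a Deployment (handled by the deployment and ReplicaSet controllers inside the controller manager) declares a desired replica count; the names and image here are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3                  # desired state: keep 3 pods running
  selector:
    matchLabels:
      app: web
  template:                    # pod template used to create or replace replicas
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21
```

If a pod is deleted or its node fails, the controller observes the deviation from `spec.replicas` through the API server and creates a replacement.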

4.5 Kubernetes Nodes

A Node, also known as a Worker or Minion, is a single machine (or virtual machine) on which containers (workloads) are deployed. Each node in the cluster must run a container runtime (such as Docker), as well as the components mentioned below, to communicate with the master and configure the networking for these containers.

4.6 Kubelet

The Kubelet is responsible for the running state of each node (i.e., ensuring that all containers on the node are up and running). It handles starting, stopping, and maintaining application containers (organized into pods) as directed by the control plane. [18][29]

The Kubelet monitors the state of each pod, and if a pod is not in the desired state, it is redeployed to the same node. Node status is relayed to the master every few seconds via heartbeat messages. When the master detects a node failure, the replication controller observes this state change and launches pods on other healthy nodes. [citation needed]

4.7 Containers

Containers run inside pods. A container is the lowest level of a microservice, holding the running application, its libraries, and their dependencies. By binding an external IP, a container can be made reachable from outside the cluster.

4.8 Kube proxy

The kube-proxy is an implementation of a network proxy and load balancer that supports the service abstraction along with other networking operations. [18] Based on the IP and port of an incoming request, this component forwards the traffic to the appropriate container.
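The service abstraction that kube-proxy implements can be sketched as a manifest like the following (names and ports are illustrative): traffic arriving at the service's cluster IP and port is forwarded to a matching pod's target port.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web                 # forward to pods carrying this label
  ports:
    - protocol: TCP
      port: 80               # port exposed on the service's cluster IP
      targetPort: 8080       # container port the traffic is forwarded to
```

kube-proxy watches services like this via the API server and programs the node's forwarding rules so that requests to the service IP:80 reach one of the selected pods on port 8080.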

4.9 cAdvisor

cAdvisor is an agent component that monitors and collects resource usage and performance metrics such as CPU, memory, file and network usage of containers on each node.

5. Alibaba Cloud Practice

https://ecs.console.aliyun.com/

Cloud server ECS --> Create instance
Select a preemptible instance (cheap); for the Availability Zone, choose Availability Zone C rather than random allocation.

Click Admin Console
Rename the instances to master01, node01, and node02.

Use SecureCRT (SSH2) to connect to the public IP of each instance: master01, node01, node02.
Keep the sessions alive.

Search Google for "opsx" (Alibaba Cloud's open-source mirror site):

https://developer.aliyun.com/mirror/docker-ce?spm=a2c6h.13651102.0.0.24a01b11k30fre

Docker CE is the new name for the free Docker product. Docker CE includes the complete Docker platform and is ideal for developers and operations teams building container apps.

Download address: https://mirrors.aliyun.com/docker-ce/

Configuration method
Ubuntu 14.04/16.04 (install using apt-get)

# Step 1: install some necessary system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Step 2: install the GPG key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: add the software source information
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: update and install Docker CE
sudo apt-get -y update
sudo apt-get -y install docker-ce

# Installing a specific version of Docker CE:
# Step 1: list the available Docker CE versions:
# apt-cache madison docker-ce
#   docker-ce | 17.03.1~ce-0~ubuntu-xenial | https://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
#   docker-ce | 17.03.0~ce-0~ubuntu-xenial | https://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
# Step 2: install the specified version of Docker CE (VERSION is a string such as 17.03.1~ce-0~ubuntu-xenial above)
# sudo apt-get -y install docker-ce=[VERSION]

Enable the stopped-instance savings mode (no compute charges while the instance is stopped).

Remember: if you are not using the instances, stop them so you are not charged.

Origin blog.csdn.net/zgpeace/article/details/124055598