K8s Revisited (18): RKE

Introduction to RKE

Rancher Kubernetes Engine (RKE) is a CNCF-certified Kubernetes installer. RKE supports multiple operating systems, including macOS, Linux, and Windows, and runs on bare-metal servers (BMS) as well as virtualized servers.

Other Kubernetes deployment tools on the market share a common problem: they have many prerequisites. For example, before using them you must install the kubelet, configure networking, and work through a series of other tedious steps. RKE simplifies deployment down to a single prerequisite: as long as you are running a version of Docker that RKE supports, you can use RKE to install, deploy, and run a Kubernetes cluster. RKE can be used standalone as a tool for creating Kubernetes clusters, or together with Rancher 2.x as the component Rancher uses to deploy and run Kubernetes clusters.
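
For illustration, Rancher publishes version-pinned Docker install scripts that are a common way to satisfy this one prerequisite. A minimal sketch, assuming the 20.10 script path (verify the script name and supported Docker versions against the RKE support matrix for your release):

```bash
# Install a Docker version supported by RKE (the 20.10 script path is an
# assumption; check https://github.com/rancher/install-docker for yours)
curl -fsSL https://releases.rancher.com/install-docker/20.10.sh | sh

# Confirm the Docker server version before running RKE
docker version --format '{{.Server.Version}}'
```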


Creating the Cluster Configuration File

RKE uses a cluster configuration file, cluster.yml, to plan the nodes in the cluster: which nodes the cluster should contain, and how Kubernetes should be deployed on them. You can change many cluster configuration options through this file. The code examples in the RKE documentation assume the cluster contains only one node.

There are two ways to create the cluster configuration file cluster.yml:

  • Use the minimal cluster.yml as a starting point, then add the information for your own nodes to the file.
  • Use the rke config command to create the file, entering the cluster parameters one at a time.

Using rke config

Run the rke config command to create a cluster.yml file in the current directory. The command prompts you for all of the parameters needed to create a cluster; see Cluster Configuration Options for details.

```bash
rke config --name cluster.yml
```

Other Configuration Options

Adding --empty to the command above creates a blank cluster configuration file instead.

```bash
rke config --empty --name cluster.yml
```

You can also use --print to display the contents of the cluster.yml file.

```bash
rke config --print
```
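
A sketch of how these flags combine in practice. The --list-version flag for printing the Kubernetes versions a given RKE build supports is an assumption here; confirm it with rke config --help for your release:

```bash
# Generate a blank template, fill it in by hand, then review the result
rke config --empty --name cluster.yml
rke config --print

# Assumed flag: list the Kubernetes versions this RKE binary supports
rke config --list-version --all
```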

High-Availability Clusters

RKE supports high-availability clusters: you can configure multiple controlplane nodes in the cluster.yml file. RKE deploys the master components on every node listed with the controlplane role, and configures the kubelets to connect to 127.0.0.1:6443 by default. That is the address of the nginx-proxy, which forwards requests to all of the master nodes.

To create a high-availability cluster, specify two or more nodes with the controlplane role, as in the sketch below.
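
A minimal node layout for an HA cluster, assuming four hypothetical addresses and the ubuntu SSH user (an odd number of etcd nodes, typically three, is recommended for quorum):

```yaml
nodes:
  - address: 10.0.0.1        # controlplane/etcd node 1
    user: ubuntu
    role:
      - controlplane
      - etcd
  - address: 10.0.0.2        # controlplane/etcd node 2
    user: ubuntu
    role:
      - controlplane
      - etcd
  - address: 10.0.0.3        # controlplane/etcd node 3
    user: ubuntu
    role:
      - controlplane
      - etcd
  - address: 10.0.0.4        # dedicated worker
    user: ubuntu
    role:
      - worker
```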

Certificates

Available as of v0.2.0

By default, Kubernetes clusters require certificates, and RKE automatically generates certificates for all cluster components. You can also use custom certificates. After the cluster is deployed, you can manage these auto-generated certificates; see Managing Auto-Generated Certificates for details.
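
For example, RKE v0.2.0 and later can rotate the auto-generated certificates in place. A sketch of the common cases (run rke cert --help to confirm the subcommands and service names available in your release):

```bash
# Rotate the certificates of all services, reusing the existing CA
rke cert rotate

# Rotate the certificate of a single service, e.g. the kubelet
rke cert rotate --service kubelet

# Rotate the CA itself together with all service certificates
rke cert rotate --rotate-ca
```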


Deploying a Kubernetes Cluster with RKE

Once the cluster.yml file has been created, you can deploy the cluster with the following command. By default, the command assumes cluster.yml is saved in the directory where you run it.

```bash
rke up

INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [10.0.0.1]
INFO[0000] [network] Deploying port listener containers
INFO[0000] [network] Pulling image [alpine:latest] on host [10.0.0.1]
...
INFO[0101] Finished building Kubernetes cluster successfully
```

If the last line the command prints is Finished building Kubernetes cluster successfully, the cluster was deployed successfully and is ready to use. During cluster creation, RKE also writes a kubeconfig file named kube_config_cluster.yml, which you can use to control the Kubernetes cluster.

Note

If your cluster configuration file was not named cluster.yml, the generated kube_config file is named accordingly: kube_config_<FILE_NAME>.yml.
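
As a quick smoke test with the generated kubeconfig (kubectl is assumed to be installed separately):

```bash
# Point kubectl at the cluster RKE just built
export KUBECONFIG=$PWD/kube_config_cluster.yml
kubectl get nodes
kubectl get pods --all-namespaces
```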


Saving Your Files

Important

Save all of the files listed below; they are needed to maintain the cluster, troubleshoot problems, and upgrade the cluster.

Copy these files and store them in a secure location:

  • cluster.yml: the RKE cluster configuration file.
  • kube_config_cluster.yml: the kubeconfig file for the cluster; it contains credentials with full access to the cluster.
  • cluster.rkestate: the Kubernetes cluster state file; it also contains credentials with full access to the cluster. This file is only created when using RKE v0.2.0 or later.
Note

The names of the kube_config_cluster.yml and cluster.rkestate files depend on how you named the RKE cluster configuration file. If you changed the configuration file's name, the names of these two files will differ from those listed above.
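
A minimal backup sketch; the destination path is hypothetical, so substitute whatever secure storage your environment provides:

```bash
# Copy the three cluster artifacts to a (hypothetical) secure location
BACKUP_DIR=/secure/backups/rke/$(date +%F)
mkdir -p "$BACKUP_DIR"
cp cluster.yml kube_config_cluster.yml cluster.rkestate "$BACKUP_DIR"
```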

The Kubernetes Cluster State File

The Kubernetes cluster state file is composed of the cluster configuration file cluster.yml and the certificates of the cluster components. Different RKE versions store this state in different places.

RKE v0.2.0 and later create a .rkestate file in the same directory as the cluster configuration file cluster.yml. The file contains the current state of the cluster, the RKE configuration, and the certificates. Keep a copy of it in a safe place.

RKE versions prior to v0.2.0 stored the cluster state as a Kubernetes secret. When updating the cluster state, RKE pulled the secret, modified the state, and saved the new state back as a secret.
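
If the local state file is ever lost, newer RKE releases can attempt to recover it from the cluster itself. The subcommand below is an assumption; verify it with rke util --help for your release:

```bash
# Assumed subcommand: rebuild cluster.rkestate from the running cluster
rke util get-state-file
```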

Related Operations

After completing the RKE installation, you may still need to perform two related follow-up operations.


Example cluster.yml Files

You can set many configuration options by editing RKE's cluster configuration file, cluster.yml. A minimal example file and a complete example file are shown below.

**Note:** If you are using Rancher v2.0.5 or v2.0.6 and configuring cluster options through a cluster configuration file, service names must not contain any characters other than letters and underscores (for example, kube_api rather than kube-api).

Minimal Example

```yaml
nodes:
  - address: 1.2.3.4
    user: ubuntu
    role:
      - controlplane
      - etcd
      - worker
```
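
Assuming the file above is saved as cluster.yml, the single node can be deployed directly; the --config flag is shown explicitly here, but it defaults to cluster.yml in the current directory:

```bash
rke up --config cluster.yml
```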

Complete Example

```yaml
nodes:
  - address: 1.1.1.1
    user: ubuntu
    role:
      - controlplane
      - etcd
    port: 2222
    docker_socket: /var/run/docker.sock
  - address: 2.2.2.2
    user: ubuntu
    role:
      - worker
    ssh_key_path: /home/user/.ssh/id_rsa
    ssh_key: |-
      -----BEGIN RSA PRIVATE KEY-----
      -----END RSA PRIVATE KEY-----
    ssh_cert_path: /home/user/.ssh/test-key-cert.pub
    ssh_cert: |-
      ssh-rsa-cert-v01@openssh.com AAAAHHNzaC1yc2EtY2VydC12MDFAb3Bl...
  - address: example.com
    user: ubuntu
    role:
      - worker
    hostname_override: node3
    internal_address: 192.168.1.6
    labels:
      app: ingress
    taints:
      - key: test-key
        value: test-value
        effect: NoSchedule

# If set to true, RKE will not fail when unsupported Docker versions
# are found
ignore_docker_version: false

# Enable running cri-dockerd
# Up to Kubernetes 1.23, kubelet contained code called dockershim
# to support Docker runtime. The replacement is called cri-dockerd
# and should be enabled if you want to keep using Docker as your
# container runtime
# Only available to enable in Kubernetes 1.21 and higher
enable_cri_dockerd: true

# Cluster level SSH private key
# Used if no ssh information is set for the node
ssh_key_path: ~/.ssh/test

# Enable use of SSH agent to use SSH private keys with passphrase
# This requires the environment SSH_AUTH_SOCK configured pointing
# to your SSH agent which has the private key added
ssh_agent_auth: true

# List of registry credentials
# If you are using a Docker Hub registry, you can omit the url
# or set it to docker.io
# is_default set to true will override the system default
# registry set in the global settings
private_registries:
  - url: registry.com
    user: Username
    password: password
    is_default: true

# Bastion/Jump host configuration
bastion_host:
  address: x.x.x.x
  user: ubuntu
  port: 22
  ssh_key_path: /home/user/.ssh/bastion_rsa
  # or
  # ssh_key: |-
  #   -----BEGIN RSA PRIVATE KEY-----
  #
  #   -----END RSA PRIVATE KEY-----

# Set the name of the Kubernetes cluster
cluster_name: mycluster

# The Kubernetes version used. The default versions of Kubernetes
# are tied to specific versions of the system images.
#
# For RKE v0.2.x and below, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go
#
# For RKE v0.3.0 and above, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_rke_system_images.go
#
# In case the kubernetes_version and kubernetes image in
# system_images are defined, the system_images configuration
# will take precedence over kubernetes_version.
kubernetes_version: v1.10.3-rancher2

# System Images are defaulted to a tag that is mapped to a specific
# Kubernetes Version and not required in a cluster.yml.
# Each individual system image can be specified if you want to use a different tag.
#
# For RKE v0.2.x and below, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go
#
# For RKE v0.3.0 and above, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_rke_system_images.go
#
system_images:
  kubernetes: rancher/hyperkube:v1.10.3-rancher2
  etcd: rancher/coreos-etcd:v3.1.12
  alpine: rancher/rke-tools:v0.1.9
  nginx_proxy: rancher/rke-tools:v0.1.9
  cert_downloader: rancher/rke-tools:v0.1.9
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.9
  kubedns: rancher/k8s-dns-kube-dns-amd64:1.14.8
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.8
  kubedns_sidecar: rancher/k8s-dns-sidecar-amd64:1.14.8
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler-amd64:1.0.0
  pod_infra_container: rancher/pause-amd64:3.1

services:
  etcd:
    # Custom uid/gid for etcd directory and files
    uid: 52034
    gid: 52034
    # if external etcd is used
    # path: /etcdcluster
    # external_urls:
    #   - https://etcd-example.com:2379
    # ca_cert: |-
    #   -----BEGIN CERTIFICATE-----
    #   xxxxxxxxxx
    #   -----END CERTIFICATE-----
    # cert: |-
    #   -----BEGIN CERTIFICATE-----
    #   xxxxxxxxxx
    #   -----END CERTIFICATE-----
    # key: |-
    #   -----BEGIN PRIVATE KEY-----
    #   xxxxxxxxxx
    #   -----END PRIVATE KEY-----

  # Note for Rancher v2.0.5 and v2.0.6 users: If you are configuring
  # Cluster Options using a Config File when creating Rancher Launched
  # Kubernetes, the names of services should contain underscores
  # only: kube_api.
  kube-api:
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-controller
    service_cluster_ip_range: 10.43.0.0/16
    # Expose a different port range for NodePort services
    service_node_port_range: 30000-32767
    pod_security_policy: false
    # Encrypt secret data at Rest
    # Available as of v0.3.1
    secrets_encryption_config:
      enabled: true
      custom_config:
        apiVersion: apiserver.config.k8s.io/v1
        kind: EncryptionConfiguration
        resources:
          - resources:
              - secrets
            providers:
              - aescbc:
                  keys:
                    - name: k-fw5hn
                      secret: RTczRjFDODMwQzAyMDVBREU4NDJBMUZFNDhCNzM5N0I=
              - identity: {}
    # Enable audit logging
    # Available as of v1.0.0
    audit_log:
      enabled: true
      configuration:
        max_age: 6
        max_backup: 6
        max_size: 110
        path: /var/log/kube-audit/audit-log.json
        format: json
        policy:
          apiVersion: audit.k8s.io/v1 # This is required.
          kind: Policy
          omitStages:
            - "RequestReceived"
          rules:
            # Log pod changes at RequestResponse level
            - level: RequestResponse
              resources:
                - group: ""
                  # Resource "pods" doesn't match requests to any subresource of pods,
                  # which is consistent with the RBAC policy.
                  resources: ["pods"]
    # Using the EventRateLimit admission control enforces a limit on the number of events
    # that the API Server will accept in a given time period
    # Available as of v1.0.0
    event_rate_limit:
      enabled: true
      configuration:
        apiVersion: eventratelimit.admission.k8s.io/v1alpha1
        kind: Configuration
        limits:
          - type: Server
            qps: 6000
            burst: 30000
    # Enable AlwaysPullImages Admission controller plugin
    # Available as of v0.2.0
    always_pull_images: false
    # Add additional arguments to the kubernetes API server
    # This WILL OVERRIDE any existing defaults
    extra_args:
      # Enable audit log to stdout
      audit-log-path: "-"
      # Increase number of delete workers
      delete-collection-workers: 3
      # Set the level of log output to debug-level
      v: 4

  # Note for Rancher 2 users: If you are configuring Cluster Options
  # using a Config File when creating Rancher Launched Kubernetes,
  # the names of services should contain underscores only:
  # kube_controller. This only applies to Rancher v2.0.5 and v2.0.6.
  kube-controller:
    # CIDR pool used to assign IP addresses to pods in the cluster
    cluster_cidr: 10.42.0.0/16
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-api
    service_cluster_ip_range: 10.43.0.0/16
    # Add additional arguments to the kubernetes controller manager
    # This WILL OVERRIDE any existing defaults
    extra_args:
      # Set the level of log output to debug-level
      v: 4
      # Enable RotateKubeletServerCertificate feature gate
      feature-gates: RotateKubeletServerCertificate=true
      # Enable TLS Certificates management
      # https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
      cluster-signing-cert-file: "/etc/kubernetes/ssl/kube-ca.pem"
      cluster-signing-key-file: "/etc/kubernetes/ssl/kube-ca-key.pem"

  kubelet:
    # Base domain for the cluster
    cluster_domain: cluster.local
    # IP address for the DNS service endpoint
    cluster_dns_server: 10.43.0.10
    # Fail if swap is on
    fail_swap_on: false
    # Configure pod-infra-container-image argument
    pod-infra-container-image: "k8s.gcr.io/pause:3.2"
    # Generate a certificate signed by the kube-ca Certificate Authority
    # for the kubelet to use as a server certificate
    # Available as of v1.0.0
    generate_serving_certificate: true
    extra_args:
      # Set max pods to 250 instead of default 110
      max-pods: 250
      # Enable RotateKubeletServerCertificate feature gate
      feature-gates: RotateKubeletServerCertificate=true
    # Optionally define additional volume binds to a service
    extra_binds:
      - "/usr/libexec/kubernetes/kubelet-plugins:/usr/libexec/kubernetes/kubelet-plugins"

  scheduler:
    extra_args:
      # Set the level of log output to debug-level
      v: 4

  kubeproxy:
    extra_args:
      # Set the level of log output to debug-level
      v: 4

# Currently, only authentication strategy supported is x509.
# You can optionally create additional SANs (hostnames or IPs) to
# add to the API server PKI certificate.
# This is useful if you want to use a load balancer for the
# control plane servers.
authentication:
  strategy: x509
  sans:
    - "10.18.160.10"
    - "my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com"

# Kubernetes Authorization mode
# Use mode: rbac to enable RBAC
# Use mode: none to disable authorization
authorization:
  mode: rbac

# If you want to set a Kubernetes cloud provider, you specify
# the name and configuration
cloud_provider:
  name: aws

# Add-ons are deployed using kubernetes jobs. RKE will give
# up on trying to get the job status after this timeout in seconds.
addon_job_timeout: 30

# Specify network plugin (canal, calico, flannel, weave, or none)
network:
  plugin: canal
  # Specify MTU
  mtu: 1400
  options:
    # Configure interface to use for Canal
    canal_iface: eth1
    canal_flannel_backend_type: vxlan
    # Available as of v1.2.6
    canal_autoscaler_priority_class_name: system-cluster-critical
    canal_priority_class_name: system-cluster-critical
  # Available as of v1.2.4
  tolerations:
    - key: "node.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationseconds: 300
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"
      effect: "NoExecute"
      tolerationseconds: 300
  # Available as of v1.1.0
  update_strategy:
    strategy: RollingUpdate
    rollingUpdate:
      maxUnavailable: 6

# Specify DNS provider (coredns or kube-dns)
dns:
  provider: coredns
  # Available as of v1.1.0
  update_strategy:
    strategy: RollingUpdate
    rollingUpdate:
      maxUnavailable: 20%
      maxSurge: 15%
  linear_autoscaler_params:
    cores_per_replica: 0.34
    nodes_per_replica: 4
    prevent_single_point_failure: true
    min: 2
    max: 3

# Specify monitoring provider (metrics-server)
monitoring:
  provider: metrics-server
  # Available as of v1.1.0
  update_strategy:
    strategy: RollingUpdate
    rollingUpdate:
      maxUnavailable: 8

# Currently only nginx ingress provider is supported.
# To disable ingress controller, set provider: none
# node_selector controls ingress placement and is optional
ingress:
  provider: nginx
  node_selector:
    app: ingress
  # Available as of v1.1.0
  update_strategy:
    strategy: RollingUpdate
    rollingUpdate:
      maxUnavailable: 5

# All add-on manifests MUST specify a namespace
addons: |-
  apiVersion: v1
  kind: Pod
  metadata:
    name: my-nginx
    namespace: default
  spec:
    containers:
      - name: my-nginx
        image: nginx
        ports:
          - containerPort: 80

addons_include:
  - https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-operator.yaml
  - https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-cluster.yaml
  - /path/to/manifest
```
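
When you later edit cluster.yml, for example to add or remove a node, re-running the same command reconciles the live cluster toward the updated file:

```bash
# RKE is declarative: re-running rke up applies cluster.yml changes
rke up --config cluster.yml
```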


Reposted from blog.csdn.net/qq_43762191/article/details/123439537