Video source: a Bilibili course, "Docker & k8s Tutorial" (billed as the best on the site, covering all the core knowledge of Docker and k8s).
These notes organize the instructor's course content together with my own testing notes, shared for everyone's benefit. Contact me for removal in case of infringement. Thank you for your support!
Companion summary post: K8S + DevOps Architect Practical Course | Summary
Meet Helm
- Why Helm?
- What is Helm?
Helm is the package manager for Kubernetes; it can be thought of as the apt-get/yum of the Kubernetes world.
- For application publishers, Helm packages applications, manages application dependencies and versions, and publishes applications to chart repositories.
- For users, Helm removes the need to understand Kubernetes YAML syntax or to write deployment files by hand; the required applications can be downloaded and installed onto Kubernetes directly through Helm.
In addition, Helm provides powerful deployment, deletion, upgrade, and rollback capabilities for applications running on Kubernetes.
- Helm versions
- helm2
C/S architecture; helm interacts with k8s through the Tiller server.
- helm3
For security and ease of use, Helm 3 removed the Tiller server: Helm 3 authenticates against the APIServer directly using the kubeconfig file. The two-way merge was also upgraded to a three-way merge patch strategy (old configuration, live state, new configuration). For example:
$ helm install very_important_app ./very_important_app
The replica count of this application is set to 3. Now suppose someone accidentally executes kubectl edit, or:
$ kubectl scale --replicas=0 deployment/very_important_app
Someone on the team then notices that very_important_app is down for no apparent reason and tries to roll back:
$ helm rollback very_important_app
In Helm 2, this operation compares the old configuration with the new configuration and generates an update patch. Because the person who mis-operated only modified the live state of the application (the old configuration was never updated), there is no difference between the old and the new configuration (both say 3 replicas), so Helm generates no patch: the rollback does nothing and the replica count stays at 0. Helm 3's three-way merge also inspects the live state, detects the out-of-band change, and restores the replicas.
Other Helm 3 changes: the client-side local chart repository server was removed, and a release name must now be specified when installing an application (or generated randomly with --generate-name).
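The rollback scenario above can be sketched in a few lines of Python. This is a toy model of the diff logic only, not Helm's real implementation; configurations are simulated as plain dicts.

```python
# Toy sketch of Helm 2's two-way vs Helm 3's three-way merge during rollback.
# Configurations are simulated as plain dicts; this is NOT Helm's real code.

def two_way_patch(old_config, target_config):
    # Helm 2: diff the old recorded config against the rollback target only.
    diff = {k: v for k, v in target_config.items() if old_config.get(k) != v}
    return diff or None  # None means "no patch, nothing to do"

def three_way_patch(old_config, live_state, target_config):
    # Helm 3: also consult the live cluster state, so out-of-band edits
    # (kubectl edit / kubectl scale) are detected and reverted.
    diff = {k: v for k, v in target_config.items() if live_state.get(k) != v}
    return diff or None

old = {"replicas": 3}      # config recorded at install time
live = {"replicas": 0}     # after someone ran kubectl scale --replicas=0
target = {"replicas": 3}   # rollback target (identical to the old config)

print(two_way_patch(old, target))          # None: Helm 2 sees no diff, app stays down
print(three_way_patch(old, live, target))  # {'replicas': 3}: Helm 3 restores it
```

Because the old config and the rollback target are identical, the two-way diff is empty; only by consulting the live state does the out-of-band scale-to-zero become visible.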
- Important Helm concepts:
  - chart: a bundle of application information, including configuration templates for the various objects, parameter definitions, dependencies, documentation, and so on.
  - repository: a chart warehouse where charts are stored; it serves an index of the chart packages it holds so they can be queried. Helm can manage multiple different repositories at the same time.
  - release: installing a chart into a Kubernetes cluster produces a release, the running instance of the chart, representing a running application.
helm is a package management tool, and a package here means a chart. Helm can:
- Create a chart from scratch
- Interact with repositories: pull, save, and update charts
- Install and uninstall releases in a Kubernetes cluster
- Update, roll back, and test releases
Installation and Quick Start Practice
Download the latest stable release: https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz
For more versions, please refer to: https://github.com/helm/helm/releases
# On the k8s-master node
$ wget https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz
$ tar -zxf helm-v3.2.4-linux-amd64.tar.gz
$ cp linux-amd64/helm /usr/local/bin/
# Verify the installation
$ helm version
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}
$ helm env
# Add a chart repository
$ helm repo add stable https://mirror.azure.cn/kubernetes/charts/
# Sync the latest chart info to the local cache
$ helm repo update
Quick start practice:
Example 1: Install mysql application using helm
# Search the repositories for a chart package
$ helm search repo mysql
# Install from the repository
$ helm install mysql stable/mysql
$ helm ls
$ kubectl get all
# Download the chart package from the repository to the local machine
$ helm pull stable/mysql
$ tree mysql
Example 2: Create a new nginx chart and install it
$ helm create nginx
# Install from the local chart
$ helm install nginx ./nginx
# Install into another namespace, luffy
$ helm -n luffy install nginx ./nginx
# Check
$ helm ls
$ helm -n luffy ls
#
$ kubectl get all
$ kubectl -n luffy get all
Chart template syntax and development
nginx chart implementation analysis
Format:
$ tree nginx/
nginx/
├── charts # holds sub-charts
├── Chart.yaml # global definition of this chart
├── templates # resource manifest templates needed by the chart, rendered against the values
│ ├── deployment.yaml
│ ├── _helpers.tpl # global named templates, for convenient reuse in other templates
│ ├── hpa.yaml
│ ├── ingress.yaml
│ ├── NOTES.txt # message printed in the terminal after helm install completes
│ ├── serviceaccount.yaml
│ ├── service.yaml
│ └── tests
│ └── test-connection.yaml
└── values.yaml # default values used by the templates
Clearly, the resource manifests live under templates/, and their data comes from values.yaml. Installation fuses the templates with the values into resource manifests that k8s can recognize, then deploys them to the k8s environment.
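The fusing step can be illustrated with Python's string.Template. This is only a toy illustration of the idea — Helm actually renders Go templates, and the field names here are made up for the example.

```python
# Toy illustration of the render step: fuse a manifest template with values.
# Helm uses Go templates; string.Template here only mimics the idea.
from string import Template

manifest_template = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $release_name
spec:
  replicas: $replica_count
""")

# Stand-in for values.yaml plus release info
values = {"release_name": "nginx", "replica_count": 1}

rendered = manifest_template.substitute(values)
print(rendered)
```

The output is a plain Kubernetes manifest with every placeholder replaced, which is exactly the artifact Helm hands to the APIServer.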
Analysis of the template files:
- Referencing a named template and passing a scope: {{ include "nginx.fullname" . }} uses include to reference the named template defined in _helpers.tpl and passes in the top-level scope.
- Built-in objects, e.g. .Values and .Release.Name:
  - Release: describes the release itself, with several fields:
    - Release.Name: the name of the release
    - Release.Namespace: the namespace the release is installed into
    - Release.IsUpgrade: true if the current operation is an upgrade or rollback
    - Release.IsInstall: true if the current operation is an install
    - Release.Revision: the revision number of the release; 1 on install, incremented on every upgrade or rollback
    - Release.Service: the service rendering the current template; in Helm this is always "Helm"
  - Values: the values passed into the templates, from the chart's values.yaml file and from user-supplied values files
  - Chart: the contents of Chart.yaml; any data in that file can be accessed, e.g. {{ .Chart.Name }}-{{ .Chart.Version }} renders as mychart-0.1.0
- Template definition. {{- removes whitespace and newlines to the left; -}} removes them to the right:
{{- define "nginx.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
  {{ if eq .Values.favorite.drink "coffee" }}mug: true{{ end }}
After rendering:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mychart-1575971172-configmap
data:
  myvalue: "Hello World"
  drink: "coffee"
  food: "PIZZA"
  mug: true
- Pipelines and functions. trunc truncates a string (63 is passed as an argument to trunc); trimSuffix removes a trailing "-":
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
nindent indents the content by the given number of spaces:
selector:
  matchLabels:
    {{- include "nginx.selectorLabels" . | nindent 6 }}
lower lowercases the content; quote wraps it in double quotes:
value: {{ include "mytpl" . | lower | quote }}
- Conditionals. Each if pairs with exactly one end:
{{- if .Values.fullnameOverride }}
...
{{- else }}
...
{{- end }}
Conditionals are usually used to control what the template renders according to switches defined in values.yaml:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
- Defining variables, which can then be referenced by name in templates:
{{- $name := default .Chart.Name .Values.nameOverride }}
- Iterating over values data with with:
{{- with .Values.nodeSelector }}
nodeSelector:
  {{- toYaml . | nindent 8 }}
{{- end }}
toYaml handles escaping and special characters in the value, e.g. "http://kubernetes.io/role"=master or name="value1,value2".
- default sets a default value:
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
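The control flow of the nginx.fullname template above (default / contains / trunc / trimSuffix) can be mimicked in plain Python. This is for illustration only — Helm renders Go templates, and these helper names are my own:

```python
# Python re-implementation (illustration only) of the "nginx.fullname"
# named template: default / contains / trunc 63 / trimSuffix "-".

def trunc(s, n):
    return s[:n]

def trim_suffix(s, suffix):
    return s[:-len(suffix)] if s.endswith(suffix) else s

def nginx_fullname(release_name, chart_name, name_override=None, fullname_override=None):
    if fullname_override:  # {{- if .Values.fullnameOverride }}
        return trim_suffix(trunc(fullname_override, 63), "-")
    # {{- $name := default .Chart.Name .Values.nameOverride }}
    name = name_override or chart_name
    if name in release_name:  # {{- if contains $name .Release.Name }}
        return trim_suffix(trunc(release_name, 63), "-")
    return trim_suffix(trunc(f"{release_name}-{name}", 63), "-")

print(nginx_fullname("nginx-3", "nginx"))  # nginx-3: release name already contains chart name
print(nginx_fullname("myrel", "nginx"))    # myrel-nginx
print(nginx_fullname("myrel", "nginx", fullname_override="custom-name"))  # custom-name
```

The trunc-to-63 plus trimSuffix dance exists because Kubernetes object names are capped at 63 characters and must not end with a dash.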
hpa.yaml:
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "nginx.fullname" . }}
  labels:
    {{- include "nginx.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "nginx.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}
Passing values when installing an application
- The --set method
# Change the replica count and resource values
$ helm install nginx-2 ./nginx --set replicaCount=2 --set resources.limits.cpu=200m --set resources.limits.memory=256Mi
- The values-file method
$ cat nginx-values.yaml
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 80
ingress:
  enabled: true
  hosts:
    - host: chart-example.luffy.com
      paths:
        - /
$ helm install -f nginx-values.yaml nginx-3 ./nginx
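When --set and values files are combined, Helm layers them: chart values.yaml defaults are overridden by -f files, which are in turn overridden by --set. The sketch below mimics that layering with a simplified deep merge (not Helm's actual implementation):

```python
# Sketch of how Helm layers values (lowest to highest precedence):
#   chart values.yaml  <  -f nginx-values.yaml  <  --set key=val
# This is a simplified deep-merge, not Helm's actual implementation.

def deep_merge(base, override):
    out = dict(base)
    for k, v in override.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = deep_merge(out[k], v)  # merge nested maps key by key
        else:
            out[k] = v                      # scalars/lists are replaced wholesale
    return out

chart_defaults = {"replicaCount": 1, "autoscaling": {"enabled": False, "maxReplicas": 100}}
values_file    = {"autoscaling": {"enabled": True, "maxReplicas": 3}}   # -f nginx-values.yaml
set_flags      = {"replicaCount": 2}                                    # --set replicaCount=2

effective = deep_merge(deep_merge(chart_defaults, values_file), set_flags)
print(effective)
# {'replicaCount': 2, 'autoscaling': {'enabled': True, 'maxReplicas': 3}}
```

You can inspect the real merged result for a release with helm get values <release> --all.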
More syntax reference:
https://helm.sh/docs/topics/charts/
The problem of the failed mysql deployment
Hands-on: use Helm to deploy Harbor as an image and chart repository
Harbor deployment (pitfalls included)
Architecture https://github.com/goharbor/harbor/wiki/Architecture-Overview-of-Harbor
- Core, the core component:
  - API Server: receives and processes user requests
  - Config Manager: all system configuration, such as authentication, email, and certificate settings
  - Project Manager: project management
  - Quota Manager: quota management
  - Chart Controller: chart management
  - Replication Controller: image replication controller; can synchronize images with different kinds of registries (Docker Distribution (docker registry), Docker Hub, ...)
  - Scan Manager: scan management; pulls in third-party components to perform image security scanning
  - Registry Driver: image registry driver, currently docker registry
- Job Service: executes asynchronous tasks, such as synchronizing image information
- Log Collector: unified log collector, gathering the logs of each module
- GC Controller
- ChartMuseum: chart repository service (third party)
- Docker Registry: image registry service
- kv-storage: redis cache, used by the Job Service to store job metadata
- local/remote storage: storage services, e.g. for image storage
- SQL Database: postgresql, storing metadata such as users and projects
Harbor is usually used as an enterprise-grade image registry service, though its actual feature set is much broader.
Since there are many components, deploy it with helm:
# Add the harbor chart repository
$ helm repo add harbor https://helm.goharbor.io
# Search for the harbor chart
$ helm search repo harbor
# Not sure how to deploy it, so pull it locally to inspect
$ helm pull harbor/harbor --version 1.4.1
Create a PVC:
$ kubectl create namespace harbor
$ cat harbor-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-pvc
  namespace: harbor
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: dynamic-cephfs
  resources:
    requests:
      storage: 20Gi
Modify the harbor configuration:
- Enable ingress access
- externalURL: the web access entry, same as the ingress domain name
- Persistence: use cephfs via the PVC
- harborAdminPassword: "Harbor12345"; the default administrator account is admin/Harbor12345
- Enable chartmuseum
- Clair and trivy vulnerability-scanning components: not enabled for now
Install with helm:
# Install using the local chart
$ helm install harbor ./harbor -n harbor
Pit 1: permissions on the Redis persistent data directory prevent logging in
The redis data directory, /var/lib/redis, needs its owner set to the redis user and group:
initContainers:
- name: "change-permission-of-directory"
  image: {{ .Values.database.internal.initContainerImage.repository }}:{{ .Values.database.internal.initContainerImage.tag }}
  imagePullPolicy: {{ .Values.imagePullPolicy }}
  command: ["/bin/sh"]
  args: ["-c", "chown -R 999:999 /var/lib/redis"]
  volumeMounts:
  - name: data
    mountPath: /var/lib/redis
    subPath: {{ $redis.subPath }}
Pit 2: permissions on the registry component's image storage directory cause image pushes to fail
The registry's image storage directory needs its owner set to the registry user and group, otherwise pushing images fails:
initContainers:
- name: "change-permission-of-directory"
  image: {{ .Values.database.internal.initContainerImage.repository }}:{{ .Values.database.internal.initContainerImage.tag }}
  imagePullPolicy: {{ .Values.imagePullPolicy }}
  command: ["/bin/sh"]
  args: ["-c", "chown -R 10000:10000 {{ .Values.persistence.imageChartStorage.filesystem.rootdirectory }}"]
  volumeMounts:
  - name: registry-data
    mountPath: {{ .Values.persistence.imageChartStorage.filesystem.rootdirectory }}
    subPath: {{ .Values.persistence.persistentVolumeClaim.registry.subPath }}
Pit 3: permissions on the chartmuseum storage directory cause chart pushes to fail
initContainers:
- name: "change-permission-of-directory"
  image: {{ .Values.database.internal.initContainerImage.repository }}:{{ .Values.database.internal.initContainerImage.tag }}
  imagePullPolicy: {{ .Values.imagePullPolicy }}
  command: ["/bin/sh"]
  args: ["-c", "chown -R 10000:10000 /chart_storage"]
  volumeMounts:
  - name: chartmuseum-data
    mountPath: /chart_storage
    subPath: {{ .Values.persistence.persistentVolumeClaim.chartmuseum.subPath }}
After updating the configuration, upgrade the release:
$ helm upgrade harbor -n harbor ./
Push the image to the Harbor repository
Configure hosts and the docker insecure registry:
$ cat /etc/hosts
...
172.21.51.67 k8s-master core.harbor.domain
...
$ cat /etc/docker/daemon.json
{
  "insecure-registries": [
    "172.21.51.67:5000",
    "core.harbor.domain"
  ],
  "registry-mirrors" : [
    "https://8xpk5wnt.mirror.aliyuncs.com"
  ]
}
# Restart docker to apply the configuration
$ systemctl restart docker
# Log in with the account/password admin/Harbor12345
$ docker login core.harbor.domain
$ docker tag nginx:alpine core.harbor.domain/library/nginx:alpine
$ docker push core.harbor.domain/library/nginx:alpine
Push a chart to the Harbor repository
helm3 does not ship with the helm push plugin, so it must be installed manually. Plugin address: https://github.com/chartmuseum/helm-push
Install the plugin:
$ helm plugin install https://github.com/chartmuseum/helm-push
Offline installation:
$ helm plugin install ./helm-push
Add the repo:
$ helm repo add myharbor https://harbor.luffy.com/chartrepo/luffy
# An x509 error occurs
# Add certificate trust; the root certificate is the one configured for the ingress
$ kubectl get secret harbor-ingress -n harbor -o jsonpath="{.data.ca\.crt}" | base64 -d >harbor.ca.crt
$ cp harbor.ca.crt /etc/pki/ca-trust/source/anchors
$ update-ca-trust enable; update-ca-trust extract
# Add the repo again
$ helm repo add luffy https://harbor.luffy.com/chartrepo/luffy --ca-file=harbor.ca.crt --username admin --password Harbor12345
$ helm repo ls
Push the chart to the warehouse:
$ helm push harbor luffy --ca-file=harbor.ca.crt -u admin -p Harbor12345
View the chart in the harbor repository