Kubernetes: Deploying Harbor (latest edition)

Containers, images, and registries are often called the three basic components of container technology, so anyone running K8S cannot escape building an image registry; I don't think the need for a private image registry has to be restated here. This article walks through the full process of deploying Harbor as a private image registry in an experimental K8S environment.

Must Harbor be the image registry for K8S? Of course not, but once you compare the alternatives you will find that Harbor does well in almost every respect and has become practically the only choice, much as K8S has become the de facto standard for container orchestration with almost no better second option.

That is also why the author took pains to think this through, make sure the deployment succeeds, and dedicate this article to the reader.

Enough preamble; down to business. The experimental environment:

1. CentOS 7 minimal
2. K8S single master node, version 1.15.5 (1.16 introduced large changes, so the highest 1.15 release is used)
3. Helm 2.15
4. Harbor


Helm deployment
1. Install the Helm client


There are many ways to install Helm; here the binary is installed via the official script. Other installation methods are covered in the official documentation.

Method 1: use the official one-click install script:

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
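
Once the script finishes, a quick sanity check that the client binary is on the PATH (only the client will answer at this point, since Tiller is not installed yet):

helm version --client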

2. Install the Helm server (Tiller)

Note: install socat on every node in the K8S cluster (yum install -y socat), otherwise errors like the following will be reported:

error forwarding port 44134 to pod dc6da4ab99ad9c497c0cef1776b9dd18e0a612d507e2746ed63d36ef40f30174, uid : unable to do port forwarding: socat not found.
Error: cannot connect to Tiller

CentOS 7 normally has socat installed by default, so this can usually be skipped, but please confirm it yourself.
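
A convenience one-liner to confirm (and install if missing) on each node:

rpm -q socat || yum install -y socat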

Tiller runs as a Deployment inside the Kubernetes cluster; a single command completes the installation:

helm init

3. Authorize Tiller

Because the Helm server, Tiller, is a Deployment in the kube-system namespace, it connects to the Kubernetes API server to create and delete application resources.
Starting with Kubernetes 1.6, the API server enables RBAC authorization. By default the Tiller deployment does not define an authorized ServiceAccount, so its requests to the API server will be refused. We therefore need to explicitly grant authorization to the Tiller deployment.
Create a Kubernetes ServiceAccount for Tiller and bind it to a role:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
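
If you prefer declarative manifests, the same two objects can be written as YAML and applied with kubectl apply -f; this is a sketch equivalent to the two commands above:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system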

Use kubectl patch to update the Tiller deployment to use this ServiceAccount:

kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
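
As an aside, if the ServiceAccount is created before running helm init, the patch step can be skipped by passing the account name directly; this is an alternative flow, not what was done above:

helm init --service-account tiller --upgrade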

Check whether the authorization took effect:

kubectl get deploy --namespace kube-system   tiller-deploy  --output yaml|grep  serviceAccount
    serviceAccount: tiller
    serviceAccountName: tiller

4. Verify that Tiller was installed successfully

kubectl -n kube-system get pods|grep tiller
tiller-deploy-6d68f5c78f-nql2z          1/1       Running   0          5m

helm version
Client: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}

Harbor installation

For details, see the official documentation at https://github.com/goharbor/harbor-helm.
Add the Harbor Helm repository:

helm repo add harbor https://helm.goharbor.io
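
Refreshing the local repository index afterwards ensures the search below sees the latest chart versions:

helm repo update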

The official tutorial assumes you are already an expert (my heart silently greeted it at this point), so the basic steps are spelled out in detail below:

1. Search for the Harbor chart:

helm search harbor


2. Download the chart locally so values.yaml can be modified:

helm fetch harbor/harbor

Extract the downloaded chart package, enter the directory, and edit values.yaml:

 tar zxvf harbor-1.2.1.tgz 
 cd harbor
 vim values.yaml

The parameters are all described in the official documentation, but for a beginner the only thing that needs changing is data persistence; leave everything else at the defaults and go back through the options one by one once you are familiar with them:

Change every storageClass entry in values.yaml to storageClass: "nfs", the storage class I deployed in advance.

If you missed that, go back and read my tutorial "Preliminary Kubernetes dynamic volume storage (NFS)": https://blog.51cto.com/kingda/2440315.

Of course, you can also make the change in this file with a single command:

sed -i 's#storageClass: ""#storageClass: "nfs"#g' values.yaml
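
You can confirm that the substitution touched every occurrence before moving on:

grep -n 'storageClass' values.yaml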


Leave everything else at the defaults and start the installation:

helm install --name harbor-v1 .  --wait --timeout 1500 --debug --namespace harbor

Because the PVs are created automatically and the PVCs may not bind as quickly as you expect, many pods will report errors at first; be a little patient and let them restart a few times until everything is ready.

The install command above may appear to hang for quite a while; be patient and wait until all pods have started successfully, because helm checks the status of every pod before the command returns.
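
While the install command waits, progress can be watched from a second terminal (the namespace matches the --namespace flag used above):

kubectl -n harbor get pods
kubectl -n harbor get pvc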


Since we installed with the default settings, the chart exposes the Harbor service through an Ingress by default. If you have not pre-installed an ingress controller, Harbor will still run normally, but you will not be able to access it.

The following therefore describes how to install an ingress controller.

The official K8S manifests are available upstream, but the complete one-shot installation manifest is pasted here directly:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  #replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
---

Save the manifest to a file and apply it with kubectl.
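
Assuming the manifest above was saved as ingress-nginx.yaml (the filename is arbitrary):

kubectl apply -f ingress-nginx.yaml
kubectl -n ingress-nginx get pods -o wide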

If you have pointed the chart's default ingress domain at any K8S node, you can log in directly with the default account and password.
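
For a quick test: in this chart the default ingress hostname is core.harbor.domain and the default admin credentials are admin / Harbor12345 (both configurable in values.yaml). Mapping the hostname to a node IP on the client machine is enough to reach the UI; the IP below is just a placeholder:

echo "192.168.1.10  core.harbor.domain" >> /etc/hosts

Then open https://core.harbor.domain in a browser (the chart generates a self-signed certificate by default, so expect a certificate warning).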

Original article: https://blog.51cto.com/kingda/2444261