Harbor 1.4.0 High-Availability Deployment

1. Introduction to Harbor

Harbor is an enterprise-class registry server for storing and distributing Docker images.

For image storage, Harbor uses the official docker registry service (named distribution as of v2). On top of docker distribution, Harbor adds security, access control, and management features to meet enterprise requirements for an image registry. Harbor organizes its components in docker-compose format and starts and stops them with the docker-compose tool.

The docker registry can use either local storage or S3 as its backend; on top of that, Harbor provides user permission management, image replication, and other features that make the registry more effective to use. Harbor's image replication works through the docker registry API, which hides the tedious low-level file operations, reuses existing docker registry functionality instead of reinventing the wheel, and avoids conflict and consistency problems.
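
As a concrete illustration, switching distribution's storage backend is a few lines of configuration; a hedged sketch of the two variants mentioned above (the file path, bucket, and credentials are placeholders, not Harbor defaults):

# filesystem backend, as used by Harbor by default (excerpt of the registry config):
cat > /tmp/registry-storage-example.yml <<'EOF'
storage:
  filesystem:
    rootdirectory: /storage
EOF

# an s3 backend would instead look like:
#   storage:
#     s3:
#       region: us-east-1
#       bucket: my-registry-bucket
#       accesskey: <access-key>
#       secretkey: <secret-key>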

1.1 Harbor Architecture

(Figure: Harbor architecture diagram)

The main components are:

  • Proxy: started as the nginx component. An nginx reverse proxy that forwards requests from the Notary client (image signing), the Docker client (image push/pull, etc.), and the browser (Core Service) to the backend services;
  • UI (Core Service): started as the harbor-ui component. Its data is stored in a MySQL database. It provides four sub-functions:
    • UI: the web management interface;
    • API: the API endpoints Harbor exposes;
    • Auth: the user authentication service; the user information in the decoded token is verified here. The auth backend can be db, ldap, or uaa;
    • Token service (not shown in the figure above): issues a token for each docker push/pull command according to the user's role in each project. If a request sent from the docker client to the registry carries no token, the registry redirects the request to the token service to create one.
  • Registry: started as the registry component. Stores image files and handles image pull/push commands. Harbor enforces access control on images: the registry forwards every client pull/push request to the token service to obtain a valid token.
  • Admin Service: started as the harbor-adminserver component. The system's configuration management center, which also checks storage usage; ui and jobservice load their configuration from adminserver at startup;
  • Job Service: started as the harbor-jobservice component. Responsible for image replication: it talks to registries, pulling an image from one registry and pushing it to another, and records the job_log;
  • Log Collector: started as the harbor-log component. Aggregates the logs of the other components via docker's log-driver;
  • Vulnerability Scanning: started as the clair component. Responsible for image scanning;
  • Notary: started as the notary component. Responsible for image signing;
  • DB: started as the harbor-db component. Stores metadata for projects, users, roles, replication policies, image scans, access control, and so on.


2. Harbor HA Deployment (dual docker-compose approach)

Harbor officially supports an HA deployment mode starting with version 1.4.0; for details see: https://github.com/vmware/harbor/blob/master/docs/high_availability_installation_guide.md

2.1 HA Architecture

Harbor's components can be divided according to whether they hold state.
Stateless components:

  • Proxy
  • UI
  • Registry
  • Adminserver
  • Jobservice
  • Logs
  • Clair
  • Notary (though Harbor does not yet support Notary in HA scenarios)

Stateful components:

  • Harbor database(MariaDB)
  • Clair database (PostgreSQL)
  • Notary database(MariaDB)
  • Redis

The basic idea of the HA setup is:

  • Deploy two copies of Harbor, each running the 7 stateless components: Adminserver, UI, Proxy, Log Collector, Registry, Jobservice, and Clair; expose a VIP in front of them with keepalived.
  • Use a single set of databases; their data can be protected by running them as a highly available cluster;
  • Point all registry instances at the same shared storage backend (such as NFS);
  • Let all UI instances share sessions through the same redis;
  • Notary does not yet support highly available deployment;

(Figure: Harbor HA architecture diagram)

2.2 Deployment

2.2.1 Start the stateful services

Note:
For ease of verification, the official documentation simply starts the stateful components as single-instance containers, like this:

docker run --name redis-server -p 6379:6379 -d redis
docker run -d --restart=always -e MYSQL_ROOT_PASSWORD=123456 -v /dcos/harbor-ha/mariadb:/var/lib/mysql:z -p 3306:3306 --name mariadb vmware/mariadb-photon:10.2.10
docker run -d -e POSTGRES_PASSWORD="123456" -p 5432:5432 postgres:9.6

I instead deployed the redis-ha, mariadb, and postgresql applications with helm; for how to use helm, see: http://blog.csdn.net/liukuan73/article/details/79319900

<1> Create persistent storage
The StorageClass used by the PVCs:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: two-replica-glusterfs-sc
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Retain
parameters:
  gidMax: "50000"
  gidMin: "40000"
  resturl: http://10.142.21.23:30088
  volumetype: replicate:2
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "123456"
#  secretNamespace: "default"
#  secretName: "heketi-secret"

Create a PVC for mariadb:

vim mariadb.pvc-sc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mariadb-pvc
spec:
  storageClassName: two-replica-glusterfs-sc
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi

kubectl create -f mariadb.pvc-sc.yaml

Create a PVC for postgresql:

vim postgresql.pvc-sc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgresql-pvc
spec:
  storageClassName: two-replica-glusterfs-sc
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi

kubectl create -f postgresql.pvc-sc.yaml

<2> Deploy redis-ha, mariadb, and postgresql with helm:

cp -r chart-master/stable/redis-ha /dcos/appstore/app-repo/local-charts/
cp -r chart-master/stable/mariadb /dcos/appstore/app-repo/local-charts/
cp -r chart-master/stable/postgresql /dcos/appstore/app-repo/local-charts/

cd /dcos/appstore/app-repo/local-charts

helm package redis-ha --save=false   
helm package mariadb --save=false
helm package postgresql --save=false

helm repo index --url=http://10.142.21.21:8879 .            
helm repo update

helm install --name austin-redis --set rbac.create=false,nodeSelector."node-type"=master,tolerations[0].key=master,tolerations[0].operator=Equal,tolerations[0].value=yes,tolerations[0].effect=NoSchedule local-charts/redis-ha
helm install --name austin-mariadb --set mariadbRootPassword=root,persistence.existingClaim=mariadb-pvc local-charts/mariadb
helm install --name austin-postgresql --set postgresUser=root,postgresPassword=root,persistence.existingClaim=postgresql-pvc,nodeSelector."node-type"=master,tolerations[0].key=master,tolerations[0].operator=Equal,tolerations[0].value=yes,tolerations[0].effect=NoSchedule local-charts/postgresql

Notes:

  1. redis is now reachable inside the cluster at austin-redis-redis-ha.default.svc.cluster.local
  2. mariadb is now reachable inside the cluster at austin-mariadb.default.svc.cluster.local
  3. postgresql is now reachable inside the cluster at austin-postgresql.default.svc.cluster.local:5432
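
A hedged way to sanity-check these addresses from inside the cluster (the client image tags are assumptions; the credentials are the ones set above):

kubectl run mariadb-check -it --rm --restart=Never --image=mariadb:10.2 -- \
  mysql -h austin-mariadb.default.svc.cluster.local -uroot -proot -e 'select 1'
kubectl run redis-check -it --rm --restart=Never --image=redis:4 -- \
  redis-cli -h austin-redis-redis-ha.default.svc.cluster.local ping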

<3> Create layer-4 forwarding to the stateful services for access from outside the cluster
Because the Harbor components are started with docker-compose and live outside the k8s cluster, TCP stream forwarding through the ingress-controller (started in hostNetwork mode) must also be created for redis-ha, mariadb, and postgresql so they can be reached from outside the cluster; see: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md. The corresponding ConfigMap is as follows:

tcp-services-configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: default
data:
  3306: "default/austin-mariadb:3306"
  6379: "default/austin-redis-redis-ha-master-svc:6379"
  5432: "default/austin-postgresql:5432"
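
The ConfigMap alone does nothing by itself: the nginx-ingress-controller has to be started with a flag pointing at it, after which (in hostNetwork mode) the node itself listens on 3306/6379/5432. A hedged sketch of the wiring (the namespace and file name are assumptions):

kubectl apply -f tcp-services-configmap.yaml

# the controller's container args must include (excerpt):
#   --tcp-services-configmap=default/tcp-services

# then verify from a machine outside the cluster (node IP is a placeholder):
mysql -h <node-ip> -P 3306 -uroot -proot -e 'select 1'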

2.2.2 Load the Harbor schema into the database

In the directory where the offline installer package was extracted:

# on the host: copy the schema into the mariadb container, then enter it
docker cp ha/registry.sql mariadb:/tmp/
docker exec -it mariadb /bin/bash

# inside the container: open a mysql shell
mysql -uroot -proot --default-character-set=utf8

# in the mysql shell: create the registry database and import the schema
create database if not exists registry DEFAULT CHARACTER SET = 'UTF8' DEFAULT COLLATE 'utf8_general_ci';
use registry
source /tmp/registry.sql

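A quick, hedged check that the import succeeded, run from the docker host:

docker exec mariadb mysql -uroot -proot -e 'use registry; show tables;'
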
2.2.3 Install and configure keepalived

<1> Install the build dependencies:

yum install -y gcc openssl-devel popt-devel

<2> Download the source package:

wget http://www.keepalived.org/software/keepalived-1.4.2.tar.gz
tar -zxvf keepalived-1.4.2.tar.gz

<3> Compile and install:

cd keepalived-1.4.2
mkdir /usr/local/keepalived
./configure --prefix=/usr/local/keepalived  
make && make install

<4> Configure keepalived:

cp keepalived/etc/init.d/keepalived /etc/init.d/
vim /etc/keepalived/keepalived.conf
For the content, see: https://github.com/vmware/harbor/blob/release-1.4.0/make/ha/sample/active_active/keepalived_active_active.conf
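
For orientation, a minimal sketch of the kind of configuration the official sample defines (the interface name, router id, and VIP below are placeholders; treat the linked file as authoritative):

cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_script check_harbor {
    script "/usr/local/bin/check.sh"   # health check, configured in the next step
    interval 5
    fall 2
}

vrrp_instance VI_1 {
    state BACKUP              # both nodes start as BACKUP; priority decides who holds the VIP
    interface eth0            # placeholder: the NIC that should carry the VIP
    virtual_router_id 51      # placeholder: must be identical on both nodes
    priority 30               # use a lower value (e.g. 20) on the second node
    advert_int 1
    virtual_ipaddress {
        192.168.1.220         # placeholder VIP
    }
    track_script {
        check_harbor
    }
}
EOF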

<5> Configure the health check script:

vim /usr/local/bin/check.sh
For the content, see: https://github.com/vmware/harbor/blob/release-1.4.0/make/ha/sample/active_active/check.sh
chmod +x /usr/local/bin/check.sh
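
The linked script is authoritative; as an illustrative stand-in (not the official script), a health check can be as simple as failing whenever the local Harbor proxy stops answering:

cat > /usr/local/bin/check.sh <<'EOF'
#!/bin/bash
# keepalived lowers this node's priority when this script exits non-zero
curl -fs -o /dev/null http://127.0.0.1:80/ || exit 1
EOF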

<6> Enable IP forwarding:

vim /etc/sysctl.conf
    net.ipv4.ip_forward = 1          # enable IP forwarding
    net.ipv4.ip_nonlocal_bind = 1    # allow binding to the VIP while the other node holds it
sysctl -p

<7> Restart keepalived and enable it at boot:

systemctl restart keepalived
systemctl enable keepalived

<8> Repeat the steps above to configure keepalived on the second node, setting priority in /etc/keepalived/keepalived.conf to 20; the node with the higher priority acquires the VIP.

2.2.4 Deploying harbor-1

<1> Edit harbor.cfg, changing the following settings:

hostname = <vip or fqdn>

db_host = <node1-ip>

redis_url = <redis-host>:6379

clair_db_host = <clairdb-host>
clair_db_password = 123456
clair_db_port = 5432
clair_db_username = postgres
clair_db = postgres

registry_storage_provider_name = filesystem

Note:
The registry backend here is filesystem; the backend storage directory /dcos/harbor/registry of both registries (chown 10000:10000 /dcos/harbor/registry) must be mounted on the same NFS share. For deploying and using NFS, see: http://blog.csdn.net/liukuan73/article/details/79649042
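
A hedged example of putting this directory on NFS on each Harbor node (the server address and export path are placeholders):

mkdir -p /dcos/harbor/registry
mount -t nfs <nfs-server>:/export/harbor-registry /dcos/harbor/registry
chown 10000:10000 /dcos/harbor/registry
# make the mount survive reboots:
echo '<nfs-server>:/export/harbor-registry /dcos/harbor/registry nfs defaults 0 0' >> /etc/fstab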

<2> Edit ha/docker-compose.yml

cp docker-compose.yml make/ha/
vim ha/docker-compose.yml (remove the mysql-related sections; everything else is the same as a single-node Harbor deployment)

The docker-compose.yml content is as follows:

version: '2'
services:
  log:
    image: vmware/harbor-log:v1.4.0
    container_name: harbor-log
    restart: always
    volumes:
      - /dcos/harbor/log/harbor/:/var/log/docker/:z
      - ./common/config/log/:/etc/logrotate.d/:z
    ports:
      - 127.0.0.1:1514:10514
    networks:
      - harbor
  registry:
    image: vmware/registry-photon:v2.6.2-v1.4.0
    container_name: registry
    restart: always
    volumes:
      - /dcos/harbor/registry:/storage:z
      - ./common/config/registry/:/etc/registry/:z
    networks:
      - harbor
    environment:
      - GODEBUG=netdns=cgo
    command:
      ["serve", "/etc/registry/config.yml"]
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "registry"
  adminserver:
    image: vmware/harbor-adminserver:v1.4.0
    container_name: harbor-adminserver
    env_file:
      - ./common/config/adminserver/env
    restart: always
    volumes:
      - /dcos/harbor/adminserver/data/config/:/etc/adminserver/config/:z
      - /dcos/harbor/adminserver/data/secretkey:/etc/adminserver/key:z
      - /dcos/harbor/adminserver/data/:/data/:z
    networks:
      - harbor
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "adminserver"
  ui:
    image: vmware/harbor-ui:v1.4.0
    container_name: harbor-ui
    env_file:
      - ./common/config/ui/env
    restart: always
    volumes:
      - ./common/config/ui/app.conf:/etc/ui/app.conf:z
      - ./common/config/ui/private_key.pem:/etc/ui/private_key.pem:z
      - ./common/config/ui/certificates/:/etc/ui/certificates/:z
      - /dcos/harbor/ui/secretkey:/etc/ui/key:z
      - /dcos/harbor/ui/ca_download/:/etc/ui/ca/:z
      - /dcos/harbor/ui/psc/:/etc/ui/token/:z
    networks:
      - harbor
    depends_on:
      - log
      - adminserver
      - registry
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "ui"
  jobservice:
    image: vmware/harbor-jobservice:v1.4.0
    container_name: harbor-jobservice
    env_file:
      - ./common/config/jobservice/env
    restart: always
    volumes:
      - /dcos/harbor/job_logs:/var/log/jobs:z
      - ./common/config/jobservice/app.conf:/etc/jobservice/app.conf:z
      - /dcos/harbor/secretkey:/etc/jobservice/key:z
    networks:
      - harbor
    depends_on:
      - ui
      - adminserver
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "jobservice"
  proxy:
    image: vmware/nginx-photon:v1.4.0
    container_name: nginx
    restart: always
    volumes:
      - ./common/config/nginx:/etc/nginx:z
    networks:
      - harbor
    ports:
      - 80:80
      - 443:443
      - 4443:4443
    depends_on:
      - registry
      - ui
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "proxy"
networks:
  harbor:
    external: false

The content of ha/docker-compose.clair.yml is as follows:

version: '2'
services:
  ui:
    networks:
      harbor-clair:
        aliases:
          - harbor-ui
  jobservice:
    networks:
      - harbor-clair
  registry:
    networks:
      - harbor-clair
  clair:
    networks:
      - harbor-clair
    container_name: clair
    image: vmware/clair-photon:v2.0.1-v1.4.0
    restart: always
    cpu_quota: 150000
    depends_on:
      - log
    volumes:
      - ./common/config/clair:/config
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "clair"
networks:
  harbor-clair:
    external: false

Note:
When install.sh runs, docker-compose.yml and docker-compose.clair.yml are copied from the ha directory to the directory above it before being used, so the files that need editing are the ones under the ha directory.

<3> Start harbor-1:

./install.sh --ha --with-clair 
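
Before continuing, it is worth checking that all containers came up and that the proxy answers locally:

docker-compose ps           # every harbor container should be in state "Up"
curl -sI http://127.0.0.1/  # the nginx proxy should answer on port 80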

<4> Adjust iptables:

iptables -t nat -A PREROUTING -p tcp -d <vip> --dport 80 -j REDIRECT
iptables -t nat -A PREROUTING -p tcp -d <vip> --dport 443 -j REDIRECT
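
These rules redirect traffic arriving for the VIP to the local proxy. Rules added this way are lost on reboot; one way to persist them (assuming the iptables-services package on CentOS):

service iptables save    # or: iptables-save > /etc/sysconfig/iptables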

<5> Pack the harbor directory into a tarball:

tar -cvf harbor_ha.tar harbor

<6> Copy the tarball to the remaining Harbor nodes:

scp harbor_ha.tar root@harbor-2:/dcos/install-addons/

2.2.5 Deploying harbor-2…n

<1> Start Harbor:

tar -xvf harbor_ha.tar
cd harbor
./install.sh --ha --with-clair 

<2> Adjust iptables:

iptables -t nat -A PREROUTING -p tcp -d <vip> --dport 80 -j REDIRECT
iptables -t nat -A PREROUTING -p tcp -d <vip> --dport 443 -j REDIRECT

3. Harbor HA Deployment (kubernetes approach)

3.1 Deployment notes

The kubernetes approach achieves a "certain degree" of high availability through kubernetes lifecycle management combined with the platform's persistent storage.

The official deployment does not provide redis, so only a single UI instance can run; with multiple instances and no redis for shared sessions, sessions would be lost.

MySQL is not deployed in cluster mode here, and only one instance can be started this way (multiple instances sharing the same backing data would conflict, since the data cannot be synchronized in time). However, the instance's data lives on shared storage (such as glusterfs), which effectively makes MySQL stateless: if the instance dies, a new one is automatically restarted on the same or another host and continues with the existing data.

The k8s approach replaces nginx with an ingress to implement the proxy.

3.2 Deployment

<1> Edit make/harbor.cfg:

hostname = registry.dcos:30099
db_password = 123456
clair_db_password = 123456
harbor_admin_password = 123456
auth_mode = db_auth

Note:
The make directory is in the Harbor source tree; the Harbor release package does not contain it.

<2> Load the Harbor images on every node:

docker load < harbor.v1.2.0.tar.gz
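
With many nodes this is easier to script; a hedged sketch (node names and paths are placeholders):

for node in node1 node2 node3; do
  scp harbor.v1.2.0.tar.gz root@$node:/tmp/
  ssh root@$node 'docker load < /tmp/harbor.v1.2.0.tar.gz'
done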

Note:
Although the latest Harbor release is currently 1.4.0, the official harbor_on_kubernetes documentation only supports up to 1.2.0 (harbor_on_kubernetes in 1.4.0 also still uses the 1.2.0 images). I tried the 1.4.0 Harbor images and they indeed fail to start with the kubernetes yaml files provided for 1.2. For example, the 1.4 adminserver crashes at startup because a parameter is empty; looking at the code, 1.4 has more parameters than 1.2, as shown below:
(Figure: comparison of adminserver parameters between versions 1.2 and 1.4)
The other components have the same problem, so unless you want to spend time working out exactly which configs need changing, stick with the 1.2.0 images until the official support is updated.

<3> Adjust the basic configuration:
Adjust the deployment, service, and PVC configuration to your needs, for example increasing the number of pods so that redundancy provides high availability.

make/kubernetes/**/*.svc.yaml: Specify the service of pods.
make/kubernetes/**/*.deploy.yaml: Specify configs of containers.
make/kubernetes/pv/*.pvc.yaml: Persistent Volume Claim.

<4> Create persistent volumes:
The pv directory contains both pv and pvc manifests. Here I did not follow the official pv+pvc approach; instead I used storageclass+pvc, with glusterfs behind the StorageClass. I will not repeat the details; see my earlier article: http://blog.csdn.net/liukuan73/article/details/78511697

Create a StorageClass with two replicas and a Retain reclaim policy, two-replica-glusterfs-sc.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: two-replica-glusterfs-sc
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Retain
parameters:
  gidMax: "50000"
  gidMin: "40000"
  resturl: http://<heketiIP>:<nodePort>
  volumetype: replicate:2
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "123456"
#  secretNamespace: "default"
#  secretName: "heketi-secret"

Create a PVC for storing logs, log.pvc-sc.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: log-pvc
spec:
  storageClassName: two-replica-glusterfs-sc
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi

Create a PVC for storing registry images, registry.pvc-sc.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: registry-pvc
spec:
  storageClassName: two-replica-glusterfs-sc
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi

Create a PVC for the MySQL database data, storage-pvc-sc.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storage-pvc
spec:
  storageClassName: two-replica-glusterfs-sc
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

<5> Generate the ConfigMap files:

python make/kubernetes/k8s-prepare

This produces the following files:

make/kubernetes/jobservice/jobservice.cm.yaml
make/kubernetes/mysql/mysql.cm.yaml
make/kubernetes/registry/registry.cm.yaml
make/kubernetes/ui/ui.cm.yaml
make/kubernetes/adminserver/adminserver.cm.yaml
make/kubernetes/ingress.yaml

<6> Edit make/kubernetes/ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: harbor
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: ui
          servicePort: 80
      - path: /v2
        backend:
          serviceName: registry
          servicePort: repo
      - path: /service
        backend:
          serviceName: ui
          servicePort: 80

<7> Launch:

# create config map
kubectl apply -f make/kubernetes/jobservice/jobservice.cm.yaml
kubectl apply -f make/kubernetes/mysql/mysql.cm.yaml
kubectl apply -f make/kubernetes/registry/registry.cm.yaml
kubectl apply -f make/kubernetes/ui/ui.cm.yaml
kubectl apply -f make/kubernetes/adminserver/adminserver.cm.yaml

# create service
kubectl apply -f make/kubernetes/jobservice/jobservice.svc.yaml
kubectl apply -f make/kubernetes/mysql/mysql.svc.yaml
kubectl apply -f make/kubernetes/registry/registry.svc.yaml
kubectl apply -f make/kubernetes/ui/ui.svc.yaml
kubectl apply -f make/kubernetes/adminserver/adminserver.svc.yaml

# create k8s deployment
kubectl apply -f make/kubernetes/registry/registry.deploy.yaml
kubectl apply -f make/kubernetes/mysql/mysql.deploy.yaml
kubectl apply -f make/kubernetes/jobservice/jobservice.deploy.yaml
kubectl apply -f make/kubernetes/ui/ui.deploy.yaml
kubectl apply -f make/kubernetes/adminserver/adminserver.deploy.yaml

# create k8s ingress
kubectl apply -f make/kubernetes/ingress.yaml
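
Once everything is applied, a hedged way to verify the deployment end to end (the hostname and credentials follow the harbor.cfg values above):

kubectl get pods,svc,ing    # wait until all pods are Running
# log in through the ingress entry point and push a test image
# (the docker daemon may need registry.dcos:30099 in its insecure-registries list):
docker login registry.dcos:30099 -u admin -p 123456
docker tag busybox registry.dcos:30099/library/busybox
docker push registry.dcos:30099/library/busybox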

Note:
Make sure the nginx-ingress-controller uses the quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.8.3 image; do not use a version that is too new, as newer images redirect http requests to https. See:
https://github.com/kubernetes/ingress-nginx/issues/1957
https://github.com/kubernetes/ingress-nginx/issues/668
https://github.com/kubernetes/ingress-nginx/pull/1854/files

References

1. http://www.think-foundry.com/architecture-of-harbor-an-open-source-enterprise-class-registry-server/
2. https://github.com/vmware/harbor/blob/master/docs/high_availability_installation_guide.md
3. https://github.com/vmware/harbor/blob/v1.4.0/docs/kubernetes_deployment.md

Reprinted from blog.csdn.net/liukuan73/article/details/79634524