Using S3FS in Kubernetes to mount AWS S3 as persistent storage

kubernetes-s3

The Kubernetes backend uses AWS S3 as persistent storage; this article describes the setup and usage steps in detail.

Source code

tengfeiwu / kubernetes-s3

Architecture diagram of S3FS in a Kubernetes pod (image not reproduced here)

Create an AWS IAM account and permissions

Log in to the AWS console, go to IAM, create a new IAM user with S3 read and write permissions, and write down that account's access key ID and secret access key. They will be used in the ConfigMap file later.
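If you prefer the AWS CLI over the console, a minimal sketch of the same setup could look like the following (the user name open-falcon-s3fs is a placeholder, and the broad AmazonS3FullAccess managed policy is used for brevity; a policy scoped to one bucket is tighter in practice):

# Create an IAM user with S3 read/write permissions (sketch; names are placeholders)
aws iam create-user --user-name open-falcon-s3fs
aws iam attach-user-policy --user-name open-falcon-s3fs \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# Print the access key ID and secret access key used in the ConfigMap later
aws iam create-access-key --user-name open-falcon-s3fs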

Create a namespace

Create an open-falcon-monitoring namespace here for later monitoring use; the content is as follows:

# cat ns-open-falcon.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: open-falcon-monitoring
# Create the namespace
kubectl apply -f ns-open-falcon.yaml
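# Confirm the namespace was created
kubectl get ns open-falcon-monitoring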

Create an image registry

You can use AWS ECR, another public cloud registry, or your own private registry. I use the Alibaba Cloud Container Registry here; you can register one for free if you need it.

Alibaba Cloud image registry information

# Alibaba Cloud Docker Registry
registry.cn-shanghai.aliyuncs.com
# Image repository
open-falcon-s3
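
Before pushing images, you can verify on the build machine that these credentials work with a plain docker login:

docker login registry.cn-shanghai.aliyuncs.com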

Create the image pull/push secret

Here, under the open-falcon-monitoring namespace, create a secret named open-falcon-registry-secret as follows:

kubectl create secret docker-registry open-falcon-registry-secret -n open-falcon-monitoring \
  --docker-server=registry.cn-shanghai.aliyuncs.com \
  --docker-username=<Alibaba Cloud console account> \
  --docker-password=<Alibaba Cloud console password> \
  --docker-email=<email address>
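# Confirm the secret exists in the namespace
kubectl get secret open-falcon-registry-secret -n open-falcon-monitoring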

S3 persistent storage creation process

Create bucket

Log in to the AWS console, go to S3, and create a bucket named open-falcon-monitoring.
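
If you prefer the CLI, the bucket can also be created with the AWS CLI (the region here matches the S3_REGION default used in the Dockerfile below; adjust it to your own):

aws s3 mb s3://open-falcon-monitoring --region ap-southeast-1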

Configure configmap

cd yaml/ && cp configmap_secrets_template.yaml configmap_secrets.yaml
# cat configmap_secrets.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: s3-config
  namespace: open-falcon-monitoring
data:
  S3_BUCKET: open-falcon-monitoring
  AWS_KEY: <AWS IAM access key ID>
  AWS_SECRET_KEY: <AWS IAM secret access key>
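
Note that AWS_KEY and AWS_SECRET_KEY are credentials; keeping them in a ConfigMap follows the upstream template, but a Kubernetes Secret is the more appropriate object for them. A possible alternative sketch (keeping the name s3-config so the DaemonSet's envFrom reference still matches, but pointing it at a secretRef instead of a configMapRef):

kubectl create secret generic s3-config -n open-falcon-monitoring \
  --from-literal=S3_BUCKET=open-falcon-monitoring \
  --from-literal=AWS_KEY=<AWS IAM access key ID> \
  --from-literal=AWS_SECRET_KEY=<AWS IAM secret access key>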

Build and deploy

Modify the build.sh file

The complete file content is as follows:

#!/usr/bin/env bash

########################################################################################################################
# PREREQUISITES
########################################################################################################################
#
# - ensure that you have a valid Artifactory or other Docker registry account
# - Create your image pull secret in your namespace
#   kubectl create secret docker-registry artifactory --docker-server=<YOUR-REGISTRY>.docker.repositories.sap.ondemand.com --docker-username=<USERNAME> --docker-password=<PASSWORD> --docker-email=<EMAIL> -n <NAMESPACE>
# - change the settings below according to your setup
#
# usage:
# Call this script with the version to build and push to the registry. After build/push the
# yaml/* files are deployed into your cluster
#
#  ./build.sh 1.0
#
VERSION=$1
PROJECT=open-falcon-s3 # change to your own repository name
REPOSITORY=registry.cn-shanghai.aliyuncs.com/ai-voice-test # registry address; Alibaba Cloud is used here

# causes the shell to exit if any subcommand or pipeline returns a non-zero status.
set -e
# set debug mode
#set -x

########################################################################################################################
# build the new docker image
########################################################################################################################
#
echo '>>> Building new image'
# Due to a bug in Docker we need to analyse the log to find out if build passed (see https://github.com/dotcloud/docker/issues/1875)
docker build --no-cache=true -t $REPOSITORY/$PROJECT:$VERSION . | tee /tmp/docker_build_result.log
RESULT=$(cat /tmp/docker_build_result.log | tail -n 1)
if [[ "$RESULT" != *Successfully* ]];
then
  exit 1
fi

########################################################################################################################
# push the docker image to your registry
########################################################################################################################
#
echo '>>> Push new image'
docker push $REPOSITORY/$PROJECT:$VERSION

########################################################################################################################
# deploy your YAML files into your kubernetes cluster
########################################################################################################################
#
# (and replace some placeholders in the yaml files...)
# It is recommended to use HELM for bigger projects and more dynamic deployments
#
kubectl apply -f ./yaml/configmap_secrets.yaml
# Apply the YAML passed into stdin and replace the version string first
#cat ./yaml/daemonset.yaml | sed "s/$REPOSITORY\/$PROJECT/$REPOSITORY\/$PROJECT:$VERSION/g" | kubectl apply -f -

Modify Dockerfile

Add a domestic (China) mirror source for apk, otherwise the build will be painfully slow. The full content is as follows:

###############################################################################
# The FUSE driver needs elevated privileges, run Docker with --privileged=true
###############################################################################

FROM alpine:latest

ENV MNT_POINT /var/s3
# I did not create an IAM role here, so the value is set to none
ENV IAM_ROLE=none
ENV S3_REGION 'ap-southeast-1'

VOLUME /var/s3

ARG S3FS_VERSION=v1.89

RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories

RUN apk --update add bash fuse libcurl libxml2 libstdc++ libgcc alpine-sdk automake autoconf libxml2-dev fuse-dev curl-dev git; \
    git clone https://github.com/s3fs-fuse/s3fs-fuse.git; \
    cd s3fs-fuse; \
    git checkout tags/${S3FS_VERSION}; \
    ./autogen.sh; \
    ./configure --prefix=/usr ; \
    make; \
    make install; \
    make clean; \
    rm -rf /var/cache/apk/*; \
    apk del git automake autoconf;

RUN sed -i s/"#user_allow_other"/"user_allow_other"/g /etc/fuse.conf

COPY docker-entrypoint.sh /
RUN chmod 777 /docker-entrypoint.sh
CMD /docker-entrypoint.sh
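
The docker-entrypoint.sh that the Dockerfile copies in is not reproduced in the original article. As a rough sketch, assuming the S3_BUCKET, AWS_KEY, AWS_SECRET_KEY, S3_REGION and MNT_POINT variables defined in the ConfigMap and Dockerfile above, it needs to write an s3fs password file and mount the bucket in the foreground, roughly like this:

#!/usr/bin/env bash
set -e

# Write the credentials from the ConfigMap into the password file expected by s3fs
echo "${AWS_KEY}:${AWS_SECRET_KEY}" > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs

mkdir -p "${MNT_POINT}"

# Mount the bucket in the foreground so the container keeps running;
# allow_other lets other processes read the shared host mount
exec s3fs "${S3_BUCKET}" "${MNT_POINT}" -f \
  -o passwd_file=/etc/passwd-s3fs \
  -o endpoint="${S3_REGION}" \
  -o allow_other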

Build image

# Run the following command
./build.sh 2.0
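# Confirm the image was built and tagged locally before deploying
docker images | grep open-falcon-s3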

Deploy the DaemonSet

cat yaml/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: s3-provider
  name: s3-provider
  namespace: open-falcon-monitoring
spec:
  selector:
    matchLabels:
      app: s3-provider
  template:
    metadata:
      labels:
        app: s3-provider
    spec:
      # reference the image pull secret created earlier
      imagePullSecrets:
      - name: open-falcon-registry-secret
      containers:
      - name: s3fuse
        # change to your own image name
        image: registry.cn-shanghai.aliyuncs.com/ai-voice-test/open-falcon-s3:2.0
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh","-c","umount -f /var/s3"]
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
        # use ALL  entries in the config map as environment variables
        envFrom:
        - configMapRef:
            name: s3-config
        volumeMounts:
        - name: devfuse
          mountPath: /dev/fuse
        - name: mntdatas3fs
          mountPath: /var/s3:shared
      volumes:
      - name: devfuse
        hostPath:
          path: /dev/fuse
      - name: mntdatas3fs
        hostPath:
          path: /mnt/data-s3-fs
# Deploy
kubectl apply -f yaml/daemonset.yaml
# Check
kubectl get pod -n open-falcon-monitoring
NAME                READY   STATUS    RESTARTS   AGE
s3-provider-2j6pw   1/1     Running   2          18s
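# If the pod is not Running, the s3fs mount log and the pod events are the first places to look
# (the pod name will differ in your cluster)
kubectl logs -n open-falcon-monitoring s3-provider-2j6pw
kubectl describe pod -n open-falcon-monitoring s3-provider-2j6pw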

Test

cd yaml && cat example_pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
  namespace: open-falcon-monitoring
spec:
  containers:
  - image: nginx
    name: s3-test-container
    securityContext:
      privileged: true
    volumeMounts:
    - name: mntdatas3fs
      mountPath: /var/s3:shared
    livenessProbe:
      exec:
        command: ["ls", "/var/s3"]
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 1
  volumes:
  - name: mntdatas3fs
    hostPath:
      path: /mnt/data-s3-fs
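# Deploy the test pod (this apply step is implied between the manifest above and the checks below)
kubectl apply -f example_pod.yaml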
# Check the pods
# kubectl get pod -n open-falcon-monitoring
NAME                READY   STATUS    RESTARTS   AGE
s3-provider-2j6pw   1/1     Running   2          3m44s
test-pd             1/1     Running   0          73s
# Check the mount on the host system
df -h 
s3fs                     256T     0  256T    0% /mnt/data-s3-fs
# Create a file from inside the pod
kubectl exec -ti test-pd -n open-falcon-monitoring -- sh
echo "this is a ok!" > /var/s3/ceshi.txt
# On the host, under /mnt/data-s3-fs
cd /mnt/data-s3-fs/ && cat ceshi.txt
this is a ok!
# The file also appears in the AWS S3 console (original screenshot not reproduced)

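The same check can be made from the command line, assuming the AWS CLI is configured with the IAM credentials created earlier:

aws s3 ls s3://open-falcon-monitoring/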

Original article: blog.51cto.com/wutengfei/2667106