Get started with front-end containerization in 30 minutes

1. Introduction

Front-end containerization is a technology that packages front-end applications into containers so that they can be deployed and run quickly and efficiently in different environments.

2. Background

The separation of front end and back end is now the norm, and the complexity of front-end engineering keeps increasing. New and old projects depend on different environments and Node.js versions, and building minified scripts and static resource files for production depends on the environment of the deployment service. As a result, front-end projects never achieved a "single artifact" deployment; the emergence of containers greatly simplifies this process.

Front-end containerization makes it easy to manage environment variable injection and the runtime environment (different projects depend on different Node.js versions, and version compatibility is a real headache), saves server costs, makes version rollback faster and more convenient, and enables multi-architecture deployment, automated CI/CD integration, DevOps, and more. The benefits exceed what you might expect.

This article uses a React project together with Docker to share the changes that container technology brings to the front end.

3. Application of containerization on GitHub

GitHub provides GitHub Actions for containerized CI/CD. Below is an example of using GitHub Actions to automate publishing an npm package:

  • Create a new .github/workflows/ci.yml file in the project root directory
  • Apply for an access token on the npm website (search for the exact steps yourself)
  • Paste the code below into the ci.yml file
  • Push the code to the master branch, and the CI/CD pipeline will run automatically
name: CI
on:
  push:
    branches:
      - master
jobs:
  build:
    # Target operating system
    runs-on: ubuntu-latest
    steps:
      # Check out the code onto the runner
      - name: Checkout repository
        uses: actions/checkout@v2
      # Pin the Node.js version
      - name: Use Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '16.x'
          registry-url: 'https://registry.npmjs.org'
      # Dependency caching strategy
      - name: Cache
        id: cache-dependencies
        uses: actions/cache@v3
        with:
          path: |
            **/node_modules
          key: ${{ runner.os }}-${{ hashFiles('**/pnpm-lock.yaml') }}
      - name: Install pnpm
        run: npm install -g pnpm
      # Install dependencies (skipped on cache hit)
      - name: Installing Dependencies
        if: steps.cache-dependencies.outputs.cache-hit != 'true'
        run: pnpm install
      # Build
      - name: Running Build
        run: pnpm run build
      # Test
      - name: Running Test
        run: pnpm run test-unit
      # Publish
      - name: Running Publish
        run: npm publish
        env:
          # NPM_TOKEN is the npm access token
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
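As a side note, if you prefer not to hand-roll the cache key, newer versions of the official actions can manage the pnpm store cache for you. A sketch, assuming pnpm/action-setup is acceptable in your pipeline and that pnpm is installed before setup-node runs:

```yaml
      - name: Install pnpm
        uses: pnpm/action-setup@v2
        with:
          version: 8
      - name: Use Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '16.x'
          registry-url: 'https://registry.npmjs.org'
          cache: 'pnpm'   # setup-node caches the pnpm store automatically
```

This caches the pnpm store rather than node_modules, which tends to survive lockfile changes better than a node_modules cache.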

4. Building a front-end image with Docker

Before building the front-end project's CI/CD, let us first learn how to build the front-end image.

4.1 Install docker

Follow the official documentation to install Docker.
After the installation is complete, run the following command to check the Docker version; prefer a version that ships with buildx.

docker -v
Docker version 24.0.2, build cb74dfc

4.2 Writing Dockerfile

First, some front-end engineering background. An npm-based project needs a package.json file; you run npm install to download dependencies and npm run build to produce the bundle. The build output cannot run by itself; it needs a web server to serve it. So we will write the most basic image based on Node and nginx. The example is as follows.

Add an nginx configuration file in the project root directory, named nginx.conf, with the following content

worker_processes  1;

events {
    worker_connections  1024;
}
http {
    sendfile           on;
    tcp_nodelay        on;
    keepalive_timeout  30;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/front/dist;
        autoindex on;
        autoindex_exact_size off;
        autoindex_localtime on;
        location / {
            try_files $uri $uri/ =404;
            index index.html index.htm;
            gzip_static on;
            expires max;
            add_header Cache-Control public;
            if ($request_filename ~* ^.*?\.(eot)|(ttf)|(woff)|(svg)|(otf)$) {
                add_header Access-Control-Allow-Origin *;
            }
        }
    }
}

Add a docker configuration file in the project root directory, named Dockerfile, with the following content

FROM node:17-buster as builder

WORKDIR /src
COPY ./ /src

RUN npm install -g pnpm \
    && pnpm install \
    && pnpm build

FROM nginx:alpine-slim

RUN mkdir /usr/share/nginx/front \
    && mkdir /usr/share/nginx/front/dist \
    && rm -rf /etc/nginx/nginx.conf
 
COPY --from=builder /src/nginx.conf /etc/nginx/nginx.conf

COPY --from=builder /src/dist /usr/share/nginx/front/dist

EXPOSE 80

Next, use docker build to build the image (with Docker Desktop, it appears in the Images tab once the build succeeds) and docker run to start a container from it (it then appears in the Containers tab). Once the container is running, open http://localhost in your browser to check the result.

docker buildx build -t webapp-demo:v1 .

docker run -d -p 80:80 webapp-demo:v1

4.3 How to do pnpm caching based on Dockerfile

Here I quote a passage:

With a multi-stage build, the final image only contains the target folder dist, but a problem remains: whenever the package.json file changes, the RUN npm i && rm -rf ~/.npm layer is re-executed, and after repeated changes a large number of intermediate layer images pile up.

To solve this, we can borrow an idea similar to data volumes: mount the node_modules folder while building the image, and unmount it automatically when the build finishes. The actual image never contains node_modules. This saves the time of fetching dependencies on every build, greatly improves build efficiency, and also avoids generating a large number of intermediate images.

To summarize: minimize intermediate layer images, and minimize Docker image size and build time.

Since I use pnpm for package management, I checked the official pnpm documentation, which describes this optimization as follows:

FROM node:20-slim AS base

ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable

COPY . /app
WORKDIR /app

FROM base AS prod-deps
RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --prod --frozen-lockfile

FROM base AS build
RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --frozen-lockfile
RUN pnpm run build

FROM base
COPY --from=prod-deps /app/node_modules /app/node_modules
COPY --from=build /app/dist /app/dist
EXPOSE 8000
CMD [ "pnpm", "start" ]

So, following the same pattern but keeping our production nginx configuration, I took the nginx-based runner image written by a colleague (you can verify it with docker build and docker run as before). My modified Dockerfile is as follows:

FROM node:17-buster AS builder

ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable

WORKDIR /src
COPY ./ /src

RUN --mount=type=cache,target=/src/node_modules,id=myapp_pnpm_module,sharing=locked \
    --mount=type=cache,target=/pnpm/store,id=pnpm_cache \
        pnpm install

RUN --mount=type=cache,target=/src/node_modules,id=myapp_pnpm_module,sharing=locked \
        pnpm run build

FROM ghcr.io/zboyco/webrunner:0.0.7

COPY --from=builder /src/dist /app

4.4 How to use buildx to create multi-architecture images

The docker buildx tool, to put it bluntly, gives you the ability to build, say, an ARM64 image while your host is x86_64. It feels a bit like cross-compilation, e.g. go build compiling an executable on Windows that runs on a specific Linux platform.

buildx essentially calls the BuildKit API, and the build runs inside a BuildKit environment, so whether multiple architectures are supported depends on that environment. To make BuildKit support multiple architectures, run the following on the host machine (this is not always necessary; it depends on what you are building, and Docker Desktop does not need this setting):

docker run --privileged --rm tonistiigi/binfmt --install all

Now let's modify the Dockerfile above to support multiple architectures. Since the syntax features used here were experimental, run docker pull docker/dockerfile first to pull the Dockerfile frontend image.

# syntax = docker/dockerfile:experimental
# With --platform, the builder stage is built only once, with the same arch as the host
FROM --platform=${BUILDPLATFORM:-linux/amd64} node:17-buster AS builder

ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable

WORKDIR /src
COPY ./ /src

RUN --mount=type=cache,target=/src/node_modules,id=myapp_pnpm_module,sharing=locked \
    --mount=type=cache,target=/pnpm/store,id=pnpm_cache \
        pnpm install

RUN --mount=type=cache,target=/src/node_modules,id=myapp_pnpm_module,sharing=locked \
        pnpm run build

FROM ghcr.io/zboyco/webrunner:0.0.7

COPY --from=builder /src/dist /app

Before running the build command, let's check the default builder instance on our machine.

docker buildx ls

NAME/NODE       DRIVER/ENDPOINT STATUS  BUILDKIT                              PLATFORMS
default         docker
  default       default         running v0.11.7-0.20230525183624-798ad6b0ce9f linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
desktop-linux * docker
  desktop-linux desktop-linux   running v0.11.7-0.20230525183624-798ad6b0ce9f linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6

Running buildx with multiple platforms against the default builder reports the following error:

docker buildx build --platform linux/arm,linux/arm64,linux/amd64 -t webapp-official-website:v1 .

ERROR: Multiple platforms feature is currently not supported for docker driver. Please switch to a different driver (eg. "docker buildx create --use")

Since Docker's default builder instance cannot target multiple --platform values at once, we must first create a new builder instance. Also, because pulling images is slow in mainland China, we can use the dockerpracticesig/buildkit:master image, which is configured with a registry mirror, in place of the official image.

If you have a private image accelerator, you can build your own buildkit image based on https://github.com/docker-practice/buildx and use it.

# For use in mainland China
$ docker buildx create --use --name=mybuilder-cn --driver docker-container --driver-opt image=dockerpracticesig/buildkit:master

# For Tencent Cloud environments (Tencent Cloud hosts, coding.net CI)
$ docker buildx create --use --name=mybuilder-cn --driver docker-container --driver-opt image=dockerpracticesig/buildkit:master-tencent

# $ docker buildx create --name mybuilder --driver docker-container

$ docker buildx use mybuilder

We choose the command suited to the mainland China environment. After running it, you can see a new builder instance named mybuilder-cn:

docker buildx create --use --name=mybuilder-cn --driver docker-container --driver-opt image=dockerpracticesig/buildkit:master

docker buildx ls
NAME/NODE       DRIVER/ENDPOINT  STATUS   BUILDKIT                              PLATFORMS
mybuilder-cn *  docker-container
  mybuilder-cn0 desktop-linux    inactive
default         docker
  default       default          running  v0.11.7-0.20230525183624-798ad6b0ce9f linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
desktop-linux   docker
  desktop-linux desktop-linux    running  v0.11.7-0.20230525183624-798ad6b0ce9f linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6

$ docker buildx build --platform linux/arm,linux/arm64,linux/amd64 -t myusername/hello . --push

# Inspect the image metadata
$ docker buildx imagetools inspect myusername/hello

Run the image on different architectures to get information about the architecture.

$ docker run -it --rm myusername/hello

5. How to use containerization for front-end environment variable injection

  • The front end does not need many environment variables, but the basics are required: the API baseURL, appName, and env.
  • In a micro-frontend scenario, the URLs of the other sites are environment variables too, and there are quite a lot of them.
  • I vaguely remember that when I first entered the industry, the front end distinguished the test environment from production directly by the domain name, e.g. includes(url, ".com"), deriving isProd and then reading the per-environment variables configured in the project. That approach seems pretty crude now.
  • Later, frameworks like Vue and React let you pass a flag such as --prod to npm run dev, and read isProd from process to pick the corresponding configuration.
  • Now you can inject environment variables directly through containerization: nginx writes the container's environment variables into the content of a meta tag in the front-end HTML, and the application reads the variables from that meta tag.
  • In a monorepo project, when npm run build runs, the Dockerfile also needs the container's environment variables to decide which project to build.
  • In the test environment, environment variables are configured in TypeScript files, which are combined at project startup to generate a default.yml config; during CI/CD, k8s automatically writes the variables configured in default.yml into the container.
  • In production, a UI page is provided to configure environment variables, and a back-end API writes them into the container through k8s.
  • How k8s reads the configured environment variables and writes them into the container will be covered another time.
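To make the meta-tag approach concrete, the injected index.html ends up looking roughly like this (the values are illustrative; the placeholder is what ships in the build, and the content is what nginx's sub_filter produces at startup):

```html
<!-- Before substitution (in the built dist/index.html) -->
<meta name="app_config" content="__APP_CONFIG__" />

<!-- After nginx sub_filter runs in the container -->
<meta name="app_config" content="appName=demo,env=staging,baseURL=https://api.example.com" />
```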

Below is a sample Dockerfile from a production scenario (the company probably won't kill me!). We omit how k8s injects environment variables into the container and only consider how to read the environment variables from the container (assuming they have already been injected).

FROM --platform=${BUILDPLATFORM} hub-dev.rockontrol.com/rk-infrav2/docker.io/library/node:17-bullseye as builder

WORKDIR /src
COPY ./ ./

ARG APP
ARG ENV

ARG PROJECT_GROUP
ARG PROJECT_NAME
ARG PROJECT_VERSION

ARG YARN_NPM_REGISTRY_SERVER

RUN npm install -g --registry=${YARN_NPM_REGISTRY_SERVER} pnpm
RUN pnpm --registry=${YARN_NPM_REGISTRY_SERVER} install


RUN PROJECT_GROUP=${PROJECT_GROUP} PROJECT_VERSION=${PROJECT_VERSION} \
    npx devkit build --prod ${APP} ${ENV}

FROM hub-dev.rockontrol.com/rk-infrav2/ghcr.io/zboyco/webrunner:0.0.7

ARG PROJECT_NAME
COPY --from=builder /src/public/${PROJECT_NAME} /app

Below is the startup script that shows how nginx collects the environment variables and, via the nginx configuration, writes them into the meta tags of the HTML.

#!/bin/sh

# This script is used to start the application

# Initialize a string to hold the combined config
app_config="${APP_CONFIG}"
ext_config=""

# Iterate over all environment variables
for var in $(env | cut -d= -f1); do
    # Check whether the variable name starts with "APP_CONFIG__"
    if [ "$(echo "$var" | grep '^APP_CONFIG__')" ]; then
        # Strip the "APP_CONFIG__" prefix from the name
        trimmed_var=$(echo "$var" | sed 's/^APP_CONFIG__//')
        # Use eval to read the variable's value and append it to the string
        value=$(eval echo "\$$var")
        app_config="${app_config},${trimmed_var}=${value}"
    fi
done

# Remove the leading comma
export app_config=$(echo "$app_config" | sed 's/^,//')

# Parse the app_config variable:
# split app_config on ","
IFS=","
set -- $app_config
# iterate over the entries
for config in "$@"; do
    # split each entry on "="
    IFS="="
    set -- $config
    # inject each variable as its own sub_filter rule
    ext_config="${ext_config}        sub_filter '__$1__' '$2';\n"
    echo "$1: $2"
done

# Substitute the extended variables into conf.template
echo "Rendering nginx config"
sed "s@__EXTENT_CONFIG__@${ext_config}@g" /etc/nginx/conf.d/conf-base.template > /etc/nginx/conf.d/conf.template

envsubst '${PROJECT_VERSION} ${ENV} ${app_config}' < /etc/nginx/conf.d/conf.template > /etc/nginx/conf.d/default.conf

# Start nginx
echo "Starting nginx"
nginx -g 'daemon off;'
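To see what the first loop of the script produces, here is a stripped-down run with hypothetical APP_CONFIG__* variables (the names and values are made up for illustration, mimicking what k8s would inject):

```shell
#!/bin/sh
# Hypothetical injected variables (illustrative, not from the real project)
export APP_CONFIG="appName=demo"
export APP_CONFIG__baseURL="https://api.example.com"
export APP_CONFIG__env="staging"

# Same collection logic as the startup script above
app_config="${APP_CONFIG}"
for var in $(env | cut -d= -f1); do
    if [ "$(echo "$var" | grep '^APP_CONFIG__')" ]; then
        trimmed_var=$(echo "$var" | sed 's/^APP_CONFIG__//')
        value=$(eval echo "\$$var")
        app_config="${app_config},${trimmed_var}=${value}"
    fi
done

echo "$app_config"
```

Depending on the order in which env lists the variables, this prints something like appName=demo,baseURL=https://api.example.com,env=staging, which is exactly the string sub_filter later writes into the __APP_CONFIG__ placeholder.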

The nginx configuration: the placeholders in the meta tags are replaced with the real environment variables; see sub_filter in the code.

server {
    listen 80;
    listen  [::]:80;
    server_name  localhost;
    root /app;

    # Enable gzip
    gzip on;
    # Do not compress resources smaller than 1kb
    gzip_min_length 1k;
    # Compression level 1-9; higher means better compression but more CPU. Around 5 is recommended.
    gzip_comp_level 6;
    # Response types to compress, space-separated. Compressing images is not recommended.
    gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;

    location ~ ^/(static|__built__)/ {
        root /app;
        expires max;
        proxy_cache static_memory_cache;  # use the in-memory cache
        proxy_cache_valid 200 1d;
        proxy_cache_lock on;
    }

    location / {
        expires -1;
        try_files $uri /index.html;

        add_header X-Frame-Options sameorigin;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1;mode=block" always;

        sub_filter '__PROJECT_VERSION__' '$PROJECT_VERSION';
        sub_filter '__ENV__' '$ENV';
        sub_filter '__APP_CONFIG__' '$app_config';

        # The placeholder below is replaced with the injected extended environment variables
__EXTENT_CONFIG__

        sub_filter_once on;
    }
}

Below is how the front end reads the environment variables from the HTML meta tags.

import axios from "axios";

import appConfig from "../../config";

interface IConfig {
  appName: string;
  baseURL: string;
  version?: string;
  env?: string;
}

export function getConfig(): IConfig {
  const defaultAppConfig = {
    appName: "",
    version: "",
    env: "",
    baseURL: "",
  };
  console.log("metaEnv", import.meta.env);

  if (import.meta.env.DEV) {
    return appConfig;
  } else {
    const appConfigStr = getMeta("app_config");

    if (!appConfigStr) return defaultAppConfig;

    return parseEnvVar(appConfigStr);
  }
}

function getMeta(metaName: string) {
  const metas = document.getElementsByTagName("meta");

  for (let i = 0; i < metas.length; i++) {
    if (metas[i].getAttribute("name") === metaName) {
      return metas[i].getAttribute("content");
    }
  }

  return "";
}

function parseEnvVar(envVarURL: string) {
  const arrs = envVarURL.split(",");

  return arrs.reduce((pre, item) => {
    const keyValues = item.split("=");

    return {
      ...pre,
      [keyValues[0]]: keyValues[1],
    };
  }, {} as IConfig);
}

const BASE_URL = getConfig().baseURL;

const instance = axios.create({
  baseURL: BASE_URL,
  headers: {
    "Content-Type": "application/json",
  },
  timeout: 60000, // 60-second timeout
});
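One caveat with the parseEnvVar above: item.split("=") keeps only keyValues[1], so a value that itself contains an equals sign (say a URL with a query string) would be truncated. A slightly more defensive variant (the function name here is illustrative, not part of the project):

```typescript
function parseEnvVarSafe(envVarStr: string): Record<string, string> {
  return envVarStr.split(",").reduce((acc, item) => {
    const [key, ...rest] = item.split("=");
    // Re-join so values containing "=" survive intact
    acc[key] = rest.join("=");
    return acc;
  }, {} as Record<string, string>);
}

console.log(parseEnvVarSafe("appName=demo,baseURL=https://api.example.com/v1?x=1"));
// { appName: 'demo', baseURL: 'https://api.example.com/v1?x=1' }
```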

Finally, since the Dockerfile has already been shown, here is the source of the CI file from a real scenario as well!

stages:
  - ship
  - deploy

ship:
  stage: ship
  image: hub-dev.rockontrol.com/rk-infrav2/gitlab-runner-buildx:0.0.0-b0450fe
  # variables:
  #   MULTI_ARCH_BUILDER: 1
  before_script:
    - echo "${DOCKER_PASSWORD}" | docker login "${DOCKER_REGISTRY}" -u="${DOCKER_USERNAME}" --password-stdin
    - BUILDKIT_NAME=node-buildkit hx buildx ci-setup
  script:
    - export PLATFORM=linux/amd64,linux/arm64
    - |
      if [[ -f ./.platform ]]; then
        source ./.platform
      else
        echo "WARNING, there is no .platform in project, USE default PLATFORM=${PLATFORM} "
      fi
    - hx buildx --with-builder --push --platform=${PLATFORM}
  tags:
    - webapp

deploy:
  stage: deploy
  script:
    - hx config
    - hx deploy
  dependencies:
    - ship
  tags:
    - webapp

6. Kubernetes deployment of front-end applications

Kubernetes is an open source container orchestration platform that can automate the deployment, scaling and management of containerized applications. Here are the steps to deploy a front-end application to a Kubernetes cluster:

6.1 Create Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend-app
  template:
    metadata:
      labels:
        app: frontend-app
    spec:
      containers:
        - name: frontend-app
          image: my-frontend-app:latest
          ports:
            - containerPort: 3000

6.2 Create Service

apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer

6.3 Deploy to Kubernetes cluster

Use the kubectl command to deploy the application to the Kubernetes cluster:

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

Now, your front-end application is running in the Kubernetes cluster and can be accessed externally through the LoadBalancer type Service.

7. About the front-end React project architecture

7.1 Core technology

7.2 Automatically generating API request functions from OpenAPI

// src/core/openapi/index.ts

// Example code
generateService({
  // OpenAPI schema URL
  schemaPath: `${appConfig.baseURL}/${urlPath}`,
  // Output directory for the generated files
  serversPath: "./src",
  // Path to the custom request function
  requestImportStatement: `/// <reference types="./typings.d.ts" />\nimport request from "@request"`,
  // Namespace for the generated code, e.g. Api
  namespace: "Api",
});

7.3 Calling interfaces with react-query, with automatic loading state and linked request invalidation

// HelloGet is an axios-based promise request
export async function HelloGet(
  // Generated Param type (swagger does not generate an object for non-body params by default)
  params: Api.HelloGetParams,
  options?: { [key: string]: any },
) {
  return request<Api.HelloResp>('/gin-demo-server/api/v1/hello', {
    method: 'GET',
    params: {
      ...params,
    },
    ...(options || {}),
  });
}

// Automatically call the interface to fetch data
const { data, isLoading } = useQuery({
  queryKey: ["hello", name],
  queryFn: () => {
    return HelloGet({ name: name });
  },
});

export async function HelloPost(body: Api.HelloPostParam, options?: { [key: string]: any }) {
  return request<Api.HelloResp>('/gin-demo-server/api/v1/hello', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    data: body,
    ...(options || {}),
  });
}

// Submit edited data
const { mutate, isLoading } = useMutation({
  mutationFn: HelloPost,
  onSuccess(data) {
    setName(data?.data || "");
  },
  onError() {
    // Invalidate the cached data for queryKey "hello" so it is refetched automatically
    queryClient.invalidateQueries({ queryKey: ['hello'] });
  }
})

mutate({ name: "lisi" });

8. Front-end React code CLI

9. Conclusion

  • Introduced a basic GitHub Actions workflow for publishing npm packages
  • Introduced how to write a front-end Dockerfile, pnpm's caching optimization inside Docker, and how to use buildx to build multi-architecture front-end images
  • Introduced how front-end environment variables are used in production scenarios
  • Introduced the technical architecture of the front-end React project

Origin blog.csdn.net/luo1055120207/article/details/132742642