The evolution of front-end deployment (advanced front-end series)

Talk of the front end's slash-and-burn days is inseparable from the topic of front-end engineering. With React/Vue/Angular, ES6+, webpack, Babel, TypeScript, and Node, engineered development has gradually replaced the old script-tag-from-a-CDN style, setting off a great wave of tooling. Thanks to engineering and a healthy open-source ecosystem, the availability of front-end applications and the efficiency of developing them have improved enormously.
The front end used to be slash and burn, and so was front-end deployment. What drove the improvement of front-end deployment? Was it merely a by-product of front-end engineering?
That is only part of the story; the more important reason is the rise of DevOps.
To understand the history of front-end deployment more clearly, and how responsibilities are divided between ops and front-end (or, more broadly, business) developers, consider two questions at each stage of the story:

- Caching: who sets the HTTP response headers for the front-end application? Thanks to engineered builds, bundled files carry a hash in the filename and can be cached permanently.
- Cross-origin: who configures the /api proxy? During development the front end can start a small server and let webpack-dev-server handle the cross-origin proxy, but what about production?
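The hashed-filename trick behind the first question can be seen in a few lines of shell. This is a sketch of the idea only (the file name and the 8-character hash length are illustrative, not tied to any particular bundler): because the name embeds a hash of the content, the name changes exactly when the content changes, so the file can be cached forever.

```shell
# Illustrative sketch: emulate a bundler's content-hash naming by hand.
printf 'console.log("v1")\n' > app.js
hash=$(md5sum app.js | cut -c1-8)   # first 8 hex chars of the content hash
cp app.js "app.${hash}.js"          # app.<hash>.js -- safe to cache for a year
ls app.*.js
```

If app.js changes, the hash and therefore the filename change, so a long max-age on app.&lt;hash&gt;.js can never serve stale content; only the (uncached) HTML that references it needs updating.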

These two questions are high-frequency topics in front-end interviews, but whether the front end actually has any say over them is another matter.
Rewind to the year React had just taken off: applications were already written in React and bundled with webpack, but front-end deployment was still slash and burn.

This article assumes some background in Docker, DevOps, and front-end engineering. If you lack it, this series of articles, along with the Docker chapter of my personal server-ops guide, will help you.

Slash and burn

- a jump server
- a production server
- a deployment script

The front end tunes its webpack config and happily emails the deployment script to ops. Newly freed from the back-end template stack, the front end can deploy independently for the first time. Thinking of the territory it has just carved out, the front end can't help but laugh.
Ops takes the front end's deployment email, pulls the code over and over, changes the configuration, writes the try_files, and sets up the proxy_pass.
At this point the front end's static files are served directly by nginx, whose configuration file looks roughly like this:
server {
    listen 80;
    server_name shanyue.tech;

    # avoid 404 on non-root paths
    location / {
        try_files $uri $uri/ /index.html;
    }

    # resolve cross-origin requests
    location /api {
        proxy_pass http://api.shanyue.tech;
    }

    # permanently cache files that carry a hash in the filename
    location ~* \.(?:css|js)$ {
        try_files $uri =404;
        expires 1y;
        add_header Cache-Control "public";
    }

    location ~ ^.+\..+$ {
        try_files $uri =404;
    }
}
But the script sometimes... no, often fails to run.
Ops complains that the front end's deployment script doesn't properly pin the Node version; the front end insists it worked fine in the test environment.
At this stage ops spends a great deal of effort on deployment, even on deploying test environments, while the front end spends a great deal of effort on how ops deploys. For fear of breaking production, releases often happen late at night, and front end and ops are both exhausted.
But it has always been this way.
As Lu Xun said: if it has always been this way, does that make it right?
At this stage, both the cache configuration and the cross-origin configuration are managed by ops, and ops does not understand the front end. Yet the configuration is supplied by the front end, and the front end is not familiar with nginx.
Building images with docker

Introducing docker largely fixed the big bug of "the deployment script won't run". The dockerfile is the deployment script, and the deployment script is the dockerfile. This greatly eased the friction between front end and ops; after all, the front end has become more and more reliable, at least the deployment script now works (laughs).
This time the front end no longer delivers static resources; it delivers a service, an HTTP service.
The dockerfile the front end writes looks roughly like this:
FROM node:alpine

# marks the production environment
ENV PROJECT_ENV production

# many packages behave differently based on this environment variable,
# and webpack uses it to optimize the production bundle; note that
# create-react-app hard-codes it at build time
ENV NODE_ENV production

WORKDIR /code
ADD . /code
RUN npm install && npm run build && npm install -g http-server
EXPOSE 80

CMD http-server ./public -p 80
A dockerfile alone still isn't enough to run things, so the front end also maintains a docker-compose.yaml, and ops starts the front-end application by running docker-compose up -d. With its first dockerfile and docker-compose.yaml, the front end plays an increasingly important role in the deployment process. Thinking of the territory it has just expanded, the front end can't help but laugh.
version: "3"
services:
  shici:
    build: .
    expose:
      - 80
Ops's nginx configuration file now looks roughly like this:
server {
    listen 80;
    server_name shanyue.tech;

    location / {
        proxy_pass http://static.shanyue.tech;
    }

    location /api {
        proxy_pass http://api.shanyue.tech;
    }
}
Besides configuring nginx, ops now only has to run one command: docker-compose up -d.
Now revisit the two questions from the top of the article:

- Caching: since the static files became a service, cache control has started to move into the front end's hands (though http-server inside the image is not well suited to the job).
- Cross-origin: the cross-origin configuration still lives in ops's nginx.

The front end can now do part of what it ought to do itself, which is a very happy thing.
Of course, improving the dockerfile was itself a slow evolution. So what is wrong with the image at this point?

- The built image is too large.
- Building the image takes too long.

Optimizing images with multi-stage builds

Many twists and turns happened in between; for the details of that process, see my other article: how to deploy a front-end application with docker.
The optimization addresses the two problems above:

- the built image shrinks from 1 GB+ to 10 MB+
- the build time drops from 5 min+ to 1 min+ (depending on the complexity of the project; most of the time goes into the build itself and uploading static resources)

FROM node:alpine as builder

ENV PROJECT_ENV production
ENV NODE_ENV production

WORKDIR /code

ADD package.json /code
RUN npm install --production

ADD . /code

# npm run uploadCdn is a script that uploads static resources to OSS;
# a CDN will later be configured to accelerate the OSS bucket
RUN npm run build && npm run uploadCdn

# choose a smaller base image
FROM nginx:alpine
COPY --from=builder /code/public/index.html /code/public/favicon.ico /usr/share/nginx/html/
COPY --from=builder /code/public/static /usr/share/nginx/html/static
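The uploadCdn script itself is not shown in this article; purely as a hypothetical sketch of how such a script might be wired into package.json (the script path and arguments here are invented for illustration):

```json
{
  "scripts": {
    "build": "webpack --mode production",
    "uploadCdn": "node scripts/upload-oss.js ./public/static"
  }
}
```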
So what does this do?

- ADD package.json /code first, and only ADD the remaining files after npm install --production. This makes full use of image layer caching and reduces build time.
- The multi-stage build drastically reduces the image size.

A few further small optimizations are possible, such as:

- a base image with a pre-populated npm cache, or a private npm registry, to cut npm install time and reduce build time
- npm install --production to install only the necessary packages
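As a sketch of the first optimization above (the registry URL is a made-up placeholder; substitute your company's mirror), pointing npm at a private or mirrored registry from inside the dockerfile might look like:

```dockerfile
FROM node:alpine as builder

# hypothetical internal registry mirror -- much faster than the public registry
RUN npm config set registry https://npm-mirror.internal.example.com

WORKDIR /code
ADD package.json /code
RUN npm install --production
```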

Watching the front end optimize its own dockerfile, ops, who only days before had been grumbling that front-end images took up half the disk, reflects that shrinking the images by several orders of magnitude seems to have saved the company a lot of server cost. Thinking of the disk space it has just reclaimed, the front end can't help but smile.
Now revisit the two questions from the top of the article:

- Caching: cache control is in the front end's hands, configured on OSS, and a CDN will be used to accelerate OSS. The front end now controls caching through a script it writes itself.
- Cross-origin: the cross-origin configuration still lives in ops's nginx.
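The heart of such an upload script is choosing a Cache-Control value per file. A minimal shell sketch, assuming hashed filenames look like app.8a9ac0.js (the patterns and max-age values here are illustrative, not from the article):

```shell
# Illustrative: pick a Cache-Control header for each file before uploading to OSS.
# Hashed bundles never change under the same name; index.html must stay fresh.
cache_control() {
  case "$1" in
    *.*.js|*.*.css) echo "max-age=31536000" ;;  # name contains a hash segment
    *.html)         echo "no-cache" ;;          # entry point, always revalidate
    *)              echo "max-age=7200" ;;      # everything else: short cache
  esac
}

cache_control "app.8a9ac0.js"   # -> max-age=31536000
cache_control "index.html"      # -> no-cache
```

The key design point is simply that anything whose name embeds a content hash is immutable and can be cached for a year, while the HTML that references it must not be.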

CI/CD and gitlab

By now the front end is brimming with a sense of accomplishment. And ops? Ops still follows every release, repeating the same three steps over and over:

- pull the code
- docker-compose up -d
- restart nginx

Ops can't take it any more, so it introduces CI: pairing the existing gitlab code repository with gitlab CI.

- CI, Continuous Integration
- CD, Continuous Delivery

What matters is not what CI/CD is, but that ops no longer has to follow releases and the front end no longer has to watch over deployments. Those are now CI/CD's jobs; it is used to automate deployment. The three steps above are handed over to CI/CD.
.gitlab-ci.yml is gitlab's CI configuration file, and it looks roughly like this:
deploy:
  stage: deploy
  only:
    - master
  script:
    - docker-compose up --build -d
  tags:
    - shell
CI/CD does much more than liberate deployments in business projects; before delivery it also greatly improves the quality of business code. It can run lint, tests, and package security checks, and even drive multi-feature, multi-environment deployments. I will write about this part in future articles.
My server-side-rendered project shfshanyue/shici was previously deployed on my own server with docker/docker-compose/gitlab-ci; if you are interested, have a look at its configuration files:

- shfshanyue/shici: Dockerfile
- shfshanyue/shici: docker-compose.yml
- shfshanyue/shici: gitlab-ci.yml

If you have a personal server, I also suggest you build a small service you care about, with a front-end application and a matching back-end API, and deploy it on your own server with CI/CD.
If you want to do CI/CD with github, try github + github actions.
Alternatively, try drone.ci; for how to deploy it, see my earlier article introducing and deploying continuous-integration options on github.
Deploying with kubernetes

As the business grows there are more and more images, and docker-compose can no longer cope; kubernetes arrives just in time. The servers also go from one to many, and multiple servers bring distributed problems.
A new technology solves the problems that came before it while introducing complexity of its own.
The benefits of deploying on k8s are clear: health checks, rolling upgrades, elastic scaling, fast rollback, resource limits, improved monitoring, and so on.
So what new problem do we run into?
The server that builds images, the server that provides container services, and the server that runs continuous integration are all the same machine!
We need a dedicated image registry. That is an ops matter, and ops sets up harbor in no time, but for front-end deployment the complexity has just gone up.
Look at the previous process:

- the front end configures the dockerfile and docker-compose
- the CI runner on the production server (think of it as the old ops) pulls the code, starts the service with docker-compose up -d, then restarts nginx to reverse-proxy it to the outside world

The previous process has a problem: the server that builds images, the server that provides container services, and the server that runs continuous integration are the same machine! So we need a dedicated image registry, and a continuous-integration server that can reach the k8s cluster.
The improved process with k8s goes like this:

- the front end configures the dockerfile, builds the image, and pushes it to the image registry
- ops writes the k8s resource configuration files for the front-end application and runs kubectl apply -f, which pulls the image again and deploys the resources

Ops asks the front end: don't you want to expand your territory? Then write the front end's k8s resource files yourself. And it hands over a few articles:

- deploy your first application with k8s: Pod, Deployment, and Service
- configure a domain for your application with k8s: Ingress
- configure https for your domain with k8s
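To give a flavour of what those resource files contain, a minimal Deployment plus Service for the front-end image might look like the sketch below (the replica count, names, and labels are illustrative, not taken from the articles above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shanyue-fe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: shanyue-fe
  template:
    metadata:
      labels:
        app: shanyue-fe
    spec:
      containers:
      - name: shanyue-fe
        image: harbor.shanyue.tech/fe/shanyue
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: shanyue-fe
spec:
  selector:
    app: shanyue-fe
  ports:
  - port: 80
    targetPort: 80
```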

The front end glances at the back end's dozen-plus k8s configuration files, shakes its head, and says forget it, forget it.
At this point gitlab-ci.yaml looks roughly like this; permissions on the configuration files are managed by ops alone.
deploy:
  stage: deploy
  only:
    - master
  script:
    - docker build -t harbor.shanyue.tech/fe/shanyue .
    - docker push harbor.shanyue.tech/fe/shanyue
    - kubectl apply -f https://k8s-config.default.svc.cluster.local/shanyue.yaml
  tags:
    - shell
Now revisit the two questions from the top of the article:

- Caching: the cache is controlled by the front end.
- Cross-origin: cross-origin is still controlled by ops, in the Ingress of the back end's k8s resource files.

Deploying with helm

By now the front end barely has to deal with ops, except to ask a favor now and then when a new project starts.
But it doesn't last. One day the front end suddenly finds it has no way to pass even a single environment variable! So it keeps going to ops to modify configuration files, and ops is fed up too.
Hence helm. Explained in one sentence: helm is k8s resource configuration files with templating. As the front end, you only need to fill in the parameters. For more detail, see my earlier article on deploying k8s resources with helm.
If we use bitnami/nginx as the helm chart, the configuration file the front end writes might look like this:
image:
  registry: harbor.shanyue.tech
  repository: fe/shanyue
  tag: 8a9ac0

ingress:
  enabled: true
  hosts:
  - name: shanyue.tech
    path: /
  tls:
  - hosts:
    - shanyue.tech
    secretName: shanyue-tls

livenessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 30
  timeoutSeconds: 5
  failureThreshold: 6

readinessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 5
  timeoutSeconds: 3
  periodSeconds: 5

Now revisit the two questions from the top of the article:

- Caching: the cache is controlled by the front end.
- Cross-origin: cross-origin is controlled by the back end, configured in the values.yaml of the back end's chart.

So what duties remain for the front end and for ops?
The front end needs to:

- write the front end's dockerfile, which is a one-off job, and there are references to follow
- specify the parameters when deploying with helm

And ops needs to:

- provide a helm chart for all front-end projects to use; or, if ops is lazy, just use bitnami/nginx, also a one-off job
- provide a helm-based tool that restricts excessive permissions; or, if ops is lazy, just let people use helm directly

The front end can focus on its own business, ops can focus on its own cloud-native stack, and the division of responsibilities has never been clearer.
unified front-end deployment platform
A unified front-end deployment platform

Later, ops realizes that front-end applications are essentially a bunch of static files: simpler, easier to standardize on, and worth unifying to avoid front-end images of wildly uneven quality. So ops prepares a unified node base image and builds a unified front-end deployment platform. What can this platform do?

- CI/CD: pushing to a specific branch of the repository deploys automatically
- http headers: customize the http headers of your resources, which enables cache optimization
- http redirect/rewrite: like nginx, you can configure /api to solve cross-origin problems
- hostname: set your own domain
- CDN: push your static resources to a CDN
- https: a certificate is prepared for you
- prerender: combined with an SPA, do pre-rendering

The front end no longer needs to build images or upload to a CDN; it only needs to write one configuration file, roughly like this:
build:
  command: npm run build
  dist: /dist

hosts:
- name: shanyue.tech
  path: /

headers:
- location: /*
  values:
  - cache-control: max-age=7200
- location: assets/*
  values:
  - cache-control: max-age=31536000

redirects:
- from: /api
  to: https://api.shanyue.tech
  status: 200
At this point the front end only needs to write one configuration file to configure caching and configure the proxy. Everything that should belong to the front end is now done by the front end, and ops no longer needs to worry about front-end deployment at all.
The front end looks at the configuration file it has just written and sighs at the road travelled...
But only the big companies have such a complete front-end deployment platform. If you are interested, try netlify; you can refer to my article: deploy your front-end application with netlify.
Deploying server-side rendering and the back end

Most front-end applications are, in essence, static resources; the remaining small part is server-side rendering, which is in essence a back-end service, and its deployment can be treated as back-end deployment.
Back-end deployment is more complicated. For example:

- configuration: the back end needs access to sensitive data, but sensitive data cannot live in the code repository; it can be kept in environment variables, consul, or a k8s configmap
- upstream and downstream services: it depends on a database and on upstream services
- access control: IP restrictions, black and white lists
- rate limiting
- and so on
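As a sketch of the first point (all names and keys here are invented for illustration; truly secret values would go in a k8s Secret rather than a configmap), configuration can be kept in a configmap and injected into the service as environment variables:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend-config
data:
  DATABASE_HOST: db.internal
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: backend:latest
        # expose every key in the configmap as an environment variable
        envFrom:
        - configMapRef:
            name: backend-config
```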

I will share how to deploy a back end on k8s in a future article.


Origin blog.csdn.net/A669MM/article/details/104791834