Docker application containerization guidelines - 12 factor

In the cloud era, more and more traditional applications need to be migrated to cloud environments, and new applications must be designed and developed for cloud architectures from the start. 12-Factor provides a standard set of principles and best practices for cloud-native application development.

Application containerization in container cloud projects mainly follows the 12-Factor principles.

1 12-Factor Introduction

12-Factor was first proposed and open-sourced by Adam Wiggins, founder of Heroku, and has since been refined by many experienced developers. It distills their collective experience and wisdom about building SaaS applications and serves as an ideal practice standard for developing such applications.

12-Factor's full name is The Twelve-Factor App. It defines the basic principles to follow when designing an elegant, cloud-friendly Internet application.

12-Factor original address:

https://12factor.net/zh_cn/

Reference articles:

http://www.10tiao.com/html/352/201706/2660395450/1.html

https://www.jianshu.com/p/bbdccd020a1d

 

2 12-Factor detailed explanation

2.1. One codebase, multiple deploys

1 If a system can be split into several modules, it is no longer a single application but a distributed system, in which each module is an application in its own right.

2 Each application corresponds to one code repository, that is, its codebase.

3 Each application has exactly one codebase, but multiple deploys of it can exist at the same time. Each deploy is a running instance of the application, for example deploys to the production environment, to several staging environments, and to test environments.

2.2. Declare dependencies explicitly

Dependencies such as tool libraries are required when an application runs, and all of them need to be declared explicitly in a dependency manifest. For example, Ruby's Bundler uses a Gemfile as the dependency declaration manifest and bundle exec for dependency isolation; in Python, Pip is used for dependency declaration and Virtualenv for dependency isolation. One advantage of explicit dependency declaration is that it simplifies environment setup: a new developer only needs to run a single build command to install all dependencies and can start working.
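As a minimal illustration with Pip (the package names and versions below are made up for the example), a requirements.txt manifest pins every dependency explicitly:

```
# requirements.txt (illustrative packages and versions)
flask==2.3.2
requests==2.31.0
```

With this manifest in the repository, pip install -r requirements.txt installs everything in one step, and a tool such as Virtualenv keeps the installed packages isolated from the system environment.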

 

2.3. Storing configuration in the environment

When an application is deployed to different environments it has different configuration, such as database connection settings and credentials for third-party services. 12-Factor requires strict separation of configuration from code.

This can be achieved by writing the configuration to configuration files that are excluded from the code repository's version control.

12-Factor recommends storing configuration in environment variables, which makes it easy to change configuration between deploys without changing a single line of code.
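A minimal sketch in Python of reading deploy-specific configuration from the environment (the variable names and default values are illustrative, not part of the 12-Factor specification):

```python
import os

# Deploy-specific configuration comes from environment variables,
# not from files checked into the repository.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost/devdb")
SMTP_HOST = os.environ.get("SMTP_HOST", "localhost")

if __name__ == "__main__":
    # The same code runs unchanged in development, staging and production;
    # only the environment variables differ between deploys.
    print(f"using database {DATABASE_URL}, sending mail via {SMTP_HOST}")
```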

 

2.4. Treat backing services as attached resources

Backing services, whether locally managed services such as MySQL and Redis or third-party services such as mail providers and external APIs, should appear in the configuration as attached resources. A backing service can then be swapped out simply by changing the configuration.
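A small sketch, assuming the redis-py client and a REDIS_URL environment variable (both are illustrative choices, not requirements of the principle): the Redis instance is identified purely by configuration, so replacing a local Redis with a managed cloud Redis is a configuration change rather than a code change.

```python
import os

import redis  # third-party client, declared in the dependency manifest

# The backing service is addressed only through configuration (a URL),
# so it can be swapped by changing REDIS_URL in the environment.
cache = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379/0"))

def remember(key: str, value: str) -> None:
    cache.set(key, value)
```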

 

2.5. Strict separation of build and run

Application deployment goes through the following three phases:

The build phase turns the code repository into an executable package: it takes the specified version of the code, fetches and bundles the dependencies, and compiles them into binaries and asset files.

The release phase combines the build output with the configuration required for the current deploy, producing a release that can be run immediately in the execution environment.

The run phase (or "runtime") launches a set of the application's processes in the execution environment against a selected release. 12-Factor requires the application to strictly separate the build, release, and run stages.

2.6. Running the application as one or more stateless processes

1 Run the application as one or more stateless processes, and keep any data that needs to persist in a backing service.

2 Do not use sticky sessions (caching a user's session data in the memory of one process and routing subsequent requests from the same user to that same process).
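A hedged sketch (again assuming the redis-py client and a REDIS_URL variable): session data is kept in a backing store rather than in process memory, so any process or replica can serve the next request.

```python
import json
import os

import redis  # session data lives in a backing service, not in process memory

store = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379/0"))

def save_session(session_id: str, data: dict) -> None:
    # Any process can read this later, so requests do not need to be
    # routed back to the process that created the session.
    store.setex(f"session:{session_id}", 3600, json.dumps(data))

def load_session(session_id: str) -> dict:
    raw = store.get(f"session:{session_id}")
    return json.loads(raw) if raw else {}
```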

 

2.7. Serving through port binding

12-Factor requires the application to be completely self-contained, not relying on an external web server to expose its web-facing service. An Internet application exports its service by binding to a port and listening for requests sent to that port.

Examples of applications that depend on an external web server are PHP applications running as a module inside Apache HTTPD and Java applications running inside Tomcat.

Examples of self-contained web services are Python's Tornado, Ruby's Thin, and, for Java and other JVM-based languages, Jetty. In each case the application code itself binds the port and provides the service.
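A minimal self-contained service using only the Python standard library (the PORT variable and its default value are illustrative):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello\n")

if __name__ == "__main__":
    # The application binds the port itself instead of being deployed
    # into an external web server; the port comes from the environment.
    port = int(os.environ.get("PORT", "8000"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```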

 

2.8. Scaling out via the process model

(My own understanding; open to discussion.)

1 Developers assign different kinds of work to different process types. For example, HTTP requests are handled by web processes, long-running background work by worker processes, and scheduled tasks by a clock process. Each process type can then be scaled out independently and conveniently (a minimal worker sketch follows below).

2 The opposite of scaling via the process model is scaling via the threading model, the more traditional approach. When we start a Java process, we usually configure at the application level upper and lower limits for one or more thread pools. As the external load changes, the memory the process occupies and the number of threads inside it grow and shrink between these preset limits. This is also known as vertical scaling, or scaling up.

It is now considered better to use "fixed-size" processes (for the Java example above, a fixed memory capacity and a fixed thread pool size), start more processes when the external load increases, and stop some of them when the load decreases. This approach is what this principle calls scaling via the process model, sometimes referred to as horizontal scaling, or scaling out.
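As an illustration (the module and function names are hypothetical), a separate worker entry point complements the web process from section 2.7; the platform then scales each process type horizontally by running more or fewer copies of it:

```python
# worker.py -- a separate process type for background work (illustrative).
# Scaling out means running more copies of this process, not adding
# threads or memory to one large process.
import os
import time

def process_next_job() -> None:
    # Placeholder for pulling one job from a queue (a backing service).
    time.sleep(1)

if __name__ == "__main__":
    print(f"worker started, pid={os.getpid()}")
    while True:
        process_next_job()
```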

2.9. Fast startup and graceful termination for maximum robustness

12-Factor application processes are disposable, meaning they can be started or stopped at a moment's notice. This makes it possible to scale applications quickly and elastically, deploy code or configuration changes rapidly, and keep deployments robust. Processes should start up quickly and shut down gracefully when they receive a termination signal (SIGTERM).
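A minimal Python sketch of graceful termination: on SIGTERM the process finishes its current unit of work and then exits cleanly (the work itself is a placeholder):

```python
import signal
import sys
import time

shutting_down = False

def handle_sigterm(signum, frame):
    # Ask the main loop to stop after the current unit of work.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

if __name__ == "__main__":
    while not shutting_down:
        time.sleep(0.5)  # placeholder for one unit of work
    sys.exit(0)
```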

2.10. Keep development, staging, and production environments as similar as possible

A 12-Factor application keeps the gap between the local and production environments small, which supports continuous deployment. There are three main gaps to minimize:

Minimize the time gap: a developer should be able to deploy code within hours, or even minutes, of writing it.

Minimize the personnel gap: developers should not only write code but also be closely involved in deploying it and observing how it behaves in production.

Minimize the tool gap: keep the development environment and the production environment as consistent as possible.

 

2.11. Treating the log as a stream of events

1 The application should output the events it produces in chronological order, one line per event.

2 Applications should not manage log files themselves. Instead, they write their logs as event streams to standard output (STDOUT) and standard error (STDERR); the execution environment captures these streams and forwards them to a dedicated log-processing service.
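A small Python sketch using the standard logging module: the application only writes events to stdout, one per line, and leaves collection and routing to the execution environment.

```python
import logging
import sys

# Log events go to stdout, one event per line; the runtime environment,
# not the application, collects and routes the stream.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
log = logging.getLogger("app")
log.info("user signed in")
```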

2.12. Background management tasks run as one-time processes

This principle discourages logging in to the production environment over SSH to perform management tasks and recommends running administrative tasks as one-off processes.

If the task is to change application configuration, it should go through the configuration management service; if it is a batch task such as data migration, cleanup, or inspection, it should use the cloud platform's batch mechanism. Platforms provide such mechanisms, for example Kubernetes Jobs.
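A hedged sketch of a one-off admin task (the script name, variable, and migration logic are hypothetical): it runs as a separate short-lived process, ships with the same codebase, reads the same environment-based configuration as the application, and can be launched by a mechanism such as a Kubernetes Job rather than over SSH.

```python
# migrate.py -- illustrative one-off admin process for a data migration.
import os

def migrate() -> None:
    # Uses the same configuration mechanism as the long-running app.
    database_url = os.environ.get("DATABASE_URL", "postgres://localhost/devdb")
    print(f"running migration against {database_url}")
    # ... perform the migration steps here ...

if __name__ == "__main__":
    migrate()
```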
