12-Factor: a methodology for building cloud-native software applications

Official site: https://12factor.net/zh_cn/

Principle 1: One codebase, multiple deploys

This principle is fundamental to the microservices model and to any other software development model, which is why it is listed first among the twelve principles. It consists of the following four sub-principles:

  1. Manage your code in a version control repository, usually Git or SVN. This is a very basic requirement, and I believe readers of this book already comply with it.

  2. One codebase (i.e., one code repository) corresponds to one application. If a single codebase can be compiled into multiple applications, consider splitting it into multiple codebases, one per application; if a single application requires multiple codebases, consider either merging those codebases into one or splitting the application into multiple applications, one per codebase.

  3. Multiple applications must not share a codebase. If sharing is truly necessary, release a stable version of the shared code as a library and load it through the dependency management mechanism.

  4. Multiple deploys of the same application may use different versions of the same codebase, but they must not use different codebases. As in sub-principle 2, applications built from different codebases should not be considered the same application.

Violating sub-principles 2 and 3 causes trouble for code management and builds:

  1. If one codebase can be compiled into multiple applications, then there are certainly no clear dependency boundaries between those applications. Over time these dependencies become increasingly chaotic, to the point where modifying the code of one application has unpredictable effects on the others. Such a codebase is obviously extremely difficult to maintain.

  2. Much like the division of a codebase among applications, this is also a reflection of system boundaries. If an application needs to be compiled from multiple codebases, then in most cases there is a problem with the boundary between the inside and the outside of the application. If no such boundary problem exists, simply merge the codebases into one rather than maintaining this quirky design.

  3. If multiple applications share a codebase directly rather than through class libraries, that shared codebase becomes hard to maintain: every modification must carefully consider the impact it may have on all of the applications. The right approach is to release the shared code as a library, which maintains clear boundaries and an explicit interface contract for the applications that call it.

Principle 2: Explicitly declare and isolate dependencies

  • Packaging systems
  • Installation scopes: system-wide packages (site packages) versus application-scoped packages (vendoring or bundling)
  • Dependency declaration: a 12-Factor application never relies implicitly on system-level libraries (see the sketch after this list)
  • Dependency isolation: whatever tools are used, dependency declaration and dependency isolation must always be used together; neither alone satisfies the 12-Factor specification
  • Build commands: one advantage of explicitly declaring dependencies is that it simplifies environment setup for developers who are new to the project
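
As a minimal illustration of implicit versus explicit dependencies (the class and method names below are my own, not taken from the 12-Factor text): shelling out to a system tool such as curl makes the application depend on something that is neither declared nor isolated, whereas using the HTTP client that ships with the declared Java runtime keeps the dependency explicit.

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DependencyExample {

    // Anti-pattern: implicitly depends on the "curl" binary being installed on the
    // host; nothing in the codebase declares or isolates this dependency.
    static void fetchWithSystemCurl(String url) throws IOException {
        Runtime.getRuntime().exec(new String[] {"curl", "-s", url});
    }

    // 12-Factor style: the HTTP capability comes from the Java runtime the build
    // explicitly declares (Java 11+), so every deploy has it by definition.
    static String fetchWithDeclaredClient(String url) throws IOException, InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```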

Principle 3: Store configuration in the environment

First, to be clear, the configuration referred to here is configuration that varies with the deployment environment, such as:

  • Connection and location information for backing services such as databases, message brokers, and caching systems, e.g. URLs, user names, and passwords.

  • Credentials for third-party services.

  • Values that are unique to each deploy, such as the domain name, the number of connections, and JVM parameters related to the resource scale of the target environment.

Information that is identical across all deploys, such as the dependency information mentioned in Principle 2, is outside the scope of this principle. Likewise, information that varies between deploys but is business-related, such as the conversion rate used for fund settlement, is not the kind of configuration this principle discusses.

Most developers know how to separate code from configuration by using configuration files, but this approach still has some drawbacks:

  1. Configuration files are easily committed to the code repository by accident, leaking passwords, certificates, and other sensitive information. A configuration file committed to the repository is also likely to be deployed to the target environment together with the application, which can easily lead to the application running with the wrong configuration in the target environment, or to configuration conflicts.

  2. Configuration files tend to be scattered across different directories and come in different formats (the format is often tied to the development language and framework), which makes unified configuration management difficult.

To avoid these problems, this principle requires that configuration be stored in the environment. A typical approach is to store configuration in environment variables, which completely separates code from configuration, uses a format independent of any development language or framework, and cannot be committed to the code repository by mistake. You can also use a configuration manager such as Spring Cloud Config Server to push configuration to services and to handle versioning and change management of the configuration at the same time.
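
A minimal sketch of reading deployment-specific configuration from environment variables; the variable names DATABASE_URL, DATABASE_USER, and CACHE_URL are my own illustrative choices, not mandated by 12-Factor:

```java
public final class AppConfig {

    // Each value comes from the deploy's environment, never from a file in the repo.
    public final String databaseUrl;
    public final String databaseUser;
    public final String cacheUrl;

    private AppConfig(String databaseUrl, String databaseUser, String cacheUrl) {
        this.databaseUrl = databaseUrl;
        this.databaseUser = databaseUser;
        this.cacheUrl = cacheUrl;
    }

    // Fail fast at startup if a required variable is missing in this deploy.
    public static AppConfig fromEnvironment() {
        return new AppConfig(
                require("DATABASE_URL"),
                require("DATABASE_USER"),
                require("CACHE_URL"));
    }

    private static String require(String name) {
        String value = System.getenv(name);
        if (value == null || value.isEmpty()) {
            throw new IllegalStateException("Missing required environment variable: " + name);
        }
        return value;
    }
}
```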

Principle 4: Treat backing services as attached resources

Backing services here are the various services an application depends on at runtime, such as databases, message brokers, and caching systems; cloud-native applications also tend to depend on log collection services, object storage services, and various services accessed through APIs. Treating them as attached resources means treating these services as external resources that are accessed over the network.

This principle has several implications:

  1. Do not run these services locally inside the application: cloud-native applications are required to be stateless, so state information should be stored in external services (see immutable server). At the same time, the microservices model requires that reliability and scalability be addressed per application. If a database is embedded in the application, the microservices platform can neither replace a failed application instance to achieve high availability, nor scale the application out automatically to achieve scalability, because an instance that contains two completely different kinds of software (an application and a database) cannot be scaled in a single uniform way. In addition, an application that embeds these services locally cannot take full advantage of the capabilities the cloud platform provides to simplify operations; for example, an application with a local database, rather than one using the cloud platform's database service, obviously cannot use the automatic backup, security, and high-availability features that the database service provides.

  2. Access backing services through a URL or through a service registration center: the application should be deployable to different target environments, against any concrete implementation of a backing service, without any code changes; there should be no tight coupling between the application and its backing services (see the sketch after this list).

  3. Similar to the principle of explicitly declaring dependencies, it is best for the application to explicitly declare the backing services it uses, so that the cloud platform can bind service resources automatically and, when a backing service fails, restore it automatically.
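
A minimal sketch, assuming the JDBC driver for whatever database the deploy attaches is on the classpath, of treating a database as an attached resource located purely by a URL taken from the environment; the DATABASE_* variable names are illustrative assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class AttachedDatabase {

    // The application only knows "a database lives at this URL"; swapping a local
    // MySQL for a managed cloud database is a config change, not a code change.
    public static Connection connect() throws SQLException {
        String url = System.getenv("DATABASE_URL");      // e.g. jdbc:postgresql://host:5432/app
        String user = System.getenv("DATABASE_USER");
        String password = System.getenv("DATABASE_PASSWORD");
        return DriverManager.getConnection(url, user, password);
    }
}
```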

Principle 5: Strictly separate build, release, and run

In this principle, the concepts of build, release, and run may differ from how they were used earlier, so they need to be clarified first:

  • Build refers to turning the application's code into an executable artifact: a specific version of the code is pulled, its dependencies are fetched, it is compiled into binaries (for compiled languages), and it is packaged together with resource files.

  • Release refers to combining the result of the build with the deploy's configuration and placing it in the runtime environment.

  • Run refers to starting one or more processes of the release in the runtime environment.

This principle requires that build, release, and run be strictly separated as three distinct steps:

  1. Do not modify running code directly or patch a running application, because such changes are very difficult to propagate back to the build step, and the running code then becomes "the only existing copy". Likewise, do not modify the configuration of a running application; configuration changes should be confined to the release stage (see immutable server).

  2. The run step should be kept very simple: it should only start processes. Assembling resource files should be confined to the build stage, and combining the build with configuration should be confined to the release stage.

At the same time, every release should have a unique release ID, and releases should be append-only, like entries in a ledger: once created, a release cannot be modified (see the sketch after this list). The benefits of doing this are:

  1. Any running code can be traced back to its corresponding release and build, which is the basis for mechanisms such as re-releasing, automatically replacing failed instances, and rolling back to a previous version after a faulty release.

  2. The run step is extremely simple, so the application can be restarted simply and quickly in cases such as hardware reboots, instance failures, and scale-out.
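
As a small illustration of the "unique, append-only release" idea (the Release type and ledger below are my own sketch, assuming a recent Java version with records, not part of the 12-Factor text): a release is just an immutable pairing of a build with a deploy's configuration, identified by an ID that is never reused or modified.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// A release immutably pairs a build artifact with a deploy's configuration.
record Release(String releaseId, String buildId, Map<String, String> config, Instant createdAt) {}

// Releases are only ever appended; rolling back means running an older entry again,
// never editing an existing one.
class ReleaseLedger {
    private final List<Release> entries = new ArrayList<>();

    void append(Release release) {
        entries.add(release);
    }

    List<Release> history() {
        return Collections.unmodifiableList(entries);
    }
}
```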

Principle 6: Run the application as one or more stateless processes

This principle requires that application processes keep no internal state; any state information should be stored in external backing services such as databases and caching systems. Data sharing between application instances should likewise go through external services such as databases and caching systems: sharing data between instances directly not only violates statelessness, it also introduces a single point of serialization, which creates obstacles to scaling the application out.

In the microservices model, an application process should not cache data internally for use by future requests, because the microservices model runs multiple instances of an application, and future requests will very likely be routed to other instances. Sticky sessions can be used to keep requests pinned to the same instance, but both cloud-native applications and the microservices model strongly oppose the use of sticky sessions, for the following reasons:

  1. Sticky sessions make load balancing hard to achieve, because the balance depends not only on the load balancing strategy but also on the behavior of the sessions themselves. For example, many of the sessions on some instances may have ended while the sessions on other instances are still active; the load across these two groups of instances is then unbalanced, and the load balancer cannot transfer an active session to an idle instance.

  2. Starting new application instances does not immediately increase the overall processing capacity of the application, because the new instances can only take on new sessions, while old sessions remain stuck to the old instances.

  3. When an application instance exits, its sessions are lost. So when an instance fails, even if the cloud platform can replace the failed instance automatically, user data is still lost. Even planned maintenance of an instance requires migrating the sessions on that instance beforehand, which usually means writing complex business code.

    In the traditional model, session replication between two machines makes it possible to take one machine offline for maintenance without users noticing (at the cost of halving processing capacity). But in the microservices model an application usually has far more than two instances, and replicating sessions across a large number of instances complicates what should be a very simple relationship between them, making undifferentiated automated maintenance by the cloud platform impossible. In addition, session replication between instances means the instances share data directly, which creates obstacles to scaling the application out.

Sticky sessions are therefore a major obstacle to an application's availability and scalability, and using them is clearly not worth the cost. A better approach is to store session information in a caching service (see the sketch below).
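
A minimal sketch of externalizing session state instead of using sticky sessions; the SessionStore interface and the in-memory stand-in below are my own illustration, and in a real deploy the implementation would wrap a caching service such as Redis or Memcached attached as a backing service per Principle 4:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Session state lives outside the application process, so any instance can serve
// any request, and instances can be replaced or scaled out freely.
interface SessionStore {
    void put(String sessionId, String key, String value);
    String get(String sessionId, String key);
}

// Stand-in used only for this sketch; a production implementation would delegate
// to a cache service located via configuration in the environment (Principle 3).
class InMemorySessionStore implements SessionStore {
    private final Map<String, Map<String, String>> data = new ConcurrentHashMap<>();

    public void put(String sessionId, String key, String value) {
        data.computeIfAbsent(sessionId, id -> new ConcurrentHashMap<>()).put(key, value);
    }

    public String get(String sessionId, String key) {
        Map<String, String> session = data.get(sessionId);
        return session == null ? null : session.get(key);
    }
}
```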

Principle 7: Export services via port binding

A server-side application exposes its services through a network port; there is no question about that. But this principle has two deeper implications:

  1. Both cloud-native applications and the microservices model require the application to be completely self-contained, rather than depending on an external application server. Port binding means the application binds to a port directly, rather than having an application server bind the port on its behalf (see the sketch after this list).

    If an application server must be used, use an embedded one. Both cloud-native applications and the microservices model strongly oppose running multiple applications in the same application server, because in that model an error in one application affects the other applications in the same application server, and no single application can be scaled out on its own.

  2. Port binding should be handled automatically by the cloud platform. Besides binding the application to a port, the platform also needs to map internal ports to external ports and external ports to domain names. Over the application's lifetime, its instances go through many redeployments, restarts, and scale-outs, and the ports change, but the URL stays the same.
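
A minimal sketch of a self-contained service that binds a port directly, using the HTTP server built into the JDK; the PORT variable name follows a common platform convention and is an assumption here, not something mandated by the 12-Factor text:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class PortBoundService {
    public static void main(String[] args) throws IOException {
        // The platform tells the process which port to bind; no external app server involved.
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));

        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "hello".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        System.out.println("listening on port " + port);
    }
}
```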

Principle 8: Scale out via the process model

The opposite of scaling via the process model is scaling via the threading model, a relatively traditional approach typified by Java applications. When we start a Java process, we usually set upper and lower bounds for each memory region through JVM parameters, and we may also set upper and lower bounds for one or more thread pools at the application level. As external load changes, the memory the process occupies and the number of threads inside it can scale between these preset bounds. This approach is also known as vertical scaling (scaling up).

This approach has its problems, however. First, as the process's memory footprint and thread count grow, some of the application's performance metrics may not improve in step and may even degrade (often because of contention for some resource that cannot be scaled). Such uneven scaling handles increases in external load poorly and can sometimes even be counterproductive.

Second, to allow the process itself to scale vertically, memory and corresponding CPU resources must be reserved for it at the virtual machine or container level, which wastes a large amount of resources. (The virtual machine or container could, of course, scale vertically together with the process; this is technically feasible, but it creates unnecessary difficulties for resource scheduling in the virtual machine or container management platform, such as frequent VM migrations or container restarts.)

So the approach favored today is to use "fixed-size" processes (for the Java example above, fixed memory and thread pool capacities), starting more processes when external load rises and stopping some when it falls. This is what this principle means by scaling via the process model, also known as horizontal scaling (scaling out); a small sketch follows.
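
A minimal sketch of a "fixed-size" process in the Java example's terms; the WORKER_THREADS variable is an illustrative assumption, and the point is that per-process capacity stays constant while the platform varies the number of processes:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedSizeWorker {
    public static void main(String[] args) {
        // Per-process capacity is fixed at startup; it never grows under load.
        int threads = Integer.parseInt(System.getenv().getOrDefault("WORKER_THREADS", "8"));
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        // Handle work with the fixed pool. When overall load rises, the platform
        // starts more identical processes instead of enlarging this one.
        pool.submit(() -> System.out.println("processing on a fixed-size pool of " + threads));
        pool.shutdown();
    }
}
```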

The benefit of this approach is that as the number of processes grows, the application's performance metrics improve in step; even if the improvement is not linear, it follows a smooth and predictable curve, which makes the application far more stable in the face of changing external load.

Cloud-native applications and the microservices model strongly advocate scaling via the process model as the only way to scale. Besides the reasons above, another reason is that the process is the smallest unit of execution the cloud platform can operate on (threads can be manipulated through other technical means, but that will never be a generic capability of a cloud platform). Based on monitoring data at every level, the platform can decide by preset rules whether to add or remove processes for an application. For example, when the front-end load balancer detects that the number of concurrent users accessing a back-end application exceeds a threshold, it can immediately start more processes for that application to absorb the extra load, and optionally scale the database behind that application as well.

If the application is instead scaled vertically at that moment, the cloud platform neither knows about the change in the application's processing capacity nor can it manage that change in a predictable way, let alone coordinate the application's front and back ends around the change; in other words, the application's scaling behavior escapes the platform's management. In the microservices model, if a large number of processes all scale vertically, resource scheduling on the platform descends into chaos.

Note 3: This principle might more aptly be called the horizontal scaling principle, but to stay consistent with the original wording of the twelve factors we still call it "scaling via the process model" here.

Principle 9: Maximize robustness with fast startup and graceful shutdown

This principle requires that the application can start and stop almost instantly (ideally within a few seconds or less), because this facilitates rapid scale-out and rapid redeployment after a change or a failure, both of which contribute to the robustness of the application.

Fast startup has come up more than once already. At the beginning of the concepts chapter we mentioned that cheap process spawning is essential to multiprogramming, and the microservices model can to some extent be seen as an extension of multiprogramming into the web domain and distributed systems. Cheap process spawning refers to an operating system capability and is the foundation of fast application startup, but beyond it a great deal of optimization work is needed to ensure an application can start within seconds, requiring developers to master sophisticated tuning techniques and tools. Some of this work must be done in the initial design of the application: if the application is too large or pulls in too many libraries, no amount of later optimization will bring the startup time down to a few seconds.

In "Principle 5: Strictly separate build, release, and run" we also noted that the run step should be very simple, and that "simple" implicitly means fast, so that the application can restart quickly after hardware reboots, instance failures, and scale-outs. In addition, "Principle 6: Run the application as one or more stateless processes" is also related to fast startup: staying stateless and using the cache service provided by the cloud platform, rather than loading a cache inside the application, avoids time-consuming cache warming during startup.

Compared with fast startup, graceful shutdown involves a broader set of concerns. Graceful shutdown means minimizing the negative impact that terminating the application has on its users (for a microservice, the users may be people or other microservices).

For short tasks this generally means refusing all new requests and finishing the requests already accepted before terminating; for long tasks it generally means clients reconnecting after the application restarts, and checkpointing tasks so that they resume after the restart. Beyond that, graceful shutdown also needs to release all resources locked by the process and give full consideration to transactional integrity and the idempotence of operations.

Finally, the application must also cope with sudden termination: when hardware fails or the process crashes, the application must ensure the data it uses is not corrupted. An application that stays stateless and delegates its data to backing services can easily externalize the complexity of coping with sudden exits.

  • The processes of a 12-Factor application are disposable, meaning they can be started or stopped at a moment's notice. This facilitates fast, elastic scaling, rapid deployment of code or configuration changes, and robust deployment of the application.
  • Processes should strive to minimize startup time.
  • Processes shut down gracefully when they receive a SIGTERM signal (see the sketch after this list).
  • Processes should also be robust against sudden death, for example due to a failure in the underlying hardware.
  • A 12-Factor application should be designed to handle unexpected, non-graceful terminations.
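A minimal sketch of reacting to SIGTERM in Java: on a normal SIGTERM the JVM runs its shutdown hooks; the pool size and grace period below are illustrative assumptions.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulWorker {
    public static void main(String[] args) {
        ExecutorService requestPool = Executors.newFixedThreadPool(4);

        // On SIGTERM the JVM runs shutdown hooks: stop accepting new work,
        // let in-flight work finish, then release resources and exit.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            requestPool.shutdown();                       // refuse new requests
            try {
                if (!requestPool.awaitTermination(30, TimeUnit.SECONDS)) {
                    requestPool.shutdownNow();            // give up after a grace period
                }
            } catch (InterruptedException e) {
                requestPool.shutdownNow();
                Thread.currentThread().interrupt();
            }
            System.out.println("shut down gracefully");
        }));

        requestPool.submit(() -> System.out.println("handling a request"));
    }
}
```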

Principle 10: Keep the development environment and the production environment equivalent

The shallow meaning of this principle is to use the same software stack in development and production, and to use the same configuration for that stack as far as possible, to avoid "It works on my machine." problems. The principle opposes using different backing services in different environments: although adapters or compatibility code can smooth over the differences between backing services, ensuring that those adapters and that code actually behave as expected consumes a great deal of developer and tester effort, which accumulates into a huge amount of extra work over the application's development cycle and is an entirely unnecessary waste of resources.

In recent years the performance of personal computers has improved enormously, and for a while developers could run the same software stack locally as in production rather than resorting to the lightweight substitutes of the past. But with the rise of cloud-native applications and the microservices model the situation has changed subtly again: developing a microservice requires the basic services of the cloud platform and other microservices, which are increasingly difficult to run in full locally. At the same time, fully online development is becoming a trend, in which case there really is no difference at all between development and production, at least in the software stack.

As I write this, Red Hat is in the process of acquiring Codenvy, an online development environment startup, to strengthen its cloud platform product OpenShift, while Cloud9, a similar startup, was acquired by Amazon about a year earlier.

The deeper meaning of this principle is to minimize the gaps in time and personnel between development and production. Code in the development environment changes every day, but those changes often accumulate for weeks or even months before being released to production; this is the huge time gap between development and production. Developers care only about the development environment and operations staff care only about production, with little overlap in their work; this is the huge personnel gap between development and production.

For the former gap, this principle calls for more frequent releases to production and for mechanisms that allow developers to push an update to production within hours or even minutes, which is exactly what continuous delivery, discussed in the concepts part of this chapter, advocates. For the latter gap, it calls for developers to look beyond their own code in the development environment and pay close attention to how the code is deployed and how it behaves in production, which is exactly what DevOps advocates.

  • Gaps between environments: the time gap between deploys, the division of labor between people, and the difference in tools.
  • A 12-Factor application must narrow the gap between local and production environments if it wants to achieve continuous deployment.
  • Developers of a 12-Factor application should resist using different backing services in different environments.
  • Declarative provisioning tools such as Chef and Puppet, combined with lightweight virtual environments such as Vagrant, let developers make their local environments closely approximate production.

Principle 11: Treat logs as event streams

  • Logs make the behavior of a running application transparent. In server-based environments logs are usually written to a file on disk, but that is only one output format.
  • Logs should be the aggregation of event streams: the output streams of all running processes and backing services, collected in chronological order. Although you may need to look back over many lines when investigating a problem, the rawest form of a log is one event per line. A log has no fixed beginning or end; it grows continuously as long as the application is running.
  • A 12-Factor application never concerns itself with storing its own output stream; it should not attempt to write to or manage log files, and instead simply writes its event stream to stdout (see the sketch after this list).
  • In staging and production deploys, each process's output stream is captured by the execution environment, collated with the other streams, and routed to one or more final destinations for viewing or long-term archiving. These destinations are neither visible to nor configurable by the application; they are managed entirely by the execution environment. Open-source tools such as Logplex and Fluentd serve this purpose.
  • These event streams can be routed to a file, or watched in real time in a terminal. Most importantly, they can be sent to a log indexing and analysis system such as Splunk, or to a general-purpose data store such as Hadoop/Hive. These systems offer powerful and flexible capabilities for inspecting an application's behavior over time, including:
    • Finding specific events in the past.
    • Graphing large-scale trends, such as requests per minute.
    • Triggering user-defined alerts in real time based on rules, for example when errors exceed a threshold of one per minute.
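A minimal sketch of a process that treats its log as an event stream, writing one event per line to stdout and leaving collection and routing to the execution environment; the event format here is my own illustrative choice:

```java
import java.time.Instant;

public class EventStreamLogger {

    // One event per line on stdout; the platform (Logplex, Fluentd, ...) captures,
    // collates, and routes the stream, so the app never manages log files itself.
    static void log(String level, String message) {
        System.out.println(Instant.now() + " " + level + " " + message);
    }

    public static void main(String[] args) {
        log("INFO", "request received path=/orders");
        log("ERROR", "backing service unavailable service=payments");
    }
}
```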

Principle 12: Run admin and management tasks as one-off processes

  • A process formation is the set of processes used to handle an application's regular traffic (such as web request processes). Separately, developers often need to perform one-off administrative or maintenance tasks for the application, such as:
    • Running database migrations (manage.py migrate in Django, rake db:migrate in Rails).
    • Running a console (also known as a REPL shell) to execute arbitrary code or inspect the application against the live database. Most languages provide a REPL by running the interpreter with no arguments (python, perl) or via a separate command (irb for Ruby, rails console for Rails).
    • Running one-off scripts committed to the application's code repository.
  • The 12-Factor methodology especially favors languages that provide a REPL shell, because it makes running one-off scripts easy. One-off admin processes should use the same codebase and configuration as the application's regular long-running processes (a minimal sketch follows).
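
A minimal sketch of a one-off admin task in this document's Java setting: a separate main entry point that lives in the same codebase and reads the same environment configuration as the regular processes, runs once, and exits. The table name and SQL below are purely illustrative assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CleanupTask {
    public static void main(String[] args) throws Exception {
        // Same codebase, same release, same environment config as the web processes.
        String url = System.getenv("DATABASE_URL");
        String user = System.getenv("DATABASE_USER");
        String password = System.getenv("DATABASE_PASSWORD");

        try (Connection connection = DriverManager.getConnection(url, user, password);
             Statement statement = connection.createStatement()) {
            int removed = statement.executeUpdate(
                    "DELETE FROM sessions WHERE expires_at < CURRENT_TIMESTAMP");
            System.out.println("one-off cleanup removed " + removed + " expired sessions");
        }
        // The process exits when the task completes; it is not part of the formation.
    }
}
```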

Summary

Official site: https://12factor.net/zh_cn/

There is also a PDF version, but the official download costs $25, so I have uploaded a copy. Link: https://pan.baidu.com/s/1EZJJrgkvlpU1_d1xbVaPWg  Extraction code: 5gfp

Further reading: https://www.jianshu.com/p/bbdccd020a1d
