Microservices in Practice (6): Choosing a Microservice Deployment Strategy

[Editor's Note] This blog is the sixth article in a series on building applications with microservices. The first article introduces the microservice architecture pattern and discusses the advantages and disadvantages of microservices. Subsequent articles cover different aspects of the architecture: using API gateways, inter-process communication, service discovery, and event-driven data management. In this post, we discuss strategies for deploying microservices.

 

Articles in this series:

  • Microservices in Practice (1): Advantages and Disadvantages of Microservice Architecture
  • Microservices in Practice (2): Using API Gateway
  • Microservices in Practice (3): Inter-Process Communication in a Microservice Architecture
  • Microservices in Practice (4): Feasible Solutions and Practical Cases for Service Discovery
  • Microservices in Practice (5): Event-Driven Data Management for Microservices

Motivation

 

Deploying a monolithic application means running one or more identical copies of a single, usually large, application: you provision several (N) servers, physical or virtual, and run several (M) instances of the application on them. Deploying a monolithic application is not always entirely straightforward, but it is certainly simpler than deploying a microservice application.

 

A microservice application consists of hundreds of services, which can be written in different languages and frameworks. Each service is a mini-application with its own deployment, resource, scaling, and monitoring requirements. For example, you need to run a certain number of instances of each service based on the demand for that service, and each instance must be given appropriate CPU, memory, and I/O resources. What makes this even more challenging is that, despite this complexity, deploying services must be fast, reliable, and cost-effective.

 

There are several microservice deployment patterns. Let's start with the multiple-service-instances-per-host pattern.

 

Multiple service instances per host pattern

 

One way to deploy microservices is the multiple-service-instances-per-host pattern. With this pattern, you provision one or more physical or virtual machines and run multiple service instances on each one. In many ways, this is the traditional approach to application deployment. Each service instance runs on a well-known port on one or more hosts, and the hosts are typically treated as pets.

 

The following diagram shows this architecture:

[Figure 1: Multiple service instances per host]

 

This pattern has a couple of variants. One variant is for each service instance to be a process or a process group. For example, a Java service instance might be deployed as a web application on an Apache Tomcat server, while a Node.js service instance might consist of a parent process and several child processes.

 

The other variant is to run multiple service instances in the same process or process group. For example, you can run multiple Java web applications on the same Apache Tomcat server, or run multiple OSGI bundle instances within the same OSGI container.

 

The multiple-service-instances-per-host pattern has both advantages and disadvantages. The main advantage is efficient resource utilization: multiple service instances share the server and the operating system. It is even more efficient when a process group runs multiple service instances, for example, multiple web applications sharing the same Apache Tomcat server and JVM.

 

Another advantage is that deploying a service instance is fast: just copy the service to the host and start it. If the service is written in Java, you copy a JAR or WAR file; for other languages, such as Node.js or Ruby, you copy the source code. Either way, the amount of data copied over the network is relatively small.
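
For instance, a deployment of this kind can amount to little more than the following sketch; the host name and Tomcat path here are illustrative, not from the original article:

```bash
# Copy the new version of the service to a host that already runs Tomcat.
scp build/my-service.war ops@host1:/opt/tomcat/webapps/
# Tomcat hot-deploys the new WAR; alternatively, restart the container process:
ssh ops@host1 'sudo systemctl restart tomcat'
```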

 

Because there is so little overhead, starting a service is also fast. If the service is a self-contained process, you simply start it; otherwise, if the instance runs inside a container process group, such as a Tomcat server, you either deploy it dynamically into the container or restart the container.

 

Despite these advantages, the multiple-service-instances-per-host pattern has drawbacks. A major one is that there is little or no isolation between service instances, unless each instance is a separate process. Even then, while you can monitor each instance's resource usage, you cannot limit the resources each instance consumes, so a misbehaving service instance can take up all of the host's memory or CPU.

 

There is no isolation at all between service instances running in the same process. All instances might, for example, share the same JVM heap, so a misbehaving service instance can easily break the other services in the same process; worse, it may be impossible to monitor the resources used by each individual instance.

 

Another serious problem is that the operations team must know the detailed steps for deploying each service. Services can be written in a variety of languages and frameworks, so the development team has many details to communicate to operations. This complexity increases the risk of errors during deployment.

 

As you can see, despite its familiarity, the multiple-service-instances-per-host pattern has some serious drawbacks. Let's look at other ways of deploying microservices that avoid these problems.

 

Service instance per host pattern

 

Another way to deploy microservices is the service-instance-per-host pattern. With this pattern, each service instance runs in isolation on its own host. There are two different specializations of this pattern: service instance per virtual machine and service instance per container.

 

Service instance per virtual machine pattern

 

With the service-instance-per-VM pattern, the service is packaged as a virtual machine image, such as an Amazon EC2 AMI. Each service instance is a VM (for example, an EC2 instance) launched from that image. The following diagram shows this architecture:

[Figure 2: Service instance per virtual machine]

 

Netflix uses this approach to deploy its video streaming service. Netflix uses Aminator to package each service as an EC2 AMI, and each running service instance is an EC2 instance.

 

There are several tools you can use to build your own VM images. You can configure a continuous integration (CI) server, such as Jenkins, to invoke Aminator to package a service as an EC2 AMI. packer.io is another option for automated VM image creation; unlike Aminator, it supports a range of virtualization technologies including EC2, DigitalOcean, VirtualBox, and VMware.
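
For illustration, a minimal packer.io template for the EC2 case might look roughly like the following sketch; the region, source AMI ID, and file paths are placeholders, not values from the article:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0abcdef1234567890",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "my-service-{{timestamp}}"
  }],
  "provisioners": [
    {
      "type": "file",
      "source": "build/my-service.jar",
      "destination": "/tmp/my-service.jar"
    },
    {
      "type": "shell",
      "inline": ["sudo mv /tmp/my-service.jar /opt/my-service.jar"]
    }
  ]
}
```

Running `packer build template.json` then produces an AMI that a CI server can publish, after which each service instance is simply an EC2 instance launched from that image.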

 

Boxfuse takes an innovative approach to building VM images that overcomes the drawbacks of virtual machines described below. Boxfuse packages a Java application as a minimal VM image. These images are fast to build, boot quickly, and are more secure because they expose a smaller attack surface.

 

CloudNative offers Bakery, a SaaS application for creating EC2 AMIs. You can configure your CI server to invoke Bakery when the tests for a microservice pass, and Bakery then packages the service as an AMI. Using a SaaS offering like Bakery means you don't have to waste time setting up your own AMI creation infrastructure.

 

The service-instance-per-VM pattern has a number of advantages. The main one is that each service instance runs in complete isolation, with a fixed amount of CPU and memory that cannot be taken away by other services.

 

Another benefit is that you can leverage mature cloud infrastructure, such as that provided by AWS, including useful features like load balancing and autoscaling.

 

A further benefit is that a VM encapsulates the service's implementation technology. Once a service is packaged as a VM, it becomes a black box: the VM's management API becomes the deployment API, and deployment becomes much simpler and more reliable.

 

The service-instance-per-VM pattern also has drawbacks. One is less efficient resource utilization: each service instance consumes the resources of an entire virtual machine, including its operating system. Moreover, in a typical public IaaS environment, VMs come in fixed sizes and may be underutilized.

 

Moreover, a public IaaS typically charges per VM, regardless of whether the VM is busy. AWS does provide autoscaling, but it is slow to react to sudden changes in demand, so users tend to over-provision VMs, which increases the cost of deployment.

 

Another disadvantage is that deploying a new version of a service is usually slow. VM images are slow to build because of their size, and for the same reason VMs are slow to initialize, since the operating system has to boot. This is not always the case, however; lightweight VMs, such as those built by Boxfuse, are faster.

 

A third disadvantage is that the operations team ends up responsible for a great deal of undifferentiated heavy lifting. Unless you use a tool such as Boxfuse that takes over the work of building and managing VMs, considerable time goes into work that has little to do with the core business.

 

So let's look at another way to deploy microservices that retains the characteristics of VMs but is more lightweight.

 

Service instance per container pattern

 

When using this pattern, each service instance runs in its own container. Containers are a virtualization mechanism at the operating-system level. A container consists of one or more processes running in a sandbox: from the processes' perspective, they have their own namespace and root filesystem, and the container's memory and CPU resources can be limited. Some container implementations also offer I/O limits. Examples of container technologies include Docker and Solaris Zones.

 

The following figure shows this pattern:

[Figure 3: Service instance per container]

 

Using this pattern requires packaging the service as a container image, a filesystem image containing the applications and libraries the service needs to run. Some container images consist of a complete Linux root filesystem; others are more lightweight. To deploy a Java service, for example, you build a container image containing the Java runtime, perhaps an Apache Tomcat server, and the compiled Java application.
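
For example, a minimal Dockerfile for such an image might look like the following sketch; the base image, paths, and port are illustrative assumptions:

```dockerfile
# Base image providing the Java runtime (illustrative choice)
FROM openjdk:8-jre-alpine
# Copy the compiled application into the image (path is hypothetical)
COPY build/libs/my-service.jar /app/my-service.jar
# Port the service is assumed to listen on
EXPOSE 8080
# Start the service when the container starts
ENTRYPOINT ["java", "-jar", "/app/my-service.jar"]
```

Building it with `docker build -t my-service .` yields an image that can be started on any Docker host.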

 

Once the service is packaged as a container image, you launch one or more containers. Multiple containers usually run on each physical or virtual machine, and a cluster management system, such as Kubernetes or Marathon, may be used to manage them. A cluster manager treats the hosts as a pool of resources and decides which host to schedule each container on, based on the container's resource requirements.
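
As a sketch of how a cluster manager is told what to run, the following hypothetical Kubernetes Deployment manifest requests three instances of a service and declares the resource figures used for scheduling; the image name, port, and numbers are assumptions, not from the article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3                  # run three service instances
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0   # hypothetical image
          ports:
            - containerPort: 8080
          resources:
            requests:          # the scheduler picks a host based on these
              cpu: "250m"
              memory: "256Mi"
            limits:            # hard caps enforced on the container
              cpu: "500m"
              memory: "512Mi"
```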

 

The service-instance-per-container pattern also has advantages and disadvantages. Its advantages are similar to those of VMs: service instances are fully isolated from one another, and you can easily monitor the resources consumed by each container. Like VMs, containers encapsulate the technology used to implement a service, and the container management API can serve as the API for managing services.

 

Unlike VMs, however, containers are a lightweight technology. Container images are fast to build; for example, packaging a Spring Boot application as a container image takes as little as 5 seconds on a laptop. Containers also start quickly, since there is no operating system to boot: when a container starts, the service itself starts.

 

Using containers has some drawbacks as well. Although container infrastructure is evolving rapidly, it is not as mature as VM infrastructure. And because containers share the host's OS kernel, they are not as secure as VMs.

 

In addition, you are responsible for the undifferentiated heavy lifting of administering container images and infrastructure. Unless you use a hosted container solution such as Google Container Engine or Amazon EC2 Container Service (ECS), you have to administer both the container infrastructure and, possibly, the VM infrastructure it runs on.

 

Third, containers are often deployed on infrastructure that charges per VM, so customers will likely incur the extra cost of over-provisioning VMs to handle spikes in load.

 

Interestingly, the distinction between containers and VMs is becoming increasingly blurred. As mentioned earlier, Boxfuse VMs are fast to build and start, and the Clear Containers project aims to create lightweight VMs. Unikernels are also attracting attention; Docker recently acquired Unikernel Systems.

 

Beyond these, serverless deployment technologies, which avoid the aforementioned drawbacks of both containers and VMs, are attracting more and more attention. Let's take a look.

 

Serverless deployment

 

AWS Lambda is an example of a serverless deployment technology. It supports Java, Node.js, and Python services. To deploy a microservice, you package it as a ZIP file and upload it to AWS Lambda, along with metadata that includes, among other things, the name of the function invoked to handle a request (an event). AWS Lambda automatically runs enough instances of your microservice to handle requests, and you are billed only for the time consumed and the memory used. Of course, the devil is in the details, and AWS Lambda has limitations. But the fact that you don't need to worry about servers, VMs, or containers at all is incredibly attractive.
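
To make this concrete, a Lambda request handler in Java might look like the following minimal sketch, using the aws-lambda-java-core interface; the class name and behavior are purely illustrative:

```java
package example;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// A minimal, illustrative AWS Lambda request handler (not from the article).
public class GreetingHandler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Context context) {
        // Keep the function stateless: Lambda may run each request in a
        // separate instance, so nothing is stored between invocations.
        context.getLogger().log("Handling event: " + input);
        return "Hello, " + input;
    }
}
```

After packaging this as a ZIP file and uploading it, the handler would be registered in the function's metadata as something like example.GreetingHandler::handleRequest.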

 

Lambda functions are stateless services that typically handle requests by invoking other AWS services. For example, a Lambda function activated when an image is uploaded to an S3 bucket might insert an item into a DynamoDB images table and publish a message to a Kinesis stream to trigger image processing. Lambda functions can also be activated by third-party web services.

 

There are four ways to activate a Lambda function:

  • Directly, using a web service request (see the CLI sketch after this list)
  • Automatically, in response to events generated by AWS services such as S3, DynamoDB, Kinesis, or Simple Email Service
  • Automatically, via an AWS API Gateway that handles HTTP requests from application clients
  • Periodically, on a cron-like schedule, much like a timer
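
As mentioned in the first item, the direct route can be exercised from the AWS CLI. A hedged example, in which the function name and payload are hypothetical:

```bash
# Invoke a hypothetical Lambda function synchronously and save the result.
# (With AWS CLI v2, also pass: --cli-binary-format raw-in-base64-out)
aws lambda invoke \
    --function-name my-image-service \
    --payload '{"imageKey": "uploads/photo.jpg"}' \
    response.json
```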

 

As you can see, AWS Lambda is a very convenient way to deploy microservices. Request-based pricing means you pay only for the work your services actually perform, and because you are not responsible for any IT infrastructure, you can concentrate on developing your application.

 

However, there are significant limitations. Lambda is not intended for deploying long-running services, such as a service that consumes messages from a third-party message broker. Requests must complete within 300 seconds. Services must be stateless, since in theory AWS Lambda may run a separate instance for each request. They must be written in one of the supported languages, and they must start quickly; otherwise, they may be timed out and terminated.

 

Summary

 

Deploying a microservice application is challenging. There can be hundreds of services written in a variety of languages and frameworks, and each service is a mini-application with its own unique deployment, resource, scaling, and monitoring requirements. There are several microservice deployment patterns, including service instance per VM and service instance per container. Another option is AWS Lambda, a serverless approach.

 

In the next and final blog in this series, we will discuss how to migrate a monolithic application to a microservices architecture.

 

This article is reproduced from: http://dockone.io/article/1066
