[AWS Call for Papers] ECS makes container orchestration easier

Amazon Elastic Container Service (Amazon ECS) is Amazon's proprietary, highly scalable, high-performance container management service that makes it easy to run, stop, and manage Docker containers on a cluster.

With ECS, you no longer need to install, operate, and scale your own cluster management infrastructure. Simple API calls let you start and stop Docker applications, query cluster state, and more. Containers can be placed across the cluster according to resource requirements, isolation policies, and availability requirements. ECS integrates with Amazon Elastic Container Registry, Elastic Load Balancing, Elastic Block Store, Elastic Network Interfaces, Virtual Private Cloud, IAM, and CloudTrail to provide a complete solution for running all kinds of containerized applications and services.

ECS is a proven solution that already underpins other AWS services such as Amazon SageMaker and Amazon Lex, which speaks to its security, reliability, and availability in production environments.

Launch types

ECS supports two launch types: Fargate and EC2.

Fargate launch type

With the Fargate launch type, you do not need to provision or manage the underlying infrastructure. You only define tasks (roughly equivalent to Kubernetes Pods), specifying CPU, memory, network, IAM policies, and so on, to run containerized applications.

EC2 launch type

The EC2 launch type runs containerized applications on a cluster of EC2 instances that you manage yourself.

With the Fargate launch type, services and tasks run on serverless infrastructure managed by ECS, so there are no Amazon EC2 instances or clusters to administer. If you need more control, use the EC2 launch type.

The following walks through using the Fargate launch type to deploy an application with an Angular 9 front end and a Spring Boot 2 back end.

Create Docker images

Dockerfile for the Spring Boot project:

Dockerfile.spring

FROM openjdk:8-jdk-slim

WORKDIR app
ARG APPJAR=target/heroes-api-1.0.0.jar
COPY ${APPJAR} app.jar

ENTRYPOINT ["java","-jar","app.jar"]

Dockerfile for the Angular project:

Dockerfile.angular

FROM httpd:2.4

ARG DISTPATH=./dist/
ARG CONFFILE=./heroes-httpd.conf
COPY ${DISTPATH} /usr/local/apache2/htdocs/
COPY ${CONFFILE} /usr/local/apache2/conf/httpd.conf

Taking deployment to Apache as an example, run the following command to extract the default httpd.conf:

docker run --rm httpd:2.4 cat /usr/local/apache2/conf/httpd.conf > heroes-httpd.conf

Modify the configuration file to enable proxy_module, proxy_http_module, and rewrite_module, then add the following content:

ProxyPreserveHost on
ProxyPass "/api" "http://127.0.0.1:8080/api"
ProxyPassReverse "/api" "http://127.0.0.1:8080/api"

RewriteEngine  on
RewriteRule ^/$ /en/index.html

# If an existing asset or directory is requested go to it as it is
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -f [OR]
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -d
RewriteRule ^ - [L]

# If the requested resource doesn't exist, use index.html
RewriteRule ^/zh /zh/index.html
RewriteRule ^/en /en/index.html

We will run both images in a single task, with Apache proxying API requests to the back end. Because containers in the same Fargate task share a network namespace, Apache can reach the Spring Boot container at 127.0.0.1:8080.

Build the images
Run the following commands to build the images:

docker build --build-arg APPJAR=heroes-api-1.0.0.jar -f Dockerfile.spring -t heroes-api .
docker build -f Dockerfile.angular -t heroes-web .
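
Optionally, you can smoke-test the images locally before pushing. Note that the Apache proxy targets 127.0.0.1:8080, so the /api routes only resolve when both containers share a network namespace, as they will inside the Fargate task; locally this at least verifies that the containers start and serve content:

docker run --rm -p 8080:8080 heroes-api
docker run --rm -p 80:80 heroes-web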

Push the images to ECR

Amazon Elastic Container Registry (Amazon ECR) is a managed Docker container registry service that lets developers easily store, manage, and deploy Docker container images.

  1. Create ECR repositories

First create two ECR repositories. They can be created from the console, or by running the following AWS CLI commands:

aws ecr create-repository --repository-name heroes/heroes-api
aws ecr create-repository --repository-name heroes/heroes-web
  2. Log in to the ECR repository

Run either of the following commands:

aws ecr get-login --no-include-email

or

aws ecr get-login-password | docker login --username AWS --password-stdin 888888888888.dkr.ecr.cn-north-1.amazonaws.com.cn
  3. Tag the images
docker tag heroes-api:latest 888888888888.dkr.ecr.cn-north-1.amazonaws.com.cn/heroes/heroes-api:latest
docker tag heroes-web:latest 888888888888.dkr.ecr.cn-north-1.amazonaws.com.cn/heroes/heroes-web:latest
  4. Push the images
docker push 888888888888.dkr.ecr.cn-north-1.amazonaws.com.cn/heroes/heroes-api:latest
docker push 888888888888.dkr.ecr.cn-north-1.amazonaws.com.cn/heroes/heroes-web:latest
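
To verify the push, you can list the images now stored in a repository:

aws ecr describe-images --repository-name heroes/heroes-api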

Create a cluster

An Amazon ECS cluster is a logical grouping of tasks or services. You can create multiple clusters in one account to keep resources separate.

Creating a Fargate cluster is very simple: enter the ECS clusters console, click "Create Cluster", select the "Networking only" cluster template, click "Next", enter the cluster name "heroes", and click "Create".
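
Alternatively, the same cluster can be created with a single AWS CLI call:

aws ecs create-cluster --cluster-name heroes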

Create task definition

A task definition is like a blueprint for your application; one is required to run Docker containers in Amazon ECS.

Some of the parameters that can be specified in a Fargate task definition (a JSON sketch follows the list):

  • The Docker image for the task
  • The amount of CPU and memory for the task
  • The logging configuration for the task
  • The IAM role used by the task
  • The data volumes used by the containers in the task
  • The command the container should run when it starts
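
As a rough illustration, a minimal Fargate task definition combining our two containers might look like the following JSON; the execution role name is an assumed placeholder, and the account ID, region, and CPU/memory sizes are the example values used elsewhere in this article:

{
    "family": "heroes",
    "networkMode": "awsvpc",
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "512",
    "memory": "1024",
    "executionRoleArn": "arn:aws-cn:iam::888888888888:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "heroes-api",
            "image": "888888888888.dkr.ecr.cn-north-1.amazonaws.com.cn/heroes/heroes-api:latest",
            "essential": true,
            "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }]
        },
        {
            "name": "heroes-web",
            "image": "888888888888.dkr.ecr.cn-north-1.amazonaws.com.cn/heroes/heroes-web:latest",
            "essential": true,
            "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
        }
    ]
}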

Perform the following steps to create a task definition:

  1. Enter the ECS task definition console, click Create task definition
  2. Select the FARGATE launch type, click Next
  3. On the configure task and container definitions page, enter the task definition name (heroes), select the task role, and configure the task memory and CPU

Valid combinations of CPU and memory:

CPU value | Memory value
256 (.25 vCPU) | 512 MB, 1 GB, 2 GB
512 (.5 vCPU) | 1 GB, 2 GB, 3 GB, 4 GB
1024 (1 vCPU) | 2 GB, 3 GB, 4 GB, 5 GB, 6 GB, 7 GB, 8 GB
2048 (2 vCPU) | 4 GB to 16 GB (in 1 GB increments)
4096 (4 vCPU) | 8 GB to 30 GB (in 1 GB increments)

Container storage and shared volumes
With the Fargate launch type, each task gets up to 10 GB of container storage, and shared volumes used by multiple containers can total up to 4 GB. To add a shared volume, click "Add volume" in the volume configuration section, enter the volume name, and click "Add". Note that task storage is ephemeral: once a Fargate task stops, its storage is deleted.
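
In JSON form, a shared volume and a mount point look roughly like this (the volume name shared-data and the mount path /data are illustrative placeholders):

"volumes": [
    { "name": "shared-data" }
],
"containerDefinitions": [
    {
        "name": "heroes-api",
        "mountPoints": [
            { "sourceVolume": "shared-data", "containerPath": "/data" }
        ]
    }
]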

Add containers
Next, click the "Add container" button in the container definitions section to add the heroes-api and heroes-web containers. Mount points and log configuration can be added in the "Storage and Logging" section of the advanced container configuration. By default, Fargate tasks send logs to CloudWatch Logs, and you can set the log group name and log stream prefix for each container.

Finally, click "Create" to complete the task definition.

Create Service

A service runs and maintains a specified number of instances of a task definition in an Amazon ECS cluster. If a task fails or stops for any reason, the service scheduler launches another instance of the task definition to replace it, maintaining the desired number of tasks according to the scheduling strategy in use.

In addition to maintaining the desired number of tasks, you can optionally run the service behind a load balancer, which distributes traffic across the tasks associated with the service.

In the ECS console, a service can be created from a cluster or from a task definition. Taking the task definition route as an example, the steps are as follows:

Configure basic parameters
Select the heroes task definition, then click Actions -> Create Service, and fill in the following parameters on the Configure Service page:

  • Launch type: FARGATE
  • Cluster: heroes
  • Service name: heroes
  • Number of tasks: 1
  • Deployment type: rolling update

Click Next.

Configure the network
In the VPC and security groups section, select the cluster VPC and subnets. A security group name is generated automatically by default; you can click Edit to change it or select an existing security group. If you create a new security group, an inbound rule for port 80 is configured by default.
Since we use an ELB, a public IP is not needed: set "Auto-assign public IP" to DISABLED.

Configure load balancing
We create a new ALB beforehand, without creating a target group, and configure:

  • Load balancer type: Application Load Balancer
  • Load balancer name: select the newly created ALB
  • Choose a container: heroes-web:80:80, then click "Add to Load Balancer" and enter the following parameters:
    Production listener port: create new, 80
    Production listener protocol: HTTP
    Target group name: create new, ecs-heroes
    Target group protocol: HTTP

Set up Auto Scaling
Service Auto Scaling: Do not adjust the service's desired count

After reviewing the configuration, click Create Service.
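
For reference, an equivalent service could also be created from the CLI; the subnet, security group, and target group ARN below are placeholders to substitute with your own values:

aws ecs create-service --cluster heroes --service-name heroes \
    --task-definition heroes --desired-count 1 --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-12345678],securityGroups=[sg-12345678],assignPublicIp=DISABLED}" \
    --load-balancers "targetGroupArn=arn:aws-cn:elasticloadbalancing:cn-north-1:888888888888:targetgroup/ecs-heroes/1234567890123456,containerName=heroes-web,containerPort=80"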

Return to the cluster page; you can see that the heroes cluster now contains one service and one running task.

In the cluster's Tasks view, if we stop the current task, a new task is started automatically to replace it, and the ELB target group is updated accordingly.

The Amazon ECS service scheduler contains logic to limit the frequency of service tasks restarting after repeated failures.

If a service's tasks repeatedly fail to enter the RUNNING state (jumping straight from PENDING to STOPPED), the interval between subsequent restart attempts is gradually increased, up to a maximum of 15 minutes.

To access the service, go to the heroes cluster -> Tasks and click a task to open its details page, where you can see the task's private IP address. Within the internal network, you can use the private IP to reach the application deployed by each task; public traffic reaches the service through the ELB.

Log configuration

awslogs

By default, Fargate tasks use the awslogs log driver to record logs to CloudWatch Log.

awslogs log driver options

Option | Required | Description
awslogs-create-group | No | Automatically create the log group; the IAM policy must include the logs:CreateLogGroup permission. The default value is false
awslogs-region | Yes | The region to which the log driver sends Docker logs
awslogs-group | Yes | The log group to which the log driver sends log streams
awslogs-stream-prefix | Yes | Log stream prefix; the log stream format is prefix-name/container-name/ecs-task-id
awslogs-datetime-format | No | Defines the multi-line start pattern in Python strftime format
awslogs-multiline-pattern | No | Defines the multi-line start pattern with a regular expression

Note: If both awslogs-datetime-format and awslogs-multiline-pattern are configured, the awslogs-datetime-format option takes precedence. Multi-line logging parses and matches every log message against the pattern, which may degrade logging performance.
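
For example, a container's awslogs configuration using these options might look like the following; the log group name and the strftime pattern (matching a timestamp prefix such as 2020-09-01 12:00:00) are illustrative:

"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/heroes",
        "awslogs-region": "cn-north-1",
        "awslogs-stream-prefix": "heroes",
        "awslogs-datetime-format": "%Y-%m-%d %H:%M:%S"
    }
}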

FireLens integration

FireLens for Amazon ECS can route logs to AWS services or AWS Partner Network (APN) destinations for log storage and analysis. FireLens can be used in combination with Fluentd and Fluent Bit. AWS provides Fluent Bit images and plugins for CloudWatch Logs and Kinesis Data Firehose. You can also use your own Fluentd or Fluent Bit images.

Fluent Bit and Fluentd are both log collectors. The Fluent Bit plugin uses fewer resources and is more efficient, so Fluent Bit is the recommended log router.

Select the task definition we created earlier and click "Create new revision" to enable FireLens integration.


After selecting fluentbit, the image address is filled in automatically, and clicking Apply adds the log_router container.

Note that to use the CloudWatch plugin, the task role must have the logs:CreateLogGroup, logs:CreateLogStream, logs:DescribeLogStreams, and logs:PutLogEvents permissions; to use the Kinesis Data Firehose plugin, it must have the firehose:PutRecordBatch permission.

Configure log_router

{
    "essential": true,
    "image": "128054284489.dkr.ecr.cn-north-1.amazonaws.com.cn/aws-for-fluent-bit:latest",
    "name": "log_router",
    "firelensConfiguration": {
        "type": "fluentbit"
    },
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/firelens",
            "awslogs-region": "cn-north-1",
            "awslogs-create-group": "true",
            "awslogs-stream-prefix": "firelens"
        }
    },
    "memoryReservation": 50
}

The log router's own logs use the awslogs driver and are recorded to CloudWatch.

Below we use Fluent Bit to forward logs to CloudWatch Logs and Kinesis Data Firehose.

Forward logs to CloudWatch Logs

Modify the log configuration of the heroes-api container: set the log driver to "awsfirelens" and the log option "Name" to "cloudwatch" to enable the CloudWatch Logs plugin. The full configuration is as follows:

{
    "essential": true,
    "image": "888888888888.dkr.ecr.cn-north-1.amazonaws.com.cn/heroes/heroes-api:latest",
    "name": "heroes-api",
    "logConfiguration": {
       "logDriver": "awsfirelens",
       "options": {
         "Name": "cloudwatch",
         "region": "cn-north-1",
         "log_group_name": "/ecs/heroes-api",
         "log_stream_prefix": "from-fluent-bit",
         "auto_create_group": "true"
       }
    }
}


After modifying the task definition, click "Create" to save the new revision.

Return to the cluster console, select the service we created, click "Update", select the latest task definition revision, check "Force new deployment", and step through to save the service.

After the service update succeeds, a new task is created. Open the CloudWatch console to view the logs, which look like this:

{
    "container_id": "cda4f603d8e485d48fd7e1a77b3737026221c165b8f6d582ed78bd947a12b911",
    "container_name": "/ecs-heroes-2-heroes-api-c2f3d4a8bdbcf3f9e601",
    "ecs_cluster": "arn:aws-cn:ecs:cn-north-1:888888888888:cluster/isd",
    "ecs_task_arn": "arn:aws-cn:ecs:cn-north-1:888888888888:task/078dd364-28b6-4650-b294-5eac6b39d08f",
    "ecs_task_definition": "heroes:2",
    "log": " /\\\\ / ___'_ __ _ _(_)_ __  __ _ \\ \\ \\ \\",
    "source": "stdout"
}

As you can see, new identification fields have been added to the log: container_id, container_name, ecs_cluster, ecs_task_arn, ecs_task_definition, and source.

In the task definition, click "Configure via JSON" and set the fluentbit option enable-ecs-log-metadata to false to disable the three ecs_* metadata fields above:

"firelensConfiguration": {
    "type": "fluentbit",
    "options": {
        "enable-ecs-log-metadata": "false"
    }
}

In the container's log configuration, add the log_key option to forward only the log field:

"log_key": "log"

Options supported by the CloudWatch Logs plugin:

  • region: AWS region
  • log_group_name: CloudWatch log group name
  • log_stream_name: CloudWatch log stream name
  • log_stream_prefix: The prefix of the log stream name, incompatible with the log_stream_name option
  • log_key: By default, the entire log record is sent to CloudWatch. If the key name is specified, only the specified items are sent to CloudWatch
  • log_format: log format
  • auto_create_group: automatically create a log group, the default value is false

Forward logs to Amazon Kinesis Data Firehose

Options supported by the Kinesis Data Firehose plugin:

  • region: AWS region
  • delivery_stream: The name of the Kinesis Data Firehose delivery stream
  • data_keys: By default, the entire log record is sent to Kinesis. If key names are specified, only those fields are sent. Multiple names are separated by commas.

The Kinesis Data Firehose delivery stream can send logs to Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service.
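
The delivery stream (named heroes-web-log below) can be created from the console or, as a rough sketch, from the CLI; the IAM role and S3 bucket ARNs are placeholders, and the role must allow Firehose to write to the bucket:

aws firehose create-delivery-stream \
    --delivery-stream-name heroes-web-log \
    --extended-s3-destination-configuration "RoleARN=arn:aws-cn:iam::888888888888:role/firehose-delivery-role,BucketARN=arn:aws-cn:s3:::heroes-web-log"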

Taking S3 as an example, we first create a delivery stream named heroes-web-log that writes to S3, then modify the log configuration of the heroes-web container: set the log driver to "awsfirelens" and the log option "Name" to "firehose" to enable the Kinesis Data Firehose plugin. The full configuration is as follows:

{
    "essential": true,
    "image": "888888888888.dkr.ecr.cn-north-1.amazonaws.com.cn/heroes/heroes-web:latest",
    "name": "heroes-web",    
    "logConfiguration": {
      "logDriver":"awsfirelens",
      "options": {
        "Name": "firehose",
        "region": "cn-north-1",
        "delivery_stream": "heroes-web-log",
        "data_keys": "log"
      }
    },
    "memoryReservation": 100        
}


After saving the task definition and updating the service, you can go to S3 to view the log.

ECS vs EKS

Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service for deploying, managing, and scaling containerized applications with open-source Kubernetes. With EKS you do not have to operate your own Kubernetes control plane, which removes a major operational burden and lets you focus on building applications instead of managing infrastructure. EKS runs across multiple AWS Availability Zones, is fully compatible with Kubernetes, can use all the existing plugins and tools from the Kubernetes community, and applications running in any standard Kubernetes environment can be migrated to EKS with ease.

Portability
ECS is AWS proprietary technology, while EKS runs open-source Kubernetes. If your deployment environment is not limited to AWS, for example if you may also deploy to Google GKE (Google Kubernetes Engine), Microsoft AKS (Azure Kubernetes Service), or standard Kubernetes, you should choose EKS.

Simplicity
ECS is an out-of-the-box solution that can be deployed easily through the AWS console. EKS is more complex, requiring more configuration and more expertise.

Price
ECS itself incurs no additional charge; you pay only for the compute resources your containers use. An EKS cluster costs 0.688 CNY per hour (you can use Kubernetes namespaces and IAM security policies to run multiple applications on a single EKS cluster), and the AWS resources that run the Kubernetes worker nodes are billed by actual usage.

After testing and comparing ECS, EKS, and self-managed Kubernetes, I chose ECS for my project. ECS fully meets the project's needs: deployment is simpler, operations are easier, and costs are lower.

Reference documents

Amazon Elastic Container Service documentation

Source: blog.51cto.com/7308310/2536584