Docker Official Documentation Translation, Part 3

Part 3: Services

Prerequisites

  • Install Docker 1.13 and above.
  • Install Docker Compose
  • Read part 1 and part 2.
  • Make sure you have published the friendlyhello image to the Docker public registry.
  • Make sure your image works as a deployable container. Run this command, replacing username, repo, and tag with your own details: docker run -p 80:80 username/repo:tag, then visit http://localhost/.

Introduction

In part 3, we scale our application and enable load balancing. To do this, we must go one level up in the hierarchy of a distributed application: the service.

Stack
Service (you are here)
Container (covered in part 2)

About services

In a distributed application, the different pieces of the application are called "services". For example, if you imagine a video sharing site, it might include a service for storing application data in a database, a service for transcoding videos in the background after a user uploads something, a service for the front end, and so on.

Services are really just "containers in production". A service runs only one image, but it codifies the way that image runs: which ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on. Scaling a service changes the number of container instances running that piece of software, assigning more computing resources to the service in the process.

Fortunately, using the Docker platform to define, run and scale services is easy - just write a docker-compose.yml file.

Your first docker-compose.yml file

The docker-compose.yml file is a YAML formatted file that defines how Docker containers behave in production.

docker-compose.yml

Save this file as docker-compose.yml wherever you want to use it. Make sure you have pushed the image created in part 2 to a registry, and update this .yml file by replacing username/repo:tag with your image details.

version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:

This docker-compose.yml file tells Docker to do the following:

  • Pull the image we uploaded in the second part from the registry.
  • Run 5 instances of this image as a service named web, limiting each instance to use up to 10% CPU (all cores) and 50MB of RAM.
  • If one fails, restart the container immediately.
  • Map port 80 on the host to web's port 80.
  • Instructs the web container to share port 80 over a load-balanced network called webnet. (Internally, the container itself publishes to web's port 80 on an ephemeral port).
  • Define the webnet network with default settings (which is a load balanced overlay network).
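A quick sanity check on the numbers above: the total resources the stack may consume are just the per-replica limits multiplied by the replica count. A minimal shell sketch, using the figures from the deploy section:

```shell
#!/bin/sh
# aggregate resource ceiling for the stack: 5 replicas, each capped at
# 0.1 CPUs and 50M of RAM (values from the compose file above)
replicas=5
echo "max cpus:   $(awk -v r="$replicas" 'BEGIN { print r * 0.1 }')"   # 0.5
echo "max memory: $((replicas * 50))M"                                 # 250M
```

So even with 5 replicas, this service is capped at half a CPU and 250M of RAM in total.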

Run your load-balanced application

Before we can use the docker stack deploy command, we first run:

docker swarm init

Now let's run it. You need to give your app a name. Here, it is named getstartedlab:

docker stack deploy -c docker-compose.yml getstartedlab

Our single service stack is running 5 container instances of our deployed image on one host.

Get the service ID of a service in our application:

docker service ls

Look for output for the web service, prefixed with your app name. If you named it the same as in this example, the name is getstartedlab_web. The service ID is listed as well, along with the number of replicas, the image name, and the exposed ports.

A single container running in a service is called a task. Tasks get unique IDs that increase numerically, up to the number of replicas you define in docker-compose.yml. List your service's tasks:

docker service ps getstartedlab_web

Tasks also show up if you just list all the containers on your system, though that listing is not filtered by service:

docker container ls -q

You can run curl -4 http://localhost multiple times in a row, or visit the URL on your browser and refresh a few times.


Either way, the container ID changes, demonstrating the load balancing: with each request, one of the 5 tasks is chosen, in round-robin fashion, to respond. The container IDs match the output of the previous command (docker container ls -q).
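To make the round-robin idea concrete, here is a tiny stand-alone simulation (no Docker needed) of eight requests cycling through five task slots. The task names are made up for illustration; real task IDs come from docker service ps.

```shell
#!/bin/sh
# simulate round-robin dispatch: 8 requests cycle through 5 task slots
# (task names are illustrative; real IDs come from `docker service ps`)
n=5
for req in 1 2 3 4 5 6 7 8; do
  slot=$(( (req - 1) % n + 1 ))
  echo "request $req -> task.$slot"   # request 6 wraps back to task.1
done
```

This is the same cycling behavior you observe in the browser: once every task has answered once, the dispatcher wraps back to the first one.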

Scale the app

You can scale the app by changing the replicas value in docker-compose.yml, saving the change, and re-running the docker stack deploy command:

docker stack deploy -c docker-compose.yml getstartedlab

Docker performs an in-place update without tearing down the stack or killing any containers first.

Now re-run docker container ls -q to see the deployed instances reconfigured. If you scaled up the number of replicas, more tasks, and hence more containers, were started.
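One way to script the replica change is a simple sed substitution before redeploying. The sketch below runs against a throwaway stand-in file so it can be tried safely; point the same substitution at your real docker-compose.yml (this assumes the file still contains replicas: 5 as written above, and GNU sed for the -i flag):

```shell
#!/bin/sh
# bump the replica count with sed, demonstrated on a scratch file;
# run the same substitution against your real docker-compose.yml
cat > /tmp/compose-demo.yml <<'EOF'
services:
  web:
    deploy:
      replicas: 5
EOF
sed -i 's/replicas: 5/replicas: 10/' /tmp/compose-demo.yml
grep 'replicas:' /tmp/compose-demo.yml
# then redeploy: docker stack deploy -c docker-compose.yml getstartedlab
```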

Shut down the application and swarm

  • Take the app down with the docker stack rm command:
docker stack rm getstartedlab
  • Take down the swarm:
docker swarm leave --force

Upgrading and scaling an application is just as easy with Docker. You've taken a big step toward learning how to run containers in production. Next, you'll learn how to run this application as a real swarm on a cluster of Docker machines.

Recap

To recap: while typing docker run is simple enough, the true implementation of a container in production is running it as a service. Services codify a container's behavior in a Compose file, and this file can be used to scale, limit, and redeploy our app. Changes to the service can be applied in place, as it runs, using the same command that launched the service: docker stack deploy.

Some commands to explore at this stage are as follows:

docker stack ls                                            # List stacks or apps
docker stack deploy -c <composefile> <appname>  # Run the specified Compose file
docker service ls                 # List running services associated with an app
docker service ps <service>                  # List tasks associated with an app
docker inspect <task or container>                   # Inspect task or container
docker container ls -q                                      # List container IDs
docker stack rm <appname>                             # Tear down an application
docker swarm leave --force      # Take down a single node swarm from the manager
