Get Started, Part 3: Services
Prerequisites

- Install Docker version 1.13 or higher.
- Get Docker Compose. On Docker for Mac and Docker for Windows it comes pre-installed, so you're good to go. On Linux systems you need to install it directly. On pre-Windows 10 systems without Hyper-V, use Docker Toolbox.
- Read the orientation in Part 1.
- Learn how to create containers in Part 2.
- Make sure you have published the `friendlyhello` image you created by pushing it to a registry. We use that shared image here.
- Ensure your image works as a deployed container. Run this command, slotting in your info for `username`, `repo`, and `tag`: `docker run -p 4000:80 username/repo:tag`, then visit `http://localhost:4000/`.
Introduction
In Part 3, we scale our application and enable load balancing. To do this, we must go one level up in the hierarchy of a distributed application: the service.
- Stack
- Services (you are here)
- Container (covered in Part 2)
About services

In a distributed application, the different pieces of the app are called "services." For example, if you imagine a video sharing site, it probably includes a service for storing application data in a database, a service for video transcoding in the background after a user uploads something, a service for the front-end, and so on.

A service really just runs "containers in production." A service only runs one image, but it codifies the way that image runs: what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on. Scaling a service changes the number of container instances running that piece of software, assigning more computing resources to the service in the process.
Luckily, it's very easy to define, run, and scale services with the Docker platform: just write a `docker-compose.yml` file.
Your first `docker-compose.yml` file
A `docker-compose.yml` file is a YAML file that defines how Docker containers should behave in production.

Save this file as `docker-compose.yml` wherever you want. Be sure you have pushed the image you created in Part 2 to a registry, and update this `.yml` by replacing `username/repo:tag` with your image details.
```yaml
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:
```
This `docker-compose.yml` file tells Docker to do the following:
- Pull the image we uploaded in Part 2 from the registry.
- Run 5 instances of that image as a service called `web`, limiting each instance to use, at most, 10% of a single core of CPU time (this could also be, for example, "1.5" to mean 1 and a half cores per instance), and 50MB of RAM.
- Immediately restart containers if one fails.
- Map port 4000 on the host to `web`'s port 80.
- Instruct `web`'s containers to share port 80 via a load-balanced network called `webnet`. (Internally, the containers themselves publish to `web`'s port 80 at an ephemeral port.)
- Define the `webnet` network with the default settings (which is a load-balanced overlay network).
Run your new load-balanced app
Before we can use the `docker stack deploy` command, we first run:

```
docker swarm init
```
Note: We get into the meaning of that command in Part 4. If you don't run `docker swarm init`, you get an error that "this node is not a swarm manager."
Now let's run it. You need to give your app a name. Here, it is set to `getstartedlab`:

```
docker stack deploy -c docker-compose.yml getstartedlab
```
Our single service stack is running 5 container instances of our deployed image on one host. Let's investigate.

Get the service ID for the one service in our application:

```
docker service ls
```
Look for output for the `web` service, prefixed with your app name. If you named it the same as shown in this example, the name is `getstartedlab_web`. The service ID is listed as well, along with the number of replicas, the image name, and the exposed ports.
Alternatively, you can run `docker stack services`, followed by the name of your stack. The following example command lets you view all services associated with the `getstartedlab` stack:

```
docker stack services getstartedlab
ID             NAME                MODE         REPLICAS   IMAGE               PORTS
bqpve1djnk0x   getstartedlab_web   replicated   5/5        username/repo:tag   *:4000->80/tcp
```
A single container running in a service is called a task. Tasks are given unique IDs that numerically increment, up to the number of `replicas` you defined in `docker-compose.yml`. List the tasks for your service:

```
docker service ps getstartedlab_web
```
Tasks also show up if you just list all the containers on your system, though that is not filtered by service:

```
docker container ls -q
```
You can run `curl -4 http://localhost:4000` several times in a row, or go to that URL in your browser and hit refresh a few times.

Either way, the container ID changes, demonstrating the load balancing; with each request, one of the 5 tasks is chosen, in a round-robin fashion, to respond. The container IDs match your output from the previous command (`docker container ls -q`).
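To watch the round-robin behavior in one go, you can loop the request from a shell. This is a minimal sketch; it assumes the stack above is deployed and answering on port 4000:

```shell
# Send 5 requests in a row; the container ID reported in each
# response should rotate through the 5 running tasks.
for i in $(seq 1 5); do
  curl -4 -s http://localhost:4000
done
```

If the responses look identical, remember the app prints its hostname (the container ID) in the page body, so compare those values across iterations.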
To see all tasks of a stack, you can run `docker stack ps` followed by your app name, as in the following example:

```
docker stack ps getstartedlab
ID             NAME                  IMAGE               NODE             DESIRED STATE   CURRENT STATE           ERROR   PORTS
uwiaw67sc0eh   getstartedlab_web.1   username/repo:tag   docker-desktop   Running         Running 9 minutes ago
sk50xbhmcae7   getstartedlab_web.2   username/repo:tag   docker-desktop   Running         Running 9 minutes ago
c4uuw5i6h02j   getstartedlab_web.3   username/repo:tag   docker-desktop   Running         Running 9 minutes ago
0dyb70ixu25s   getstartedlab_web.4   username/repo:tag   docker-desktop   Running         Running 9 minutes ago
aocrb88ap8b0   getstartedlab_web.5   username/repo:tag   docker-desktop   Running         Running 9 minutes ago
```
Running Windows 10?

Windows 10 PowerShell should already have `curl` available, but if not, you can grab a Linux terminal emulator like Git BASH, or download wget for Windows, which is very similar.
Slow response times?

Depending on your environment's networking configuration, containers can take up to 30 seconds to respond to HTTP requests. This is not indicative of Docker or swarm performance, but rather an unmet Redis dependency that we address later in the tutorial. For now, the visitor counter isn't working for the same reason; we haven't yet added a service to persist data.
Scale the app

You can scale the app by changing the `replicas` value in `docker-compose.yml`, saving the change, and re-running the `docker stack deploy` command:

```
docker stack deploy -c docker-compose.yml getstartedlab
```
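For example, to scale down you might edit just the `replicas` line and redeploy. This is a sketch of the changed fragment only; the rest of the `docker-compose.yml` shown earlier stays the same:

```yaml
    deploy:
      replicas: 3   # changed from 5; docker stack deploy applies this in place
```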
Docker performs an in-place update; there is no need to tear the stack down first or kill any containers.

Now, re-run `docker container ls -q` to see the deployed instances reconfigured. If you scaled up the replicas, more tasks, and hence more containers, are started.
Take down the app and the swarm

Take the app down with `docker stack rm`:

```
docker stack rm getstartedlab
```

Then take down the swarm:

```
docker swarm leave --force
```
It's as easy as that to stand up and scale your app with Docker. You've taken a huge step towards learning how to run containers in production. Next, you learn how to run this app as a bona fide swarm on a cluster of Docker machines.

Note: Compose files like this are used to define applications with Docker, and can be uploaded to cloud providers using Docker Cloud, or to any hardware or cloud provider you choose with Docker Enterprise Edition.
Review and cheat sheet (optional)

To recap, while typing `docker run` is simple enough, the true implementation of a container in production is running it as a service. Services codify a container's behavior in a Compose file, and this file can be used to scale, limit, and redeploy our app.

Some commands to explore at this stage:

```
docker stack ls                                 # List stacks or apps
docker stack deploy -c <composefile> <appname>  # Run the specified Compose file
docker service ls                               # List running services associated with an app
docker service ps <service>                     # List tasks associated with an app
docker inspect <task or container>              # Inspect task or container
docker container ls -q                          # List container IDs
docker stack rm <appname>                       # Tear down an application
docker swarm leave --force                      # Take down a single node swarm from the manager
```