10 Myths of Using Docker Containers

Users may dismiss containers at first without really knowing them, but after tasting their benefits and experiencing their performance, few can resist their appeal. Container technology solves many of the problems the IT industry currently faces, and its advantages are obvious. For example:
1. Containers are immutable.
    A container image packages the operating system, libraries, configuration files, paths, and application together to run as one unit. In other words, the image we run through QA testing is exactly the image that goes into production, and its behavior will show no gap between the two environments.
2. Containers are very lightweight.
    The memory footprint of a single container is small. Unlike a full virtual machine, which often occupies hundreds or thousands of MB of memory, a container allocates memory only to its main process, which greatly reduces system overhead.
3. Containers are faster to start.
   A virtual machine typically takes minutes to boot, while a container can start in seconds. Starting a container is about as fast as starting an ordinary Linux process.


    Despite all these benefits, many users still misunderstand containers and assume they are no different from ordinary virtual machines. In fact, containers are meant to be disposable, and this is the biggest difference between containers and virtual machines. A container's lifetime is very short: as soon as the user is done with it, it can be destroyed immediately, so describing it as "born in the morning, gone by evening" is not an exaggeration.

When using and maintaining containers, we should take full advantage of this disposability and stop treating containers as general-purpose virtual machines; otherwise we are wasting what makes them valuable. To get the most out of containers in practice and make fewer mistakes, I have summarized the following points for reference. When running containers, try to follow these principles as much as possible:

1) Do not store data in containers.
    A container can be stopped, destroyed, or migrated at any time. For example, if the application running in a container is version 1.0, we should be able to upgrade it to 1.1 in minutes without affecting any data. So if you need to persist data, store it in a data volume. One caveat: if two containers share a data volume and both write to it, concurrent writes may crash the application. Take this into account when designing the application; to be safe, it should have an explicit mechanism to guarantee that writes to the shared store do not conflict.
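The upgrade scenario above can be sketched with a named volume; the container, image, and path names here are illustrative assumptions, not from the article:

```shell
# Keep application data in a named volume, not in the container's
# writable layer (all names below are illustrative).
docker volume create app-data

# Mount the volume at the path where the application writes its data.
docker run -d --name myapp -v app-data:/var/lib/myapp myapp:1.0

# Upgrading: destroy the 1.0 container and start 1.1 against the same
# volume -- the data outlives the container.
docker rm -f myapp
docker run -d --name myapp -v app-data:/var/lib/myapp myapp:1.1
```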
2) Don't deliver applications in chunks.
    Some users see no difference between containers and virtual machines, and so tend to spread an application across several running containers and update those containers piece by piece. This is not a big problem during development, where we deploy and debug frequently, but it is a poor fit for the continuous delivery (CD) stage, where QA testing and production come next. At that stage we should take full advantage of the container's immutability, and preferably package the application into a single image for delivery.
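Delivering the application as one image can be sketched with the standard build-and-push flow; the image and registry names are illustrative assumptions:

```shell
# Build one immutable image from the project's Dockerfile and give it an
# explicit version tag (registry and image names are illustrative).
docker build -t registry.example.com/myapp:1.1 .

# Push the image so QA and production pull the exact same artifact.
docker push registry.example.com/myapp:1.1
```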
3) Don't make the image very large.
    The larger the image, the harder it is to distribute. The image only needs to contain the files and libraries required to make the application or process run; don't install anything unnecessary. Also avoid running update commands such as yum update when building the image, because that downloads many irrelevant files into a new image layer.
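A minimal Dockerfile sketch of this idea: install only what the process needs, avoid `yum update`, and clean the package cache in the same layer so the downloaded metadata never lands in the image (the base image and package are illustrative):

```dockerfile
FROM centos:7

# Install only the required package -- no `yum update`. Cleaning the
# cache in the same RUN keeps yum's downloads out of this image layer.
RUN yum install -y httpd && yum clean all

CMD ["httpd", "-DFOREGROUND"]
```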
4) When building an image, don't build it as a single layer.
    As we all know, Docker's file system is layered, and we should build images accordingly: build the operating system as a separate base image layer; build the user definitions, runtime environment, and configuration files as another layer; and finally build the application as its own layer. Doing this will save us a lot of effort later when rebuilding, managing, and distributing images.
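The layering described above maps naturally onto Dockerfile instructions, since each instruction produces its own layer; the file and user names below are illustrative assumptions:

```dockerfile
FROM centos:7                        # base layer: the operating system

RUN useradd -m appuser               # user definition layer
RUN yum install -y java-1.8.0-openjdk && yum clean all  # runtime layer
COPY app.conf /etc/myapp/app.conf    # configuration layer

COPY app.jar /opt/myapp/app.jar      # final layer: the application itself
CMD ["java", "-jar", "/opt/myapp/app.jar"]
```

Because the application sits in the topmost layer, rebuilding after a code change reuses all the cached layers beneath it.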
5) Do not convert locally running containers into images.
    In other words, don't use the "docker commit" command to create an image. Building images this way is inadvisable because it is not reproducible. Instead, build the image from a Dockerfile, or with another S2I (source-to-image) method, so the build is reproducible; and if you store the Dockerfile in a version control system such as git, you can also track its changes.
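The contrast can be sketched as two commands; the second form is reproducible because the recipe is a file that can live in git (the container and image names are illustrative):

```shell
# Not reproducible: snapshots whatever state the running container
# happens to be in, with no record of how it got there.
docker commit mycontainer myapp:1.1

# Reproducible: every build step is recorded in the Dockerfile, which
# can be version-controlled and rebuilt identically by anyone.
docker build -t myapp:1.1 .
```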
6) Don't just use "latest" when tagging images.
    "latest" is essentially the equivalent of Maven's "snapshot". Because the container's file system is layered, we should give an image additional, explicit tags. If an image carries only "latest", we may find the application no longer runs when we start it again some time later, because its parent layer (the image named after FROM in the Dockerfile) has been replaced by a newer version that is not backward compatible. It is also possible to pull the wrong "latest" image from the build cache. Avoid "latest" when deploying containers in production; otherwise it becomes impossible to track which image version is actually running.
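Attaching explicit tags alongside "latest" is one command per tag; the image name and version scheme below are illustrative assumptions:

```shell
# Build once, then attach explicit version tags to the same image.
docker build -t myapp:latest .
docker tag myapp:latest myapp:1.1

# In production, always deploy a pinned tag, never "latest":
docker run -d myapp:1.1
```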
7) Do not run multiple processes in a single container.
    A container is meant to run a single application (an HTTP daemon, an application server, a database, and so on). If we insist on running several applications in one container, then managing each process, accessing their logs, and upgrading the applications all become troublesome.
8) Don't store credentials in the image; use environment variables instead.
    If we store username/password pairs in the image, they end up hard-coded, and every change means handling them one by one and rebuilding the image; nobody wants that kind of trouble. It is better to pass this kind of information into the container from outside through environment variables.
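A minimal sketch of the idea: the launch side injects the credentials (as `docker run -e DB_USER=... -e DB_PASSWORD=...` would), and the application reads them from the environment at startup, so nothing sensitive is baked into the image. The variable names are illustrative assumptions; the `export` lines here stand in for Docker's `-e` injection so the snippet runs on its own.

```shell
# Simulate `docker run -e DB_USER=appuser -e DB_PASSWORD=s3cret ...`;
# inside a real container, Docker injects the variables the same way.
# Variable names and values are illustrative.
export DB_USER="appuser"
export DB_PASSWORD="s3cret"

# The application reads credentials from the environment at startup,
# failing fast if a required variable is missing.
MSG="connecting as ${DB_USER:?DB_USER must be set}"
echo "$MSG"
```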
9) Do not run processes as the root user.
    Docker containers run with root privileges by default. As the technology matures, Docker will likely offer more secure default options, but under current conditions, running as root poses a security risk to other applications, and in some operating environments root privileges simply cannot be obtained. We should therefore use the USER instruction in the Dockerfile to run the container as a non-root user.
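A Dockerfile sketch of switching to a non-root user; the base image and user name are illustrative assumptions:

```dockerfile
FROM centos:7

# Create an unprivileged user and switch to it; every instruction after
# USER, and the container's main process, runs without root privileges.
RUN useradd -m appuser
USER appuser

CMD ["whoami"]
```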
10) Don't rely too much on IP addresses.
    Every container has an internal IP address, and that IP is not fixed: it changes as containers are started and stopped. If applications or microservice modules need to communicate across containers, the correct approach is to pass hostnames and port numbers, for example through environment variables.
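The same environment-variable pattern works for service locations: the launch side supplies a hostname and port (as `docker run -e DB_HOST=db -e DB_PORT=5432 ...` would), and the application composes its connection target from them, so it keeps working even when container IPs change. Names are illustrative assumptions; the `export` lines stand in for Docker's `-e` injection.

```shell
# Simulate `docker run -e DB_HOST=db -e DB_PORT=5432 ...`; the values
# below are illustrative.
export DB_HOST="db"        # a hostname, not a container IP
export DB_PORT="5432"

# The application builds its connection target from the variables
# instead of hard-coding an IP address.
TARGET="${DB_HOST}:${DB_PORT}"
echo "$TARGET"
```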
