Applying Docker to a PHP Project Development Environment

Abstract: Environment deployment is a problem every team must face. As a system grows, it depends on more and more services. For example, one of our current projects uses a web server (Nginx), web programs (PHP + Node), a database (MySQL), a search engine (ElasticSearch), a queue service (Gearman), and cache services (Redis + Memcache).

Environment deployment is a problem every team must face. As a system grows, it depends on more and more services. For example, one of our current projects uses:

Web server: Nginx
Web program: PHP + Node
Database: MySQL
Search engine: ElasticSearch
Queue service: Gearman
Cache service: Redis + Memcache
Front-end build tool: npm + bower + gulp
PHP CLI tool: Composer + PHPUnit
As a result, the team's development environment setup has exposed several problems:

1. There are many dependent services, so the cost of building a local environment keeps rising, and it is difficult for junior staff to solve problems that come up during environment setup.
2. Differences in service versions and operating systems can cause bugs that only appear in the online environment.
3. When the project introduces a new service, everyone's environment has to be reconfigured.
Problem 1 can be addressed with a virtual-machine-based tool such as Vagrant, with team members sharing one development environment image. Problem 2 can be addressed by introducing a multi-version PHP manager like PHPBrew. But neither solves problem 3 well, because virtual machine images have no concept of version control: when several people maintain one image, configuration omissions and conflicts are common, and passing a huge image around is inconvenient.

The emergence of Docker offers a better solution to these problems. Although I remain cautious about large-scale use of Docker in production, if we consider only testing and development, I believe Docker's containerization concept really can be the silver bullet for environment deployment problems.

The following describes the step-by-step evolution of a Docker-based PHP project development environment. This article assumes your operating system is Linux, that Docker is installed, and that you already know what Docker is and how to use the basic Docker command line. If you lack this background, it is best to read up on it first.


Hello World

Let's start with a PHP Hello World example in a Docker container. Prepare a PHP file named index.php:

<?php
echo "PHP in Docker";
Then create a text file in the same directory, name it Dockerfile, and give it this content:

# Build from the official PHP image
FROM php
# Copy index.php into the container's /var/www directory
ADD index.php /var/www
# Expose port 8080
EXPOSE 8080
# Set the container's default working directory to /var/www
WORKDIR /var/www
# The default command executed when the container runs
ENTRYPOINT ["php", "-S", "0.0.0.0:8080"]
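The image must be built before it can be run. Assuming the Dockerfile and index.php are in the current directory, the build step would look like this (the tag matches the run command below):

docker build -t allovince/php-helloworld .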
Run the container:

docker run -d -p 8080:8080 allovince/php-helloworld

and check the result:

curl localhost:8080
PHP in Docker
We have now created a Docker container for a demo PHP program. Running this container on any machine with Docker installed yields the same result, and anyone with the above PHP file and Dockerfile can build an identical container, completely eliminating the problems that different environments and different versions can cause.

Now imagine the program grows more complex. How should we extend the setup? The direct idea is to keep installing additional services inside the container and run them all together. Our Dockerfile would then likely evolve like this:

FROM php
ADD index.php /var/www
# Install more services
RUN apt-get install -y \
    mysql-server \
    nginx \
    php5-fpm \
    php5-mysql
# A startup script that starts all the services
ENTRYPOINT ["/opt/bin/php-nginx-mysql-start.sh"]
Although we have built a development environment with Docker this way, doesn't it feel oddly familiar? Indeed, this approach amounts to building a virtual machine image, and it has several problems:

If you need to verify a service under different versions, for example testing PHP 5.3/5.4/5.5/5.6, you must prepare four images even though each differs only slightly.
If you start a new project, the services installed inside the container will keep ballooning, and eventually you won't be able to figure out which service belongs to which project.
Using single-process containers

The pattern above, which stuffs all services into one container, has an unofficial name: Fat Container. The opposite pattern splits services into separate containers. Docker's design makes the reason clear: an image build can specify only one command to run when the container starts. Docker is therefore naturally suited to running a single service per container, which is also what the official documentation recommends.

The first question when splitting services is: where does the base image for each service come from? There are two options:

Option 1: extend everything from a standard OS image. For example, here are Nginx and MySQL images built this way:

FROM ubuntu:14.04
RUN apt-get update -y && apt-get install -y nginx

FROM ubuntu:14.04
RUN apt-get update -y && apt-get install -y mysql-server
The advantage of this approach is that all services share a unified base image and can be extended and modified the same way; for example, with Ubuntu selected, every service is installed via apt-get.

The problem is that you end up maintaining a large number of service builds yourself. In particular, when different versions of a service are needed you often have to compile from source, making debugging and maintenance very costly.

Option 2: inherit directly from official images on Docker Hub. The same Nginx and MySQL images become:

FROM nginx:1.9.0
FROM mysql:5.6
Docker Hub can be regarded as Docker's GitHub. Docker officially maintains a large number of images for common services, and there are many third-party images as well. You can even stand up a private Docker Hub quickly with the Docker-Registry project.
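For instance, the two official base images referenced in the FROM lines above can be pulled ahead of time:

docker pull nginx:1.9.0
docker pull mysql:5.6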

Building on a service's official image gives you very rich options, and switching service versions costs little. The downside is that official images are built in a variety of ways, so before extending one you first need to understand its original Dockerfile.

To keep service construction flexible, we choose the latter approach.

With the services split, the directory now looks like this:

~/Dockerfiles
├── mysql
│   └── Dockerfile
├── nginx
│   ├── Dockerfile
│   ├── nginx.conf
│   └── sites-enabled
│       ├── default.conf
│       └── evaengine.conf
├── php
│   ├── Dockerfile
│   ├── composer.phar
│   ├── php-fpm.conf
│   ├── php.ini
│   └── redis.tgz
└── redis
    └── Dockerfile

That is, each service gets its own folder, with a Dockerfile inside each one.

The MySQL container

MySQL inherits from the official MySQL 5.6 image. The Dockerfile needs only one line and no extra processing, because the official image already covers our requirements:

FROM mysql:5.6

Running the following in the project root directory

docker build -t eva/mysql ./mysql

will automatically download the base image and build ours, here named eva/mysql.

Since everything inside a container is discarded when the container is removed, we persist the MySQL database with a volume mount so we don't have to import data every time. The official image stores the database under /var/lib/mysql by default and requires an administrator password to be set via an environment variable when the container runs, so the container can be started with the following command:

docker run -p 3306:3306 -v ~/opt/data/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -it eva/mysql

This binds local port 3306 to the container's port 3306, persists the container's database to the local ~/opt/data/mysql directory, and sets the MySQL root password to 123456.
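As a quick sanity check, assuming a MySQL client is installed on the host, you should now be able to connect through the bound port:

mysql -h 127.0.0.1 -P 3306 -u root -p123456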

The Nginx container

The Nginx configuration file nginx.conf and the project's virtual host configuration default.conf are prepared in advance in the nginx directory. The Dockerfile is:

FROM nginx:1.9
ADD nginx.conf /etc/nginx/nginx.conf
ADD sites-enabled/* /etc/nginx/conf.d/
RUN mkdir /opt/htdocs && mkdir /opt/log && mkdir /opt/log/nginx
RUN chown -R www-data.www-data /opt/htdocs /opt/log
VOLUME ["/opt"]
The official Nginx 1.9 image is based on Debian Jessie. We first copy the prepared configuration files into place, replacing the image's defaults. By personal habit, /opt/htdocs serves as the web server document root and /opt/log/nginx as the Nginx log directory.
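The article does not show the configuration files themselves. As a purely hypothetical sketch, a minimal sites-enabled/default.conf matching this layout might look as follows; note the fastcgi_pass hostname php relies on the container link set up later:

server {
    listen 80 default_server;
    root /opt/htdocs;
    index index.php index.html;

    location ~ \.php$ {
        # hand PHP requests to the linked PHP-FPM container
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}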

Also build the image

docker build -t eva/nginx ./nginx
and run the container

docker run -p 80:80 -v ~/opt:/opt -it eva/nginx
Note that we bind local port 80 to the container's port 80 and mount the local ~/opt directory to the container's /opt directory, so project source code placed under ~/opt can be served from inside the container.

The PHP container

The PHP container is the most complicated one, because in real projects we may need to install extra PHP extensions and use some command-line tools. Here we take the Redis extension and Composer as examples. First, download the extension source and other files the project needs into the php directory ahead of time, so that builds copy them locally instead of fetching them over the network each time, which greatly speeds up image builds:

wget https://getcomposer.org/composer.phar -O php/composer.phar
wget https://pecl.php.net/get/redis-2.2.7.tgz -O php/redis.tgz
The PHP configuration files php.ini and php-fpm.conf are likewise prepared in advance. The base image we choose is php:5.6-fpm, also based on Debian Jessie. The official image thoughtfully ships a docker-php-ext-install command that quickly installs common extensions such as GD and PDO; running docker-php-ext-install inside the container with no arguments lists all supported extension names.
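For example, the list can be printed from a throwaway container based on the same image; with no arguments the helper should print its usage along with the possible extension names:

docker run --rm -it php:5.6-fpm docker-php-ext-install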

Take a look at the Dockerfile

FROM php:5.6-fpm
ADD php.ini /usr/local/etc/php/php.ini
ADD php-fpm.conf /usr/local/etc/php-fpm.conf
COPY redis.tgz /home/redis.tgz
RUN docker-php-ext-install gd \
    && docker-php-ext-install pdo_mysql \
    && pecl install /home/redis.tgz \
    && echo "extension=redis.so" > /usr/local/etc/php/conf.d/redis.ini
ADD composer.phar /usr/local/bin/composer
RUN chmod 755 /usr/local/bin/composer
WORKDIR /opt
RUN usermod -u 1000 www-data
VOLUME ["/opt"]
The build does the following:

copies the php and php-fpm configuration files to the appropriate directories
copies the Redis extension source to /home
installs the GD and pdo_mysql extensions via docker-php-ext-install
installs the Redis extension via pecl
copies composer into the image as a global command

By personal habit, /opt is again set as the working directory.

One detail: we copy the tar file with COPY instead of ADD, because ADD automatically unpacks tar files and we want the archive left intact for pecl.

Now finally build + run:

docker build -t eva/php ./php
docker run -p 9000:9000 -v ~/opt:/opt -it eva/php
In most cases Nginx and PHP read the same project source code, so we again mount the local ~/opt directory, and bind port 9000.

Besides running php-fpm, the PHP container should also serve as the project's PHP CLI, which guarantees that the PHP version, extensions, and configuration files stay consistent.

For example, to run Composer in a container, you can do it with the following command:

docker run -v $(pwd -P):/opt -it eva/php composer install --dev -vvv
Running this command from any directory is equivalent to dynamically mounting the current directory onto the container's default working directory and running Composer there, which is exactly why the PHP container sets /opt as its working directory.

Similarly, command-line tools such as phpunit, npm, and gulp can be run inside the container in the same way.
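For example, a test run might look like this, assuming PHPUnit was installed into the project's vendor directory by the composer install above:

docker run -v $(pwd -P):/opt -it eva/php vendor/bin/phpunit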

The Redis container

For ease of demonstration, Redis is used only as a cache with no persistence requirement, so the Dockerfile is just one line:

FROM redis:3.0
Connecting the containers

We have now split what used to run in a single container into multiple containers, each running a single service. These containers need to communicate with each other. Docker containers can communicate in two ways: one is binding container ports to local ports, as above, and talking through those ports; the other is Docker's Linking feature. In a development environment, linking is more flexible and avoids problems such as port conflicts. For example, Nginx and PHP can be linked like this:

docker run -p 9000:9000 -v ~/opt:/opt --name php -it eva/php
docker run -p 80:80 -v ~/opt:/opt -it --link php:php eva/nginx
In a typical PHP project, Nginx needs to link to PHP, and PHP needs to link to MySQL, Redis, and so on. To make the links between containers easier to manage, Docker officially recommends using Docker Compose for this.

Install it with a single command:

pip install -U docker-compose
Then prepare a docker-compose.yml file in the Docker project's root directory with this content:

nginx:
  build: ./nginx
  ports:
    - "80:80"
  links:
    - "php"
  volumes:
    - ~/opt:/opt
php:
  build: ./php
  ports:
    - "9000:9000"
  links:
    - "mysql"
    - "redis"
  volumes:
    - ~/opt:/opt
mysql:
  build: ./mysql
  ports:
    - "3306:3306"
  volumes:
    - ~/opt/data/mysql:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: "123456"
redis:
  build: ./redis
  ports:
    - "6379:6379"
Then run docker-compose up to complete all port binding, mounting, and linking operations.
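A few companion subcommands are useful day to day:

docker-compose up -d    # start all containers in the background
docker-compose ps       # list the project's containers
docker-compose stop     # stop them all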

A more complex example

The above traces the evolution of a standard PHP project in a Docker environment. Real projects generally integrate more, and more complex, services, but the basic steps above still apply. For example, EvaEngine/Dockerfiles is the Docker-based development environment prepared for running my open source project EvaEngine. EvaEngine depends on the queue service Gearman, the cache services Memcache and Redis, the front-end build tools Gulp and Bower, and the back-end CLI tools Composer and PHPUnit; see the code for the specific implementation.

In team practice, environment setup that used to take about a day now takes only a dozen or so commands with Docker, cutting the time to under 3 hours (most of which is spent waiting for downloads). Most importantly, the environments Docker builds are 100% identical, eliminating problems caused by human error. Next we plan to apply Docker further to CI and the production environment.





Originally published: 2015-07-01



This article comes from the Yunqi Community partner "Linux China".

