The most comprehensive collection of Docker Compose application deployments. Grab it while you can!

We have collected docker-compose files for the applications most commonly deployed with it; please check them out!

Make docker-compose easy!

Everything you need should be here (all mainstream tools, nothing too niche); if something is missing, leave a message in the comments!

Please correct me if there are any shortcomings, thank you!

Here are the steps to run a service using docker-compose:

  1. Create a file named docker-compose.yml and define the service in it. For example:
version: '3'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"   
  2. Run the following command in the directory containing the docker-compose.yml file to start the service:
docker-compose up -d    

This will start the service in background mode. If everything is OK, you will see output similar to the following:

[+] Running 1/1
 ✔ Container nginx  Started                                                                                                  0.3s
  3. To stop the service, run the following command in the directory containing the docker-compose.yml file:
docker-compose down    

1. Web and Application Servers

1.1 Nginx

Nginx is a high-performance open source web server and reverse proxy server.

version: '3'
services:
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx/conf:/etc/nginx/conf.d
      - ./nginx/html:/usr/share/nginx/html
  • version: '3': Specifies the Compose file format version being used; '3' is a widely supported format version.
  • services:: This section defines the service to be created in the Docker environment. Each service represents a Docker container.
  • nginx:: This is the name of the service, indicating that a container named "nginx" is to be created.
  • image: nginx:latest: Specifies the Docker image to be used, i.e. nginx:latest, which is the latest version of the Nginx web server.
  • container_name: nginx: Sets the name of the container so that it is named "nginx" when created.
  • ports:: This section defines the port mapping. Here, it maps the host’s port 80 to the container’s port 80. This means that when accessing port 80 of the host machine in a browser, the request will be routed to port 80 of the container, which is the port that the Nginx server listens on.
  • volumes:: This section defines the volume mounts. Here, it mounts the host's ./nginx/conf directory to the container's /etc/nginx/conf.d directory, so the Nginx configuration files in ./nginx/conf can be edited on the host and the changes will be applied to the Nginx server after restarting the container. It also mounts the host's ./nginx/html directory to the container's /usr/share/nginx/html directory, so web page files in ./nginx/html can be updated on the host and served from the Nginx server's default web root.

Create an Nginx configuration file named default.conf in the ./nginx/conf directory:

server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

This configuration file defines a simple Nginx server, listens to port 80 of the host, maps requests to the /usr/share/nginx/html directory, and defines the processing of error pages.

Create a static HTML file named index.html in the ./nginx/html directory. The HTML content can be written as required:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Welcome to My Nginx Server</title>
    <style>
        body {
            font-family: 'Arial', sans-serif;
            text-align: center;
            margin: 100px;
        }

        h1 {
            color: #333;
        }

        p {
            color: #666;
        }
    </style>
</head>
<body>
    <h1>Welcome to My Nginx Server</h1>
    <p>This is a simple HTML page served by Nginx.</p>
</body>
</html>
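
Once the compose file, default.conf, and index.html are in place, a quick smoke test might look like this (a minimal sketch, assuming host port 80 is free):

docker-compose up -d
curl http://localhost/        # should return the index.html created above
docker-compose logs nginx     # check for configuration errors
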
1.2 Apache
version: '3'
services:
  apache:
    image: httpd:latest
    container_name: apache
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/local/apache2/htdocs
  • version: '3': Specifies the Compose file format version being used; '3' is a widely supported format version.
  • services:: This section defines the service to be created in the Docker environment. Each service represents a Docker container.
  • apache:: This is the name of the service, indicating that a container named "apache" is to be created.
  • image: httpd:latest: Specifies the Docker image to use, i.e. httpd:latest, which is the latest version of the Apache HTTP server.
  • container_name: apache: Sets the name of the container so that it is named "apache" when created.
  • ports:: This section defines the port mapping. Here, it maps the host’s port 80 to the container’s port 80. This means that when accessing port 80 of the host machine in a browser, the request will be routed to port 80 of the container, which is the port that the Apache HTTP server listens on.
  • volumes:: This section defines the volume mounts. Here, it mounts the host's ./html directory to the container's /usr/local/apache2/htdocs directory. This means that the contents of the ./html directory can be updated on the host machine and have the changes applied to the Apache server's default web directory after restarting the container.
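
As a quick check, something like the following should serve a page from the mounted directory (a sketch; the index.html content is just an example):

mkdir -p html
echo '<h1>Hello from Apache</h1>' > html/index.html
docker-compose up -d
curl http://localhost/
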
1.3 Tomcat
version: '3'
services:
  tomcat:
    image: tomcat:latest
    container_name: tomcat
    ports:
      - "8080:8080"
    volumes:
      - ./webapps:/usr/local/tomcat/webapps
    environment:
      - CATALINA_OPTS=-Xmx512m
  • environment: - CATALINA_OPTS=-Xmx512m: This sets the container's environment variables. In this case, set the CATALINA_OPTS environment variable to -Xmx512m. This specifies a maximum heap memory size of 512MB for the Java Virtual Machine (JVM). This can help control the maximum amount of memory the Tomcat server can use.
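
To deploy an application, a WAR file can be dropped into the mounted webapps directory and Tomcat unpacks it automatically. A rough example (myapp.war is a placeholder name):

cp myapp.war webapps/
docker-compose up -d
curl -I http://localhost:8080/myapp/   # recent tomcat images ship an empty webapps/, so the root URL may return 404
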
1.4 Lighttpd
version: '3'
services:
  lighttpd:
    image: lighttpd:latest
    container_name: lighttpd
    ports:
      - "80:80"
    volumes:
      - ./lighttpd.conf:/etc/lighttpd/lighttpd.conf
      - ./html:/var/www/html
  • "80:80"`: Maps port 80 of the host to port 80 of the container, which is a common web server port.
  • ./lighttpd.conf:/etc/lighttpd/lighttpd.conf`: Mount the local "lighttpd.conf" file to the "/etc/lighttpd/lighttpd.conf" path in the container to configure the Lighttpd service.
  • - ./html:/var/www/html: Mount the local "html" directory to the "/var/www/html" path within the container, which is the default document root directory of the web server.

2. Database

2.1 MySQL
version: '3.0'
services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 123456
    command:
      --default-authentication-plugin=mysql_native_password
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_general_ci
      --explicit_defaults_for_timestamp=true
      --lower_case_table_names=1
      --sql-mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
    ports:
      - 3306:3306
    volumes:
      - ./log:/var/log/mysql
      - ./data:/var/lib/mysql
      - ./conf:/etc/mysql
  • MYSQL_ROOT_PASSWORD: 123456: Set the environment variable MYSQL_ROOT_PASSWORD to "123456", which represents the password of the MySQL root user.
  • command:: Specify the command to be executed when the container starts. Here are some MySQL configuration options, including the default authentication plug-in, character set and collation, timestamp defaults, case sensitivity, SQL mode, etc.
  • ports:: Specify the port mapping of the container.
  • - 3306:3306: Map the host's 3306 port to the container's 3306 port, which is the default port of MySQL.
  • volumes:: Specify the mounting relationship between the directory on the local file system and the path within the container.
  • - ./log:/var/log/mysql: Mount the local "log" directory to the "/var/log/mysql" path in the container to store MySQL log files.
  • - ./data:/var/lib/mysql: Mount the local "data" directory to the "/var/lib/mysql" path in the container to store MySQL data files.
  • - ./conf:/etc/mysql: Mount the local "conf" directory to the "/etc/mysql" path in the container to store MySQL configuration files.
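
After starting the stack, the database can be checked with the mysql client bundled in the image, for example (a sketch using the password from the compose file):

docker-compose up -d
docker-compose exec db mysql -uroot -p123456 -e "SELECT VERSION();"
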
2.2 PostgreSQL
version: '3'
services:
  db:
    image: postgres:13.8
    container_name: postgres
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: 123456
      POSTGRES_DB: postgres
    ports:
      - "5433:5432"
    volumes:
      - ./data:/var/lib/postgresql/data
  • version: '3': Specify the version of the Docker Compose file as 3.
  • services:: Specify the service to be deployed.
  • db:: Define a service named "db", which will be the PostgreSQL database server.
  • image: postgres:13.8: The specified PostgreSQL image version is 13.8.
  • container_name: postgres: Specify the name of the container as "postgres".
  • restart: always: Set the container to automatically restart after exiting.
  • environment:: Specify the environment variables of the container.
  • POSTGRES_USER: postgres: Set the environment variable POSTGRES_USER to "postgres", which represents the user name of PostgreSQL.
  • POSTGRES_PASSWORD: 123456: Set the environment variable POSTGRES_PASSWORD to "123456", which represents the PostgreSQL password.
  • POSTGRES_DB: postgres: Set the environment variable POSTGRES_DB to "postgres", indicating the database name to be used.
  • ports:: Specify the port mapping of the container.
  • - "5433:5432": Maps the host's 5433 port to the container's 5432 port, which is the default port for PostgreSQL.
  • volumes:: Specify the mounting relationship between the directory on the local file system and the path within the container.
  • - ./data:/var/lib/postgresql/data: Mount the local "data" directory to the "/var/lib/postgresql/data" path in the container, which is the default data storage path for PostgreSQL.
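
A quick connectivity check could use the psql client inside the container, or connect from the host via the mapped port 5433 (a sketch, not a hardened setup):

docker-compose up -d
docker-compose exec db psql -U postgres -c '\l'
# or from the host: psql -h 127.0.0.1 -p 5433 -U postgres
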
2.3 Oracle
version: '3.0'
services:
  oracle:
    image: wnameless/oracle-xe-11g-r2
    container_name: oracle
    ports:
      - "1521:1521"
    environment:
      - ORACLE_ALLOW_REMOTE=true
  • environment:: Specify the environment variables of the container.
  • - ORACLE_ALLOW_REMOTE=true: Set the environment variable ORACLE_ALLOW_REMOTE to true to allow remote access to the Oracle database.
2.4 MongoDB
version: '3.0'
services:
  mongodb:
    image: mongo:latest
    container_name: mongodb
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
  • image: mongo:latest: Specify the MongoDB image to use, here is the latest version.
  • container_name: mongodb: Specify the name of the container as "mongodb".
  • environment:: Specify the environment variables of the container.
  • MONGO_INITDB_ROOT_USERNAME: root: Set the environment variable MONGO_INITDB_ROOT_USERNAME to "root", which represents the root user name of MongoDB.
  • MONGO_INITDB_ROOT_PASSWORD: 123456: Set the environment variable MONGO_INITDB_ROOT_PASSWORD to "123456", which represents the password of the MongoDB root user.
  • volumes:: Specify the mounting relationship between the directory on the local file system and the path within the container.
  • - ./data:/data/db: Mount the local "data" directory to the "/data/db" path in the container, which is the default data storage path of MongoDB.
  • ports:: Specify the port mapping of the container.
  • - "27017:27017": Map the host's 27017 port to the container's 27017 port, which is the default port of MongoDB.
2.5 sqlserver
version: '3.0'
services:
  db:
    image: mcr.microsoft.com/mssql/server:2017-latest
    restart: always
    container_name: sqlserver
    environment:
      ACCEPT_EULA: Y
      SA_PASSWORD: 123456
    ports:
      - 1433:1433
    volumes:
      - ./mssql:/var/opt/mssql
  • db:: Define a service named "db", which will be the SQL Server database server.
  • image: mcr.microsoft.com/mssql/server:2017-latest: Specifies the SQL Server image to use, here the latest build of the 2017 release.
  • restart: always: Automatically restart the container after exiting.
  • container_name: sqlserver: Specify the name of the container as "sqlserver".
  • environment:: Specify the environment variables of the container.
  • ACCEPT_EULA: Y: Set the environment variable ACCEPT_EULA to Y to indicate acceptance of the End User License Agreement.
  • SA_PASSWORD: 123456: Set the environment variable SA_PASSWORD to 123456, which is the password of the SQL Server system administrator (SA) account.
  • ports:: Specify the port mapping of the container.
  • - 1433:1433: Map the host's 1433 port to the container's 1433 port, which is the default port of SQL Server.
  • volumes:: Specify the mounting relationship between the directory on the local file system and the path within the container.
  • - ./mssql:/var/opt/mssql: Mount the local "mssql" directory to the "/var/opt/mssql" path in the container.
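
Note that SQL Server enforces password complexity for the SA account, so a trivial password such as 123456 may cause the container to exit; use a stronger one in practice. With that in mind, a connectivity check via the bundled sqlcmd might look like:

docker-compose up -d
docker-compose exec db /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '123456' -Q 'SELECT @@VERSION'   # replace with the SA_PASSWORD actually configured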

3. Message queue and event-driven system

3.1 ActiveMQ
version: '3'
services:
  activemq:
    image: rmohr/activemq:latest
    container_name: my_activemq
    ports:
      - "61616:61616"
      - "8161:8161"
    volumes:
      - ./data:/var/lib/activemq

This defines a service named activemq using the rmohr/activemq image. The image includes ActiveMQ, and the host's ports 61616 (for messaging) and 8161 (for the management interface) are mapped to the corresponding container ports. A volume is also mounted that maps the ActiveMQ data directory to the ./data directory on the host.

3.2 RabbitMQ
version: '3'
services:
  rabbitmq:
    image: rabbitmq:management
    container_name: my_rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - ./data:/var/lib/rabbitmq

This defines a service named rabbitmq using the official RabbitMQ image (the management tag also provides the web management interface). The host's ports 5672 (for AMQP) and 15672 (for the RabbitMQ management interface) are mapped to the corresponding container ports. A volume is also mounted that maps the RabbitMQ data directory to the ./data directory on the host.
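
A quick health check can go through rabbitmqctl inside the container; the management UI listens on port 15672 with the default guest/guest credentials:

docker-compose up -d
docker-compose exec rabbitmq rabbitmqctl status
# management UI: http://localhost:15672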

3.3 Apache Kafka
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:latest
    container_name: my_zookeeper
    ports:
      - "2181:2181"

  kafka:
    image: wurstmeister/kafka:latest
    container_name: my_kafka
    ports:
      - "9092:9092"
    expose:
      - "9093"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "my-topic:1:1"

This defines two services, zookeeper and kafka, both using images provided by wurstmeister. The zookeeper service listens on host port 2181, and the kafka service listens on host port 9092 (for external access). Several environment variables configure Kafka; note KAFKA_CREATE_TOPICS, which creates a topic named my-topic (1 partition, 1 replica) on startup.
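
As a rough verification, the Kafka CLI scripts inside the wurstmeister/kafka container can list the auto-created topic (a sketch; newer Kafka CLIs use --bootstrap-server instead of --zookeeper):

docker-compose up -d
docker-compose exec kafka kafka-topics.sh --list --zookeeper zookeeper:2181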

3.4 NATS
version: '3'
services:
  nats-server:
    image: nats:latest
    container_name: my-nats-container
    ports:
      - "4222:4222"
      - "6222:6222"
      - "8222:8222"

  nats-publisher:
    image: nats:latest
    container_name: my-nats-publisher
    command: nats-pub -s nats://nats-server:4222 my_subject "Hello NATS!"
    depends_on:
      - nats-server
  • nats-server: The service uses the official NATS image and maps the container's 4222, 6222, and 8222 ports to the host. These ports are used for client connections, cluster communication, and monitoring.
  • nats-publisher: The service also uses the NATS image and depends on the nats-server service. Its command publishes a message to the NATS server with nats-pub at startup.

4. Caching and in-memory data storage

4.1 Redis
version: '3'
services:
  redis:
    image: redis:latest
    container_name: my-redis-container
    ports:
      - "6379:6379"
    volumes:
      - ./redis-data:/data
    command: redis-server --appendonly yes
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=123456
  • image: redis:latest: Specifies the official Redis image.
  • container_name: my-redis-container: Specifies the name of the Redis container.
  • ports: - "6379:6379": Maps the container's 6379 port to the host's 6379 port.
  • volumes: - ./redis-data:/data: Maps the Redis data directory to the host for data persistence.
  • command: redis-server --appendonly yes: Sets the command used to start Redis and enables append-only persistence.
  • environment: - REDIS_REPLICATION_MODE=master - REDIS_PASSWORD=123456: Sets the Redis container's environment variables, including the replication mode and password.
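
Note that REDIS_REPLICATION_MODE and REDIS_PASSWORD are Bitnami-style variables; the official redis image does not read them, so to require a password here you would add --requirepass to the command line. A minimal check:

docker-compose up -d
docker-compose exec redis redis-cli ping    # expect PONG
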
4.2 Memcached
version: '3'
services:
  memcached:
    image: memcached:latest
    container_name: my-memcached-container
    ports:
      - "11211:11211"
    environment:
      - MEMCACHED_MEMORY_LIMIT=64 # set the memory limit to 64MB
      - MEMCACHED_LOG_FILE=/var/log/memcached/memcached.log
      - MEMCACHED_LOG_LEVEL=-vv # set the log level to verbose
    volumes:
      - ./memcached-data:/data
      - ./memcached-logs:/var/log/memcached
  • memcached: The service uses the official Memcached image and maps the container's 11211 port to the host.
  • environment: This section contains environment variables such as MEMCACHED_MEMORY_LIMIT for setting the memory limit, MEMCACHED_LOG_FILE for specifying the log file path, and MEMCACHED_LOG_LEVEL for setting the log level.
  • volumes: This section maps the container's /data directory to the ./memcached-data directory on the host and the /var/log/memcached directory to the ./memcached-logs directory on the host, persisting data and logs.
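
The official memcached image is normally tuned via command-line flags (for example command: memcached -m 64 -vv) rather than the MEMCACHED_* variables above, so treat them as illustrative. Either way, the running instance can be poked over the text protocol:

docker-compose up -d
printf 'stats\r\nquit\r\n' | nc localhost 11211 | head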

5. Distributed file system

5.1 FastDFS

FastDFS is an open source distributed file system designed to provide a high-performance, high-reliability file storage solution. It is typically used for distributed storage of large numbers of small files, such as images, audio, and video.

version: '3.3'
services:
  tracker:
    image: season/fastdfs:1.2
    container_name: tracker
    network_mode: host
    restart: always
    ports:
      - "22122:22122"
    command: "tracker"

  storage:
    image: season/fastdfs:1.2
    container_name: storage
    network_mode: host
    restart: always
    volumes:
      - "./storage.conf:/fdfs_conf/storage.conf"
      - "./storage_base_path:/fastdfs/storage/data"
      - "./store_path0:/fastdfs/store_path"
    environment:
      TRACKER_SERVER: "192.168.1.100:22122"
    command: "storage"

  nginx:
    image: season/fastdfs:1.2
    container_name: fdfs-nginx
    restart: always
    network_mode: host
    volumes:
      - "./nginx.conf:/etc/nginx/conf/nginx.conf"
      - "./store_path0:/fastdfs/store_path"
    environment:
      TRACKER_SERVER: "192.168.1.100:22122"
    command: "nginx"

Configuration file

storage.conf


# the name of the group this storage server belongs to
group_name=group1

# bind an address of this host
# empty for bind all addresses of this host
bind_addr=

# if bind an address of this host when connect to other servers
# (this storage server as a client)
# true for binding the address configed by above parameter: "bind_addr"
# false for binding any address of this host
client_bind=true

# the storage server port
port=23000

# connect timeout in seconds
# default value is 30s
connect_timeout=30

# network timeout in seconds
# default value is 30s
network_timeout=60

# heart beat interval in seconds
heart_beat_interval=30

# disk usage report interval in seconds
stat_report_interval=60

# the base path to store data and log files
base_path=/fastdfs/storage

# max concurrent connections the server supported
# default value is 256
# more max_connections means more memory will be used
max_connections=256

# the buff size to recv / send data
# this parameter must more than 8KB
# default value is 64KB
# since V2.00
buff_size = 256KB

# accept thread count
# default value is 1
# since V4.07
accept_threads=1

# work thread count, should <= max_connections
# work thread deal network io
# default value is 4
# since V2.00
work_threads=4

# if disk read / write separated
##  false for mixed read and write
##  true for separated read and write
# default value is true
# since V2.00
disk_rw_separated = true

# disk reader thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_reader_threads = 1

# disk writer thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_writer_threads = 1

# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
sync_wait_msec=50

# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval=0

# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time=00:00

# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time=23:59

# write to the mark file after sync N files
# default value is 500
write_mark_file_freq=500

# path(disk or mount point) count, default value is 1
store_path_count=1

# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
store_path0=/fastdfs/store_path
#store_path1=/home/yuqing/fastdfs2

# subdir_count  * subdir_count directories will be auto created under each
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path=256

# tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
tracker_server=192.168.1.100:22122

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group=

#unix username to run this program,
#not set (empty) means run by current user
run_by_user=

# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
# host[01-08,20-25].domain.com, for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
allow_hosts=*

# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributted by hash code
file_distribute_path_mode=0

# valid when file_distribute_to_path is set to 0 (round robin),
# when the written file count reaches this number, then rotate to next path
# default value is 100
file_distribute_rotate_count=100

# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes=0

# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
sync_log_buff_interval=10

# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
sync_binlog_buff_interval=10

# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval=300

# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size=512KB

# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority=10

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default values is empty
if_alias_prefix=

# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate=0

# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method=hash

# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace=FastDFS

# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive=0

# you can use "#include filename" (not include double quotes) directive to
# load FastDHT server list, when the filename is a relative path such as
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf

# if log to access log
# default value is false
# since V4.00
use_access_log = false

# if rotate the access log every day
# default value is false
# since V4.00
rotate_access_log = false

# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
access_log_rotate_time=00:00

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00

# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_access_log_size = 0

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record=false

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# use the ip address of this storage server if domain_name is empty,
# else this domain name will ocur in the url redirected by the tracker server
http.domain_name=

tracker_server=192.168.1.100:22122

# the port of the web server on this storage server
http.server_port=8888

nginx.conf

#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       8088;
        server_name  localhost;

        #charset koi8-r;

        # Thumbnails require a plugin and a separately built nginx image; omitted here
        #location /group([0-9])/M00/.*\.(gif|jpg|jpeg|png)$ {
         #   root /fastdfs/storage/data;
         #   image on;
         #   image_output off;
         #   image_jpeg_quality 75;
         #   image_backend off;
        #    image_backend_server http://baidu.com/docs/aabbc.png;
       # }

        # group1
        location /group1/M00 {
        # file storage directory
            root /fastdfs/storage/data;
            ngx_fastdfs_module;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
 }
}

5.2 GlusterFS

GlusterFS is an open source distributed file system that allows multiple storage servers to be aggregated into a unified storage pool, providing high availability and scalability.

version: '3'
services:
  glusterfs-server-1:
    image: gluster/gluster-centos
    container_name: glusterfs-server-1
    privileged: true
    command: /usr/sbin/init
    network_mode: host
    volumes:
      - /data/brick1

  glusterfs-server-2:
    image: gluster/gluster-centos
    container_name: glusterfs-server-2
    privileged: true
    command: /usr/sbin/init
    network_mode: host
    volumes:
      - /data/brick2

  glusterfs-client:
    image: gluster/gluster-centos
    container_name: glusterfs-client
    privileged: true
    command: /usr/sbin/init
    network_mode: host
    depends_on:
      - glusterfs-server-1
      - glusterfs-server-2
    volumes:
      - /mnt/glusterfs

networks:
  default:
    external:
      name: host

This defines three services: glusterfs-server-1, glusterfs-server-2, and glusterfs-client. The two glusterfs-server services represent two GlusterFS storage nodes, and the glusterfs-client service is a GlusterFS client. The officially provided GlusterFS Docker image is used, and privileged: true is set to obtain the necessary permissions.

5.3 MooseFS

MooseFS is an open source distributed file system designed to provide high-performance, high-availability distributed storage solutions.

version: '3'
services:
  # MooseFS master node
  mfsmaster:
    image: moosefs/moosefs:latest
    container_name: mfsmaster
    command: /bin/bash -c "mfsmaster start && mfschunkmaster start && tail -f /dev/null"
    ports:
      - "9419:9419"
    networks:
      - moosefs_net
    restart: always

  # MooseFS metadata server node
  mfsmeta:
    image: moosefs/moosefs:latest
    container_name: mfsmeta
    command: /bin/bash -c "mfsmetarestore start && tail -f /dev/null"
    networks:
      - moosefs_net
    restart: always

  # MooseFS chunk server node
  mfschunk:
    image: moosefs/moosefs:latest
    container_name: mfschunk
    command: /bin/bash -c "mfschunkserver start && tail -f /dev/null"
    networks:
      - moosefs_net
    restart: always

  # MooseFS CGI server node
  mfscgiserv:
    image: moosefs/moosefs:latest
    container_name: mfscgiserv
    command: /bin/bash -c "mfscgiserv start && tail -f /dev/null"
    networks:
      - moosefs_net
    restart: always

networks:
  moosefs_net:

mfsmaster:

  • image: Use the moosefs/moosefs image, which contains the master node (Master) of MooseFS.
  • container_name: Set the name of the container to mfsmaster.
  • command: Start the MooseFS Master node and Chunk Master node, and tail -f /dev/null to keep the container running.
  • ports: Map the host's 9419 port to the container's 9419 port for MooseFS monitoring.
  • networks: Use a custom network named moosefs_net.
  • restart: Always restart when the container exits.

mfsmeta:

  • Same as above, this is the configuration of the Metadata server node of MooseFS.

mfschunk:

  • Same as above, this is the configuration of the MooseFS Chunk server node.

mfscgiserv:

  • Same as above, this is the configuration of MooseFS's CGI server node.

networks:

  • Define a custom network moosefs_net to connect the MooseFS nodes.
5.4 Ceph

Ceph is an open source distributed storage system with high performance, high availability and scalability.

version: '3'
services:
  mon:
    image: ceph/daemon:latest-luminous
    container_name: ceph-mon
    network_mode: host
    environment:
      - CEPH_DAEMON=mon
      - MON_IP=192.168.1.100
      - CEPH_PUBLIC_NETWORK=your_public_network
    volumes:
      - /var/lib/ceph/mon/ceph-mon:/var/lib/ceph/mon/ceph-mon
    restart: always

  mgr:
    image: ceph/daemon:latest-luminous
    container_name: ceph-mgr
    network_mode: host
    environment:
      - CEPH_DAEMON=mgr
      - MON_IP=192.168.1.100
    restart: always

  osd:
    image: ceph/daemon:latest-luminous
    container_name: ceph-osd
    network_mode: host
    privileged: true
    environment:
      - OSD_DEVICE=/dev/sdb
      - OSD_FORCE_ZAP=1
      - MON_IP=192.168.1.100
    volumes:
      - /var/lib/ceph/osd/ceph-osd:/var/lib/ceph/osd/ceph-osd
    restart: always

networks:
  host_net:
    external: true
    name: host

This defines three services: mon, mgr, and osd, which represent Ceph's Monitor, Manager, and Object Storage Daemon (OSD) respectively.

In this configuration, the Docker image officially provided by Ceph is used, and the corresponding environment variables are set. Please replace 192.168.1.100 and your_public_network with the IP address and public network information of your Monitor node.

  • mon:
    • Uses the ceph/daemon image as the Monitor node.
    • MON_IP is set to the IP address of the Monitor node.
    • volumes stores the Monitor node's data in the host's /var/lib/ceph/mon/ceph-mon directory.
  • mgr:
    • Uses the ceph/daemon image as the Manager node.
    • MON_IP is set to the IP address of the Monitor node.
  • osd:
    • Uses the ceph/daemon image as the Object Storage Daemon node.
    • OSD_DEVICE is set to the device to use for storage (replace with the actual device path).
    • volumes stores the OSD node's data in the host's /var/lib/ceph/osd/ceph-osd directory.

6. Search and indexing services

6.1 Elasticsearch
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: my-elasticsearch-container
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 2g
    volumes:
      - ./elasticsearch-data:/usr/share/elasticsearch/data
  • elasticsearch: The service uses Elasticsearch's official Docker image and maps the container's 9200 and 9300 ports to the host.
  • environment: This section contains environment variables; discovery.type=single-node configures Elasticsearch as a single node.
  • ulimits and mem_limit: These sections configure memory locking (mlockall) and a container memory limit, which are recommended settings for Elasticsearch.
  • volumes: This section maps the container's /usr/share/elasticsearch/data directory to the ./elasticsearch-data directory on the host for data persistence.
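
Once the node is up (it can take a little while), the HTTP API offers a simple health check:

docker-compose up -d
curl http://localhost:9200/
curl 'http://localhost:9200/_cluster/health?pretty'
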
6.2 Solr
version: '3'
services:
  solr:
    image: solr:8.11.0
    container_name: my-solr-container
    ports:
      - "8983:8983"
    environment:
      - SOLR_CORE=mycore
    volumes:
      - ./solr-data:/opt/solr/server/solr/mycore/data
  • solr: The service uses Solr's official Docker image and maps the container's 8983 port to the host.
  • environment: This section contains an environment variable SOLR_CORE that specifies the Solr core (index) name; in this example the core name is mycore.
  • volumes: This section maps the container's /opt/solr/server/solr/mycore/data directory to the ./solr-data directory on the host for data persistence.
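
Note that with the official Solr image a core is usually created explicitly (for example by setting command: solr-precreate mycore); the SOLR_CORE variable alone may not create it. The Cores API can confirm the state:

docker-compose up -d
curl 'http://localhost:8983/solr/admin/cores?action=STATUS'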

7. Service discovery

7.1 Nacos
version: '3'
services:
  nacos-server:
    image: nacos/nacos-server:latest
    ports:
      - "8848:8848"
    environment:
      - PREFER_HOST_MODE=hostname
    command: -e embeddedStorage=true

  nacos-mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=nacos
      - MYSQL_USER=nacos
      - MYSQL_PASSWORD=nacos
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci

  nacos-redis:
    image: redis:latest
    ports:
      - "6379:6379"

  nacos-config:
    image: nacos/nacos-config:latest
    ports:
      - "8849:8849"
    environment:
      - PREFER_HOST_MODE=hostname
      - SPRING_DATASOURCE_PLATFORM=mysql
      - MYSQL_SERVICE_HOST=nacos-mysql
      - MYSQL_SERVICE_PORT=3306
      - MYSQL_SERVICE_DB_NAME=nacos
      - REDIS_SERVICE_HOST=nacos-redis
      - REDIS_SERVICE_PORT=6379
      - NACOS_SERVER_ADDR=nacos-server:8848

  nacos-console:
    image: nacos/nacos-console:latest
    ports:
      - "8849:8849"
    environment:
      - SPRING_DATASOURCE_PLATFORM=mysql
      - MYSQL_SERVICE_HOST=nacos-mysql
      - MYSQL_SERVICE_PORT=3306
      - MYSQL_SERVICE_DB_NAME=nacos
      - REDIS_SERVICE_HOST=nacos-redis
      - REDIS_SERVICE_PORT=6379
      - NACOS_SERVER_ADDR=nacos-server:8848

The nacos-server service:

  • Use nacos/nacos-server:latest image, which contains Nacos server components.
  • Expose the Nacos server to port 8848 of the host.
  • The PREFER_HOST_MODE=hostname environment variable is set, using the host name as the method of service registration.
  • Use the -e embeddedStorage=true command to enable embedded storage.

The nacos-mysql service:

  • Uses the official mysql:5.7 image.
  • Set MySQL-related environment variables, including username, password, etc.

The nacos-redis service:

  • Uses the official redis:latest image.
  • Expose the Redis service to port 6379 of the host.

The nacos-config service:

  • Use nacos/nacos-config:latest image, which contains the Nacos configuration center component.
  • Expose the Nacos configuration center to port 8849 of the host.
  • Set up connection information with MySQL, Redis, and Nacos servers.

The nacos-console service:

  • Use nacos/nacos-console:latest image, which contains the Nacos console component.
  • Expose the Nacos console to the host's port 8849.
  • Set up connection information with MySQL, Redis, and Nacos servers.
7.2 Consul

Consul, provided by HashiCorp, is an open source service discovery and configuration tool.

version: '3'
services:
  consul:
    image: consul:latest
    container_name: my-consul
    command: agent -server -bootstrap-expect=1 -ui -client=0.0.0.0
    ports:
      - "8500:8500"
      - "8600:8600/udp"
    volumes:
      - ./consul-data:/consul/data
    networks:
      - consul-net

networks:
  consul-net:
    driver: bridge
  • version: '3': Specify the version of the Docker Compose file.
  • services: Defines the list of services to be deployed.
  • consul: Defines a service named consul.
    • image: consul:latest: Using Consul's official Docker image, you can specify a specific version if needed.
    • container_name: my-consul: Set the name of the Docker container to my-consul.
    • command: agent -server -bootstrap-expect=1 -ui -client=0.0.0.0: Configure the startup command of Consul Agent. Here it is configured in Consul server (-server) mode and the Web UI (-ui) is enabled. -bootstrap-expect=1 indicates that this is a single-node Consul cluster.
    • ports: Map the port in the container to the host. Port 8500 is used for Consul's Web UI, and port 8600 is used for DNS queries.
    • volumes: Map the data directory in the container to the host to ensure Consul data persistence.
    • networks: Defines a custom network named consul-net.
  • networks: Configure a custom network consul-net to ensure that the Consul container can communicate over this network.
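
After startup, the HTTP API on port 8500 and the DNS interface on port 8600 can both be queried, for example:

docker-compose up -d
curl http://localhost:8500/v1/status/leader
dig @127.0.0.1 -p 8600 consul.service.consul
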
7.3 ZooKeeper

Zookeeper is an open source distributed coordination service that provides a simple yet powerful coordination system for distributed applications.

version: '3'

services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    container_name: zookeeper
    ports:
      - "2181:2181"  # 映射 Zookeeper 默认端口到主机
    environment:
      ZOO_MY_ID: 1  # 设置 Zookeeper 节点的 ID
      ZOO_SERVERS: server.1=zookeeper:2888:3888  # 定义 Zookeeper 集群中的服务器列表
    volumes:
      - ./data:/data  # 映射数据目录到主机
      - ./datalog:/datalog  # 映射事务日志目录到主机
    restart: always  # 始终在容器退出时重新启动

networks:
  default:
    external:
      name: zookeeper-net  # use an externally created network
7.4 Etcd

Etcd is a highly available key-value storage system mainly used for configuration sharing and service discovery.

version: '3'
services:
  etcd:
    image: quay.io/coreos/etcd:v3.4.13
    container_name: etcd
    command: ["etcd", "--data-dir=/etcd-data", "--name=etcd-node1", "--advertise-client-urls=http://0.0.0.0:2379", "--listen-client-urls=http://0.0.0.0:2379", "--initial-advertise-peer-urls=http://0.0.0.0:2380", "--listen-peer-urls=http://0.0.0.0:2380", "--initial-cluster=etcd-node1=http://etcd:2380", "--initial-cluster-token=etcd-cluster-token", "--initial-cluster-state=new"]
    ports:
      - "2379:2379"
      - "2380:2380"
    volumes:
      - ./etcd-data:/etcd-data
    restart: always

networks:
  default:
    external:
      name: etcd-net
  • image: The Etcd image version used. In this example, quay.io/coreos/etcd:v3.4.13 is used. Other versions can be selected as needed.

  • container_name: Specify the name of the container to facilitate referencing the container.

  • command: Specify command line parameters for Etcd runtime, including data directory, node name, advertising and listening addresses, initial cluster configuration, etc.

  • ports: Map Etcd default port to host.

  • volumes: Map the etcd data directory to the host to ensure that the data remains on the host after the container is stopped or deleted.

  • restart: Configure the container's restart policy to ensure that the etcd service is restarted when the container exits.

  • networks: An external network etcd-net is defined to ensure that containers can communicate with each other.
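
A quick read/write test with etcdctl inside the container (etcd v3.4 defaults to the v3 API) might look like:

docker-compose up -d
docker-compose exec etcd etcdctl put greeting "hello"
docker-compose exec etcd etcdctl get greeting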

8. Log collection and analysis

8.1 Fluentd

Fluentd is an open source tool for data collection and log forwarding.

version: '3'
services:
  fluentd:
    image: fluent/fluentd:latest
    container_name: fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    environment:
      - FLUENTD_CONF=fluent.conf
      - TZ=UTC
    ports:
      - "24224:24224"
      - "24224:24224/udp"
  • fluentd: The service uses Fluentd's official Docker image to run Fluentd.
  • volumes: maps the host's ./fluentd/conf directory to the /fluentd/etc directory within the container, so the Fluentd configuration file fluent.conf can be kept on the host for custom configuration.
  • environment: sets environment variables, including FLUENTD_CONF, which specifies the Fluentd configuration file name, and TZ, which sets the time zone.
  • ports: defines the port mapping between the container and the host, mapping the host's 24224 port to the container's 24224 port over both TCP and UDP.

Create a configuration file named fluent.conf in the ./fluentd/conf directory; it can be customized according to actual needs:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match **>
  @type stdout
</match>
8.2 Logstash

Logstash is one of the components in the Elastic Stack and is used for data processing and forwarding.

version: '3'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:7.10.0
    container_name: logstash
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
      - ./logstash/config:/usr/share/logstash/config
    ports:
      - "5000:5000"
    environment:
      - "LS_JAVA_OPTS=-Xmx256m -Xms256m"
  • logstash: The service uses the official Logstash Docker image provided by Elastic to run Logstash.
  • volumes: maps the host's ./logstash/pipeline directory to /usr/share/logstash/pipeline in the container for pipeline definitions, and the host's ./logstash/config directory to /usr/share/logstash/config in the container for Logstash configuration.
  • ports: maps the host's 5000 port to the container's 5000 port.
  • environment: sets the Java runtime options for Logstash.

Create a Logstash configuration file in the ./logstash/pipeline directory, for example logstash.conf.

input {
  tcp {
    port => 5000
  }
}

filter {
  # add any filter rules you need
}

output {
  stdout { codec => rubydebug }
}
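
With this pipeline, any line sent to TCP port 5000 should show up on Logstash's stdout via the rubydebug codec, for example:

docker-compose up -d
echo 'hello logstash' | nc localhost 5000
docker-compose logs -f logstash
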
8.3 Graylog

Graylog is an open source tool for log management and analysis. It provides powerful log aggregation, search, visualization and analysis functions, designed to help users process and understand large amounts of log data more easily. Deploying Graylog typically involves multiple components, including a Graylog server, a MongoDB database, and Elasticsearch.

version: '3'
services:
  # MongoDB
  mongo:
    image: mongo:4.4
    container_name: mongo
    volumes:
      - ./mongo/data:/data/db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin_password
      - MONGO_INITDB_DATABASE=graylog

  # Elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: elasticsearch
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
      - cluster.name=graylog
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"

  # Graylog
  graylog:
    image: graylog/graylog:4.2
    container_name: graylog
    depends_on:
      - mongo
      - elasticsearch
    ports:
      - "9000:9000"
      - "12201:12201/udp"
    environment:
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_BIND_ADDRESS=0.0.0.0
    links:
      - mongo:mongo
      - elasticsearch:elasticsearch
  • mongo: The service uses MongoDB's official Docker image to store Graylog data.
  • elasticsearch: The service uses Elasticsearch's official Docker image to store Graylog indexes.
  • graylog: The service uses Graylog's official Docker image, which includes the Graylog server and web interface.

Please note:

  • In the mongo service, MONGO_INITDB_ROOT_USERNAME, MONGO_INITDB_ROOT_PASSWORD and MONGO_INITDB_DATABASE set the MongoDB administrator user name, password, and initial database name respectively.
  • In the elasticsearch service, discovery.type is set to single-node, indicating that Elasticsearch will run in single-node mode.
  • In the graylog service, depends_on specifies the dependent services (mongo and elasticsearch) to make sure they start before Graylog starts.
  • Graylog's root password is set via GRAYLOG_ROOT_PASSWORD_SHA2; a SHA-256 hash of the password can be generated with echo -n yourpassword | shasum -a 256.
8.4 Kibana

Kibana is a component in the Elastic Stack for data visualization and analysis.

docker-compose.yml file:

version: '3'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:7.14.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
  • kibana: The service uses the official Kibana Docker image provided by Elastic to run Kibana.
  • environment: sets Kibana environment variables; ELASTICSEARCH_HOSTS specifies the address and port of Elasticsearch.
  • ports: maps the host's 5601 port to the container's 5601 port.

9. Monitoring system and alarm services

9.1 Prometheus

Prometheus is a powerful monitoring tool that can be used to collect and store system indicators and provides a flexible query language PromQL. It also supports alerts and graphical dashboards, making real-time monitoring of system performance easier.

docker-compose.yml file:

version: '3'
services:
  prometheus:
    image: prom/prometheus:v2.30.0
    container_name: prometheus
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.path=/prometheus"
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - ./prometheus/data:/prometheus
  • prometheus: The service uses the official Prometheus Docker image prom/prometheus:v2.30.0.
  • container_name: specifies the container name prometheus.
  • command: specifies the startup parameters for Prometheus, including the configuration file path and the storage path.
  • ports: maps the host's 9090 port to the container's 9090 port.
  • volumes: maps the host's ./prometheus/prometheus.yml file to /etc/prometheus/prometheus.yml in the container, and the host's ./prometheus/data directory to /prometheus in the container, which is used to store Prometheus data.

Create a configuration file named prometheus.yml in the ./prometheus directory to configure Prometheus's scrape targets and rules:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  # Add additional scrape configurations as needed
  • scrape_interval: sets the interval at which Prometheus scrapes metrics to 15 seconds.
  • scrape_configs: defines the scrape configuration. In this example only Prometheus's own metrics are monitored; more jobs and targets can be added as needed.
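
Prometheus exposes health and query endpoints that make it easy to confirm the setup, for example:

docker-compose up -d
curl http://localhost:9090/-/healthy
curl 'http://localhost:9090/api/v1/query?query=up'
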
9.2 Grafana

Grafana is an open source data visualization and monitoring platform that supports multiple data sources and provides rich charting and dashboard functions.

version: '3'
services:
  grafana:
    image: grafana/grafana:8.1.5
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - ./grafana/data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning
      - ./grafana/dashboards:/var/lib/grafana/dashboards
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
  • grafana: The service uses Grafana's official Docker image grafana/grafana:8.1.5.
  • container_name: specifies the container name grafana.
  • ports: maps the host's port 3000 to the container's port 3000.
  • volumes: maps the ./grafana/data directory on the host to the container's /var/lib/grafana for storing Grafana data, the ./grafana/provisioning directory to the container's /etc/grafana/provisioning for provisioning data sources and dashboards, and the ./grafana/dashboards directory to the container's /var/lib/grafana/dashboards for storing custom dashboards.
  • environment: sets the Grafana environment variables; GF_SECURITY_ADMIN_PASSWORD specifies the administrator password.

Create a data source configuration file named prometheus.yml in the ./grafana/provisioning/datasources directory:

apiVersion: 1

datasources:
- name: Prometheus
  type: prometheus
  access: proxy
  url: http://prometheus:9090
  basicAuth: false
  isDefault: true
  editable: true
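
After starting, Grafana's health endpoint and the admin password from the compose file can be used for a quick check (a sketch):

docker-compose up -d
curl http://localhost:3000/api/health
# log in at http://localhost:3000 with admin / admin (the GF_SECURITY_ADMIN_PASSWORD value)
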
9.3 Zabbix

Zabbix is open source software used to monitor and manage networks, servers, virtual machines, and various network devices.

version: '3'
services:
  zabbix-server:
    image: zabbix/zabbix-server-pgsql:alpine-5.4-latest
    container_name: zabbix-server
    ports:
      - "10051:10051"
    environment:
      - DB_SERVER_HOST=zabbix-db
      - POSTGRES_DB=zabbix
      - POSTGRES_USER=zabbix
      - POSTGRES_PASSWORD=zabbix
      - ZBX_SERVER_NAME=zabbix-server

  zabbix-web:
    image: zabbix/zabbix-web-nginx-pgsql:alpine-5.4-latest
    container_name: zabbix-web
    ports:
      - "8080:8080"
    environment:
      - DB_SERVER_HOST=zabbix-db
      - POSTGRES_DB=zabbix
      - POSTGRES_USER=zabbix
      - POSTGRES_PASSWORD=zabbix
      - ZBX_SERVER_NAME=zabbix-server
      - PHP_TZ=UTC

  zabbix-db:
    image: postgres:13-alpine
    container_name: zabbix-db
    environment:
      - POSTGRES_DB=zabbix
      - POSTGRES_USER=zabbix
      - POSTGRES_PASSWORD=zabbix
  • zabbix-server The service uses the official Docker image of Zabbix Server zabbix/zabbix-server-pgsql:alpine-5.4-latest. It is responsible for running the Zabbix server, listening on port 10051.
  • zabbix-web The service uses the official Docker image of Zabbix Web zabbix/zabbix-web-nginx-pgsql:alpine-5.4-latest. It is responsible for running the Zabbix web interface and listening on port 8080. Note that the configuration of this service needs to match the configuration of the zabbix-server service.
  • zabbix-db The service uses the official PostgreSQL Docker image postgres:13-alpine. It serves as the Zabbix database; Zabbix Server and Zabbix Web connect to it. Note that the PostgreSQL version may need to be adjusted depending on the Zabbix version.

Create a file named docker-compose.override.yml in the ./zabbix directory to specify the initialization SQL for the Zabbix database:

version: '3'
services:
  zabbix-db:
    volumes:
      - ./zabbix-db-init:/docker-entrypoint-initdb.d

In the ./zabbix-db-init directory, you can place a SQL file named schema.sql, which contains the initialization script for the Zabbix database.

9.4 Nagios

Nagios is an open source tool widely used for monitoring systems, networks, and infrastructure.

version: '3'
services:
  nagios:
    image: jasonrivers/nagios:latest
    container_name: nagios
    ports:
      - "8080:80"
    environment:
      - NAGIOSADMIN_USER=admin
      - NAGIOSADMIN_PASS=adminpassword

  nagios-plugins:
    image: nagios-plugins:latest
    container_name: nagios-plugins
    depends_on:
      - nagios
  • nagios: The service uses the jasonrivers/nagios image, which contains Nagios Core and some commonly used plug-ins. It listens on port 8080; the Nagios web interface is available at http://localhost:8080.
    • NAGIOSADMIN_USER and NAGIOSADMIN_PASS: these environment variables set the username and password of the Nagios administrator user.
  • nagios-plugins: The service uses the Nagios Plugins image, which contains a series of plug-ins for monitoring various services and resources. It depends on the nagios service so that Nagios Core is started first.

Create a configuration file named nagios.cfg in the ./nagios directory to configure the basic settings of Nagios:

# Define a host for the local machine
define host {
    use                     linux-server
    host_name               localhost
    alias                   localhost
    address                 127.0.0.1
    max_check_attempts      5
    check_period            24x7
    notification_interval  30
    notification_period    24x7
}

# Define a service to check the disk space of the root partition
define service {
    use                     local-service
    host_name               localhost
    service_description     Root Partition
    check_command           check_local_disk!20%!10%!/
    max_check_attempts      5
    check_interval          5
    retry_interval          3
    check_period            24x7
    notification_interval  30
    notification_period    24x7
}

This configuration defines the localhost host and a service that monitors the disk space of its root partition.

10. CI/CD Tools

10.1 Jenkins

Jenkins is an open source automation server for building, testing, and deploying software.

version: '3'
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    ports:
      - "8080:8080"
    volumes:
      - ./jenkins_home:/var/jenkins_home
    environment:
      - JAVA_OPTS=-Djenkins.install.runSetupWizard=false
      - JENKINS_OPTS=--prefix=/jenkins
    restart: always
  • jenkins: The service uses the official Jenkins Long Term Support (LTS) Docker image jenkins/jenkins:lts.
  • container_name: specifies the container name jenkins.
  • ports: maps the host's 8080 port to the container's 8080 port.
  • volumes: maps the ./jenkins_home directory on the host to the container's /var/jenkins_home for storing Jenkins data.
  • environment: sets Jenkins environment variables; JAVA_OPTS disables the setup wizard on first run, and JENKINS_OPTS sets the Jenkins path prefix to /jenkins.
  • restart: always: the container always restarts on exit.
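
Since the setup wizard is disabled by JAVA_OPTS here, the instance comes up without the initial unlock step; watching the logs is the simplest way to know when it is ready:

docker-compose up -d
docker-compose logs -f jenkins    # wait for "Jenkins is fully up and running"
# then open http://localhost:8080/jenkins/
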
10.2 Drone

Drone is an open source continuous integration and continuous delivery (CI/CD) platform that supports Docker containers.

version: '3'
services:
  drone-server:
    image: drone/drone:2
    container_name: drone-server
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./drone/server:/data
    environment:
      - DRONE_GITHUB_CLIENT_ID=YOUR_GITHUB_CLIENT_ID
      - DRONE_GITHUB_CLIENT_SECRET=YOUR_GITHUB_CLIENT_SECRET
      - DRONE_RPC_SECRET=YOUR_RPC_SECRET
      - DRONE_SERVER_HOST=your-drone-domain.com
      - DRONE_SERVER_PROTO=https
      - DRONE_USER_CREATE=username:your-github-username,admin:true
    restart: always

  drone-runner:
    image: drone/drone-runner-docker:2
    container_name: drone-runner
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_RPC_PROTO=http
      - DRONE_RPC_SECRET=YOUR_RPC_SECRET
      - DRONE_RUNNER_CAPACITY=2
    restart: always
  • drone-server is the server side of Drone, responsible for processing build and deployment requests.
  • drone-runner is the Drone runner, responsible for executing build tasks in containers.
  • DRONE_GITHUB_CLIENT_ID and DRONE_GITHUB_CLIENT_SECRET are the client ID and secret of the OAuth application used to authenticate with GitHub. You need to create an OAuth application on GitHub and configure these two values here.
  • DRONE_RPC_SECRET is the shared secret used to authenticate communication between the server and runners. Generate a unique secret for each Drone deployment, as shown below.
  • DRONE_SERVER_HOST is the domain name or IP address of your Drone service.
  • DRONE_SERVER_PROTO specifies the protocol used by the Drone service, here set to https.
  • DRONE_USER_CREATE is used to create an administrator user on first login. Replace your-github-username with your GitHub username.
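
A unique RPC secret can be generated with openssl, for example as below; the same value must be set for DRONE_RPC_SECRET on both drone-server and drone-runner.

# Generate a random shared secret for DRONE_RPC_SECRET
openssl rand -hex 16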

11. Code repository and version control services

GitLab

GitLab is an open source platform for managing code repositories, collaborative development, and continuous integration.

version: '3'
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    container_name: gitlab
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./gitlab/config:/etc/gitlab
      - ./gitlab/logs:/var/log/gitlab
      - ./gitlab/data:/var/opt/gitlab
    restart: always
  • The gitlab service uses the official Docker image of GitLab CE (Community Edition), gitlab/gitlab-ce:latest. This is a complete image containing all GitLab components.
  • container_name specifies the container name gitlab.
  • ports defines the port mapping between the container and the host: the host's port 80 is mapped to the container's port 80 (HTTP), and the host's port 443 to the container's port 443 (HTTPS).
  • volumes maps three directories on the host into the container, used to store GitLab configuration files, logs, and data respectively.
  • restart: always sets the container to always restart on exit.
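
Two common follow-ups, sketched here under assumptions: GitLab reads its main settings from the GITLAB_OMNIBUS_CONFIG environment variable (gitlab.example.com below is a placeholder), and recent GitLab versions write the auto-generated initial root password to /etc/gitlab/initial_root_password inside the container (the file is removed automatically after about 24 hours).

# Optional: set the external URL via an extra environment entry in docker-compose.yml
#   environment:
#     GITLAB_OMNIBUS_CONFIG: "external_url 'http://gitlab.example.com'"

# Read the auto-generated initial root password
docker exec -it gitlab grep 'Password:' /etc/gitlab/initial_root_password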

12. VPN and remote access services

12.1 OpenVPN

OpenVPN is an open source virtual private network (VPN) solution that allows secure connections to the Internet through encryption and tunneling technology.

version: '3'
services:
  openvpn:
    image: kylemanna/openvpn:2.6.1
    container_name: openvpn
    cap_add:
      - NET_ADMIN
    ports:
      - "1194:1194/udp"
    restart: always
    volumes:
      - ./openvpn-data:/etc/openvpn

networks:
  default:
    external:
      name: openvpn-net
  • image: The OpenVPN image version to use. In this example, kylemanna/openvpn:2.6.1 is used; other versions can be selected as needed.
  • container_name: Specifies the container name to make it easier to reference the container.
  • cap_add: Grants the NET_ADMIN capability so that the container can configure networking.
  • ports: Maps the default OpenVPN port (1194/udp) to the host.
  • restart: Configures the container's restart policy to ensure the OpenVPN service restarts when the container exits.
  • volumes: Maps the OpenVPN data directory to the host so that configuration files and certificates remain on the host after the container is stopped or deleted.

An OpenVPN configuration file and certificate need to be created before starting the service. You can use the easyrsa tool officially provided by OpenVPN to generate a certificate. The specific steps are as follows:

  1. Create and enter the openvpn-data directory:

    mkdir -p openvpn-data && cd openvpn-data
    
  2. Execute the following command to initialize the certificate:

    docker run -v $PWD:/etc/openvpn --rm kylemanna/openvpn:2.6.1 ovpn_genconfig -u udp://your_server_ip
    

    Replace your_server_ip with the public IP address of your server.

  3. Execute the following commands to generate the initial certificate and key:

    docker run -v $PWD:/etc/openvpn --rm -it kylemanna/openvpn:2.6.1 ovpn_initpki
    
  4. Start the OpenVPN service:

    docker-compose up -d
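
  5. (Optional) Generate a client certificate and export its .ovpn profile. This is a sketch using the easyrsa helpers bundled in the image, as documented in the kylemanna/openvpn README; CLIENTNAME is a placeholder:

    docker run -v $PWD:/etc/openvpn --rm -it kylemanna/openvpn:2.6.1 easyrsa build-client-full CLIENTNAME nopass
    docker run -v $PWD:/etc/openvpn --rm kylemanna/openvpn:2.6.1 ovpn_getclient CLIENTNAME > CLIENTNAME.ovpn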
    
12.2 FRP

Frp (Fast Reverse Proxy) is an open source reverse proxy tool. It exposes services running on an internal network through an external server, enabling intranet penetration (NAT traversal) and port mapping.

version: '3'
services:
  frps:
    image: snowdreamtech/frps:0.37.1
    container_name: frps
    ports:
      - "7000:7000"
      - "7500:7500"
    restart: always
    volumes:
      - ./frps.ini:/etc/frp/frps.ini

networks:
  default:
    external:
      name: frp-net
  • image: The frp server image version to use. In this example, snowdreamtech/frps:0.37.1 is used; other versions can be selected as needed.
  • container_name: Specifies the container name to make it easier to reference the container.
  • ports: Maps the default frp ports to the host. Port 7000 is the bind port that frp clients connect to, and port 7500 serves the frp dashboard (status page).
  • restart: Configures the container's restart policy to ensure the frp service restarts when the container exits.
  • volumes: Maps the frp configuration file frps.ini from the host into the container so that the configuration remains on the host after the container is stopped or deleted.
  • networks: An external network frp-net is defined to ensure that containers can communicate with each other.

Next, you need to create the Frp configuration file frps.ini and place it in the same directory as the docker-compose.yml file.

[common]
bind_port = 7000
vhost_http_port = 80
dashboard_port = 7500
dashboard_user = admin
dashboard_pwd = admin

# Add the port-mapping (proxy) configurations you need
[ssh]
type = tcp
local_ip = 127.0.0.1
local_port = 22
remote_port = 6000

[web]
type = http
local_ip = 127.0.0.1
local_port = 80
custom_domains = your-domain.com

Two example proxy configurations are included: one for SSH port mapping and one for HTTP port mapping; adjust them to your actual needs. Note that in frp, proxy sections such as [ssh] and [web] are normally defined on the client side in frpc.ini rather than in the server's frps.ini; a minimal client-side sketch follows.
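
A minimal client-side sketch (frpc.ini on the machine inside the intranet, run with a matching snowdreamtech/frpc image or the frpc binary; your_server_ip is a placeholder):

[common]
server_addr = your_server_ip
server_port = 7000

[ssh]
type = tcp
local_ip = 127.0.0.1
local_port = 22
remote_port = 6000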

12.3 WireGuard

WireGuard is a fast, modern, and secure VPN protocol whose simplicity and performance advantages make it a popular choice.

version: '3'
services:
  wireguard:
    image: linuxserver/wireguard:latest
    container_name: wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - PUID=1000  # replace with your user ID
      - PGID=1000  # replace with your group ID
      - TZ=UTC
    volumes:
      - ./config:/config
    ports:
      - "51820:51820/udp"
    restart: always

networks:
  default:
    external:
      name: wireguard-net
  • image: The WireGuard image to use. In this example, linuxserver/wireguard:latest is used; other versions can be selected as needed.
  • container_name: Specifies the container name to make it easier to reference the container.
  • cap_add: Grants the NET_ADMIN and SYS_MODULE capabilities so that the container can configure networking and load kernel modules.
  • environment: Sets environment variables, including the user ID (PUID), group ID (PGID), and time zone (TZ). These may need to be adjusted for your system.
  • volumes: Maps the WireGuard configuration directory to the host so that the configuration remains on the host after the container is stopped or deleted.
  • ports: Maps the default WireGuard port 51820/udp to the host.
  • restart: Configures the container's restart policy to ensure the WireGuard service restarts when the container exits.
  • networks: An external network wireguard-net is defined to ensure that containers can communicate with each other.


Next, you need to create the WireGuard configuration file and key in the config directory. Create the config directory and enter it:

mkdir config
cd config

Generate the WireGuard configuration file and key using the following commands:

docker run -it --rm \
  -v $(pwd):/etc/wireguard \
  linuxserver/wireguard:latest \
  /app/newpeer.sh

Enter information as prompted, including public key, private key, IPv4 address, etc. This will generate the configuration file wg0.conf and the key file.

Finally, start the WireGuard service using the following command:

docker-compose up -d

If everything goes well, you should be able to connect to the WireGuard server through the WireGuard client. Make sure to check out the documentation for the image you're using for more configuration options and best practices.
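
Alternatively, the linuxserver/wireguard image can generate the server and peer configurations itself via its documented SERVERURL and PEERS environment variables. This is a sketch (vpn.example.com is a placeholder for your public address), and /app/show-peer prints a generated peer's config and QR code:

    environment:
      - PUID=1000
      - PGID=1000
      - TZ=UTC
      - SERVERURL=vpn.example.com   # placeholder: address clients will connect to
      - PEERS=2                     # number of peer configs to generate

# After the container is up, show the generated config and QR code for peer 1:
docker exec wireguard /app/show-peer 1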

13. Online documentation

Onlyoffice

OnlyOffice is an open source office suite with document editing, collaboration, and integration capabilities.

version: '3'  
services:  
  onlyoffice:  
    image: onlyoffice/documentserver:latest  
    container_name: onlyoffice-server  
    restart: always  
    ports:  
      - "8080:80"  
      - "8443:443"  
    volumes:  
      - ./onlyoffice/data:/var/www/onlyoffice/Data  
      - ./onlyoffice/config:/var/lib/onlyoffice/config  
    environment:  
      - ALLOW_ANONYMOUS=1  
      - MAX_UPLOAD_SIZE=20M  
      - CERT_PEM=/var/lib/onlyoffice/config/cert.pem  
      - KEY_PEM=/var/lib/onlyoffice/config/key.pem
  • services:: Specifies the list of services to be deployed.
  • onlyoffice:: Defines a service named onlyoffice, which uses the OnlyOffice Document Server image.
  • image: onlyoffice/documentserver:latest: Specifies the OnlyOffice image name and version to use; the latest version is used here.
  • container_name: onlyoffice-server: Specifies the container name, here "onlyoffice-server".
  • restart: always: Sets the container to automatically restart after exiting.
  • ports:: Defines the port mapping of the container. The container's port 80 is mapped to the host's port 8080, and the container's port 443 to the host's port 8443.
  • volumes:: Defines the mounts between directories on the local file system and paths inside the container. The local ./onlyoffice/data directory is mounted to /var/www/onlyoffice/Data in the container, and the local ./onlyoffice/config directory to /var/lib/onlyoffice/config.
  • environment:: Defines environment variables and their values: allow anonymous access (ALLOW_ANONYMOUS=1), the maximum upload size (MAX_UPLOAD_SIZE=20M), and the paths to the SSL certificate and key (CERT_PEM and KEY_PEM).
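
After the container starts (this can take a minute or two), the Document Server exposes a simple health endpoint that should return true. A quick check, assuming the port mapping above:

curl http://localhost:8080/healthcheck
# expected response: true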

14. Toolchain

14.1 Portainer

Portainer is an open source container management interface designed to simplify the deployment and management of Docker containers.

version: '3'

services:
  portainer:
    image: portainer/portainer-ce:latest
    command: --swarm
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    deploy:
      placement:
        constraints:
          - node.role == manager

volumes:
  portainer_data:

portainer: This is a service definition using Portainer’s official Docker image portainer/portainer-ce:latest. This service is designed to run on Docker Swarm.

  • command: --swarm: This option tells Portainer to run in Swarm mode.
  • ports: - "9000:9000": This maps the container's port 9000 to the host's port 9000. You can access the Portainer web interface at http://localhost:9000 .
  • volumes: - /var/run/docker.sock:/var/run/docker.sock - portainer_data:/data: These volumes map the Docker daemon's Unix socket into the container so that Portainer can communicate with the Docker daemon, and persist Portainer data in the named volume portainer_data.
  • deploy: This is the configuration block for Swarm deployment.
    • placement: A constraint is used here that requires the Portainer service to run only on a Swarm manager node; see the deployment note below.
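
Because this file uses a deploy: block, it is intended for Docker Swarm rather than plain docker-compose up. A sketch of deploying it as a stack on an initialized Swarm:

docker swarm init                               # only if the node is not already a Swarm manager
docker stack deploy -c docker-compose.yml portainer
# Web UI: http://localhost:9000
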
14.2 Weave

Weave is an open source project for container networking that provides simple and easy-to-use networking solutions for Docker containers and container orchestration tools such as Kubernetes.

version: '3'
services:
  weave:
    image: weaveworks/weave:latest
    privileged: true
    network_mode: "host"
    environment:
      - WEAVER_DEBUG=1
    command: ["--local", "launch"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

This defines a service named weave, using Weave's official Docker image weaveworks/weave:latest.

  • privileged: true: This gives the container elevated system privileges, which Weave requires.
  • network_mode: "host": This places the container in the host's network namespace, allowing Weave to interact with the host network.
  • environment: - WEAVER_DEBUG=1: This environment variable enables Weave's debugging mode.
  • command: ["--local", "launch"]: These are arguments to the Weave start command, instructing Weave to launch on the local node.
  • volumes: - /var/run/docker.sock:/var/run/docker.sock: This lets the Weave process inside the container communicate with the host's Docker daemon.
14.3 Vault

Vault is an open source tool from HashiCorp for securely managing and protecting sensitive information such as access tokens, passwords, database credentials, and more. Vault is typically used in conjunction with Consul, which stores cluster state and configuration information for Vault.

version: '3'
services:
  consul:
    image: consul:latest
    container_name: consul
    command: "agent -dev -client=0.0.0.0"
    ports:
      - "8500:8500"
    networks:
      - vault_network

  vault:
    image: vault:latest
    container_name: vault
    environment:
      - VAULT_DEV_ROOT_TOKEN_ID=myroot
      - VAULT_ADDR=http://127.0.0.1:8200
    ports:
      - "8200:8200"
    networks:
      - vault_network
    depends_on:
      - consul
    cap_add:
      - IPC_LOCK
    entrypoint: ["vault", "server", "-dev-listen-address", "0.0.0.0:8200"]

networks:
  vault_network:
    driver: bridge

In this configuration file, two services are defined: consul and vault.

Consul service

  • image: consul:latest: Use Consul’s official Docker image.
  • command: "agent -dev -client=0.0.0.0": Start the Consul agent, use development mode, and allow access from any address.
  • ports: - "8500:8500": Map the Consul Web UI port to the host's port 8500.
  • networks: - vault_network: Join Consul to the network named vault_network.

Vault service

  • image: vault:latest: Use Vault's official Docker image.
  • environment: - VAULT_DEV_ROOT_TOKEN_ID=myroot - VAULT_ADDR=http://127.0.0.1:8200: Sets Vault's environment variables, including the dev-mode root token and the server address.
  • ports: - "8200:8200": Maps the Vault API port to the host's port 8200.
  • networks: - vault_network: Joins Vault to the network named vault_network.
  • depends_on: - consul: Ensures the Consul container is started before Vault.
  • cap_add: - IPC_LOCK: Grants the IPC_LOCK capability so that Vault can lock memory and avoid swapping secrets to disk.
  • entrypoint: ["vault", "server", "-dev-listen-address", "0.0.0.0:8200"]: Sets the Vault startup command.
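
Since VAULT_ADDR is already set inside the container and dev mode mounts a KV secrets engine at secret/, you can exercise Vault directly with docker exec. A sketch; the secret path and values are examples only:

docker exec -it vault vault login myroot
docker exec -it vault vault kv put secret/demo password=s3cr3t
docker exec -it vault vault kv get secret/demo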

15. API Gateways and Service Mesh

15.1 Kong

Kong is an open source tool for building and managing API gateways.

version: '3'
services:
  kong-database:
    image: postgres:9.6
    container_name: kong-database
    environment:
      POSTGRES_DB: kong
      POSTGRES_USER: kong
      POSTGRES_PASSWORD: kong

  kong:
    image: kong:2.7.0
    container_name: kong
    depends_on:
      - kong-database
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: kong
      KONG_ADMIN_LISTEN: 0.0.0.0:8001
      KONG_PROXY_LISTEN: 0.0.0.0:8000
  • The kong-database service uses a PostgreSQL image to store Kong's configuration information.
  • The kong service uses Kong's official Docker image and depends on the kong-database service.
  • The environment variables of the kong service configure the database type Kong uses, the database connection information, and the listen addresses of the admin API and the proxy. Note that this example publishes no ports; see the note below.
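
Two practical notes, sketched under assumptions: with a Postgres backend, Kong's database migrations must be bootstrapped once before the gateway starts, and you would typically add port mappings for the proxy (8000) and admin API (8001) so they are reachable from the host.

# Run the one-time database migrations before starting the gateway
docker-compose run --rm kong kong migrations bootstrap

# Suggested additions to the kong service:
#     ports:
#       - "8000:8000"   # proxy
#       - "8001:8001"   # admin API
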
15.2 Istio

Istio is an open source service mesh platform for connecting, monitoring, and securing microservice applications.

version: '3'
services:
  istio-control-plane:
    image: docker.io/istio/pilot:1.11.0
    container_name: istio-control-plane
    ports:
      - "15010:15010"
      - "15011:15011"
      - "15012:15012"
      - "8080:8080"
      - "8443:8443"
      - "15014:15014"
    command: ["istiod"]
    environment:
      - PILOT_CERT_PROVIDER=istiod
    volumes:
      - /var/run/secrets/tokens

  istio-ingressgateway:
    image: docker.io/istio/proxyv2:1.11.0
    container_name: istio-ingressgateway
    ports:
      - "15020:15020"
      - "15021:15021"
      - "80:80"
      - "443:443"
      - "15090:15090"
    environment:
      - PILOT_AGENT_ADDR=istio-control-plane:15012
      - PILOT_XDS_ADDR=istio-control-plane:15010
      - GATEWAY_CERT_FILE=/etc/certs/cert-chain.pem
      - GATEWAY_KEY_FILE=/etc/certs/key.pem
      - GATEWAY_SDS_ENABLED=false
      - SERVICE_NAME=istio-ingressgateway
      - DOWNSTREAM_TLS_CONTEXT=default

  istio-sidecar-injector:
    image: docker.io/istio/proxyv2:1.11.0
    container_name: istio-sidecar-injector
    entrypoint: /usr/local/bin/istio-iptables.sh
    ports:
      - "15001:15001"
      - "15006:15006"
    environment:
      - PILOT_AGENT_ADDR=istio-control-plane:15012
      - PILOT_XDS_ADDR=istio-control-plane:15010
      - CA_ADDR=istio-control-plane:15006
      - CA_PROVIDER=Citadel
      - CA_CERT_PATH=/etc/certs/cert-chain.pem
      - CA_KEY_PATH=/etc/certs/key.pem
    volumes:
      - /etc/certs

  bookinfo-app:
    image: docker.io/istio/examples-bookinfo-details-v1:1.11.0
    container_name: bookinfo-app
    ports:
      - "9080:9080"
  • The istio-control-plane service uses the Istio Pilot image and acts as the Istio control plane.
  • The istio-ingressgateway service uses an Istio ProxyV2 image to handle ingress traffic.
  • The istio-sidecar-injector service also uses the Istio ProxyV2 image and is used to inject the Envoy proxy into application containers.
  • The bookinfo-app service uses the Bookinfo application from the Istio samples to demonstrate Istio functionality; see the quick check below.
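
As a quick smoke test of the sample application, the Bookinfo details service answers simple HTTP requests on port 9080. A sketch assuming the port mapping above; it should return a small JSON document:

curl http://localhost:9080/details/0
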
15.3 Linkerd

Linkerd is an open source tool for building and managing service meshes.

version: '3'
services:
  linkerd:
    image: buoyantio/linkerd:stable-2.11.0
    container_name: linkerd
    ports:
      - "4191:4191"
      - "9990:9990"
    command: ["linkerd", "run"]

  linkerd-viz:
    image: buoyantio/linkerd-viz:stable-2.11.0
    container_name: linkerd-viz
    ports:
      - "8084:8084"
      - "8086:8086"
    environment:
      - LINKERD_VIZ_K8S_API_URL=http://linkerd:4191
      - LINKERD_VIZ_PUBLIC_PORT=8084
      - LINKERD_VIZ_RPC_PORT=8086
  • services: This is the main part of a Docker Compose file, defining the services that need to run.
    • linkerd: This is the Linkerd control-plane and data-plane service. It uses the buoyantio/linkerd:stable-2.11.0 image, listening on port 4191 for control-plane communication and port 9990 for the Linkerd admin console. The command field specifies the command to run when the container starts.
    • linkerd-viz: This is the Linkerd Viz service, used to visualize Linkerd's running state. It uses the buoyantio/linkerd-viz:stable-2.11.0 image, listening on port 8084 for the visual interface and port 8086 for Linkerd Viz's RPC.
  • image: Specifies the Docker image used by the service.
  • container_name: Specifies the name of the service container.
  • ports: Defines the port mapping between the container and the host.
  • command: Specifies the command to run when the container starts; here, Linkerd's linkerd run command.
  • environment: Sets environment variables. Here it configures Linkerd Viz, including the address and port of the Linkerd control plane.
15.4 Traefik

Traefik is an open source tool for reverse proxy and load balancing, often used to provide external access to containerized applications.

version: '3'
services:
  traefik:
    image: traefik:v2.5
    container_name: traefik
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  • The traefik service uses Traefik's official Docker image to run the Traefik reverse proxy.
  • The command field specifies Traefik's startup parameters: --api.insecure=true enables Traefik's API and dashboard, and --providers.docker=true enables Docker as Traefik's configuration provider.
  • ports defines the port mapping between the container and the host: the host's port 80 is mapped to the container's port 80 (proxied traffic), and the host's port 8080 to the container's port 8080 (dashboard/API).
  • volumes maps the host's Docker socket into the container so that Traefik can listen to Docker events and dynamically update its configuration.
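
With the Docker provider enabled, Traefik discovers routes from container labels. A minimal sketch of an extra service added to the same compose file; traefik/whoami and the hostname are only examples:

  whoami:
    image: traefik/whoami:latest
    labels:
      - "traefik.http.routers.whoami.rule=Host(`whoami.localhost`)"

# Then test the route through Traefik on port 80:
# curl -H "Host: whoami.localhost" http://localhost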

16. Testing Tools

16.1 JMeter

JMeter (Apache JMeter) is an open source tool for performance testing and load testing.

version: '3'
services:
  jmeter:
    image: justb4/jmeter:latest
    container_name: jmeter
    volumes:
      - ./jmeter:/jmeter
    command: -n -t /jmeter/test.jmx -l /jmeter/results.jtl

This defines a service named jmeter, using the community-maintained JMeter Docker image justb4/jmeter:latest.

  • volumes: - ./jmeter:/jmeter: This mounts the jmeter directory in the current directory into the container to store JMeter test plan files and test result files.
  • command: -n -t /jmeter/test.jmx -l /jmeter/results.jtl: These are the arguments to the JMeter startup command, where -n enables non-GUI mode, -t specifies the test plan file, and -l specifies the results file.

Create a JMeter test plan file named test.jmx in the ./jmeter directory. This file can be created with the JMeter GUI or written directly in a text editor.
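
If you also want an HTML report generated at the end of the run, JMeter's CLI supports the -e and -o options. A sketch of the extended command line for the service above:

    command: -n -t /jmeter/test.jmx -l /jmeter/results.jtl -e -o /jmeter/report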

16.2 Locust

Locust is an open source tool for performing load testing that uses Python scripts to define user behavior and test logic.

version: '3'
services:
  locust:
    image: locustio/locust:latest
    container_name: locust
    command: -f /locust/locustfile.py --host http://target-service
    ports:
      - "8089:8089"
    volumes:
      - ./locust:/locust

This defines a service named locust, using Locust's official Docker image locustio/locust:latest.

  • command: -f /locust/locustfile.py --host http://target-service: This is the parameter of the Locust startup command, where -f specifies the Locust file and --host specifies the address of the target service to be tested.
  • ports: - "8089:8089": Exposes the Locust web interface to the host's port 8089.
  • volumes: - ./locust:/locust: Mount the locust directory in the current directory into the container to store the Locust script file.

Create a Locust script file named locustfile.py in the ./locust directory. This file defines the user behavior and test logic.

from locust import HttpUser, task, between

class MyUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def my_task(self):
        self.client.get("/")
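
Start the service with docker-compose up -d and open the Locust web UI at http://localhost:8089 to launch a test against the configured host. Locust can also run without the UI using its documented headless flags; a sketch where the user count, spawn rate, and duration are examples:

docker-compose run --rm locust -f /locust/locustfile.py --host http://target-service --headless -u 50 -r 5 --run-time 1m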

17. Reverse proxy and load balancing services

17.1 Nginx

Nginx is a high-performance open source web server and reverse proxy server.

version: '3'
services:
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx/conf:/etc/nginx/conf.d
      - ./nginx/html:/usr/share/nginx/html

This defines a service named nginx, using Nginx's official Docker image nginx:latest.

  • ports: - "80:80": Map port 80 of the host to port 80 of the container so that Nginx can be accessed through port 80 of the host.
  • volumes: - ./nginx/conf:/etc/nginx/conf.d - ./nginx/html:/usr/share/nginx/html: These two volumes mount the nginx/conf and nginx/html directories of the host to the corresponding directories in the container respectively. This enables custom configuration and serving custom static content.

Create an Nginx configuration file named default.conf in the ./nginx/conf directory.

server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

This configuration file defines a simple Nginx server, listens to port 80 of the host, maps requests to the /usr/share/nginx/html directory, and defines the processing of error pages.

Create a static HTML file named index.html in the ./nginx/html directory. The HTML content can be written as required.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Welcome to My Nginx Server</title>
    <style>
        body {
            font-family: 'Arial', sans-serif;
            text-align: center;
            margin: 100px;
        }

        h1 {
            color: #333;
        }

        p {
            color: #666;
        }
    </style>
</head>
<body>
    <h1>Welcome to My Nginx Server</h1>
    <p>This is a simple HTML page served by Nginx.</p>
</body>
</html>
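
Since this section is about reverse proxying, default.conf can also forward requests to an upstream application instead of only serving static files. A sketch where backend is a hypothetical service name reachable from the Nginx container:

    location /api/ {
        proxy_pass http://backend:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
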
17.2 HAProxy

HAProxy (High Availability Proxy) is a popular open source load balancer and reverse proxy server.

version: '3'
services:
  haproxy:
    image: haproxy:latest
    container_name: haproxy
    ports:
      - "80:80"
    volumes:
      - ./haproxy:/usr/local/etc/haproxy
    networks:
      - webnet
    depends_on:
      - web1
      - web2

  web1:
    image: httpd:latest
    container_name: web1
    networks:
      - webnet

  web2:
    image: nginx:latest
    container_name: web2
    networks:
      - webnet

networks:
  webnet:
  • image: haproxy:latest: Use HAProxy's official Docker image.
  • container_name: haproxy: Set the container name to haproxy.
  • ports: - "80:80": Map port 80 of the host to port 80 of the container, allowing access to HAProxy through port 80 of the host.
  • volumes: - ./haproxy:/usr/local/etc/haproxy: Mount the host's haproxy directory to the /usr/local/etc/haproxy directory in the container to store the HAProxy configuration file.
  • networks: - webnet: Connect the container to a custom network named webnet.
  • depends_on: - web1 - web2: Indicates that the haproxy service depends on the web1 and web2 services, ensuring those two services are started before haproxy.

web1 and web2 services:

  • image: httpd:latest and image: nginx:latest: Use the official Apache HTTP Server and Nginx Docker images respectively.
  • container_name: web1 and container_name: web2: Set the container names to web1 and web2.
  • networks: - webnet: Connect these two services to the custom network named webnet.
  • The networks section defines a custom network named webnet that connects the haproxy, web1, and web2 services.

Create an HAProxy configuration file named haproxy.cfg in the ./haproxy directory.

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend web
    bind *:80
    mode http
    default_backend app_servers

backend app_servers
    mode http
    balance roundrobin
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server web1 web1:80 check
    server web2 web2:80 check

The configuration file defines a simple HAProxy configuration that listens on port 80 of the host and distributes traffic to two backend servers (web1 and web2).
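
HAProxy can also expose its built-in statistics page. A sketch of an extra section for haproxy.cfg; remember to publish the chosen port (8404 here is an example) in docker-compose.yml:

listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s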

Source: blog.csdn.net/LSW1737554365/article/details/134737384