[Docker] MySQL container scheduled backup

Most of us are more familiar with a natively installed MySQL, and backing it up is routine. Suppose there is a MySQL database whose username is root and password is 123456, and we want to export the schemas db1 and db2.
On a local installation, we can run

mysqldump -uroot -p123456 --databases db1 db2 > /data_backup/xxx.sql

to export the schemas db1 and db2 into the file /data_backup/xxx.sql.
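In practice, a recurring backup usually gets a date-stamped file name rather than a fixed xxx.sql. A minimal sketch (the directory and naming convention here are my own placeholders, not from the article):

```shell
# Build a date-stamped backup path, e.g. /data_backup/backup_20240101.sql
# (directory and naming scheme are placeholders).
backup_file() {
  printf '%s/backup_%s.sql' "$1" "$(date +%Y%m%d)"
}

OUT=$(backup_file /data_backup)
echo "$OUT"
# The actual export (requires a running MySQL server):
# mysqldump -uroot -p123456 --databases db1 db2 > "$OUT"
```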
But what if MySQL runs inside a Docker container? How do we export then?
Many people's first thought is to use

docker exec -it $container_name mysqldump -uroot -p123456 --databases db1 db2 > /data_backup/xxx.sql

to export. And indeed, running the command above by hand produces a valid xxx.sql backup. But if you run the same statement from a timer (a cron job, say), you get an empty xxx.sql file.
Why?
We usually assume that any statement or script that works by hand will also work from a timer; the timer is just a trigger.
That is true in principle, but the problem lies in how the command is executed.
When we run it manually, we are sitting in an interactive terminal, so docker exec -it, which allocates a pseudo-terminal via -t, works fine.
When the timer fires, however, there is no terminal attached to the job at all. The -t flag asks Docker to allocate a pseudo-TTY for an input that does not exist, so docker exec fails (typically with an error like "the input device is not a TTY") and the redirected xxx.sql ends up empty.
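You can observe the difference yourself: `test -t 0` reports whether stdin is attached to a terminal, and under cron (or any redirected input) it is not. A small sketch:

```shell
# Sketch: detect whether stdin is a terminal, which is exactly how a
# cron-run script differs from an interactive shell. Cron jobs have no
# TTY, which is why `docker exec -t` fails there.
stdin_kind() {
  if [ -t 0 ]; then echo "tty"; else echo "no-tty"; fi
}

stdin_kind
```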
So the correct command is:

docker exec -i $container_name mysqldump -uroot -p123456 --databases db1 db2 > /data_backup/xxx.sql

Just remove the t: -i keeps stdin open for the pipe, while -t (the pseudo-terminal) is only needed for interactive sessions.
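Putting it together, a cron-friendly backup script might look like the sketch below. The container name, credentials, and paths are placeholders of mine, not values from the article:

```shell
#!/bin/sh
# Hypothetical nightly backup script (container name, credentials, and
# paths are placeholders).

# Build the dump command. Note `-i` only, never `-it`, so cron can run it.
dump_cmd() {
  printf 'docker exec -i %s mysqldump -uroot -p123456 --databases db1 db2' "$1"
}

CONTAINER=mysql_container
OUTFILE="/data_backup/backup_$(date +%Y%m%d).sql"

echo "Would run: $(dump_cmd "$CONTAINER") > $OUTFILE"
# To actually back up (requires Docker and the running container):
# $(dump_cmd "$CONTAINER") > "$OUTFILE"
```

Then schedule it with a crontab entry, for example `0 2 * * * /path/to/backup.sh` (the time and script path are examples).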

Origin blog.csdn.net/kida_yuan/article/details/128985180