I'm working on a Dockerized application that uses MySQL to store very large databases (for legacy reasons). MySQL itself is installed on the host machine, not inside a container.
Today I was doing routine work that I've been doing for one or two months, and all of a sudden I couldn't communicate with my database anymore.
The JDBC URL of the database has been the same for months: jdbc:mysql://localhost:3306/dbname?verifyServerCertificate=false&useSSL=true&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC
As soon as I try to connect to the database from my application running on the VM, I receive:
java.sql.SQLException: Cannot create PoolableConnectionFactory (Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
...
Caused by: java.net.ConnectException: Connection timed out
The guilty code is this:
private static boolean checkForExistence(String dbName) {
    boolean exists = false;
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName(JDBC_DRIVER); // com.mysql.cj.jdbc.Driver
    dataSource.setUrl(URL); // the JDBC URL shown above; was this.url, which cannot compile in a static method
    dataSource.setUsername(USER);
    dataSource.setPassword(PASS);
    Connection conn = null;
    ResultSet dbs = null;
    try {
        conn = dataSource.getConnection(); // this is the line that triggers the exception
        System.out.println(conn != null);
        dbs = conn.getMetaData().getCatalogs();
        while (dbs.next() && !exists)
            if (dbs.getString(1).equals(dbName))
                exists = true;
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        try {
            // close the ResultSet before the Connection, then the pool itself
            if (dbs != null)
                dbs.close();
            if (conn != null)
                conn.close();
            dataSource.close();
        } catch (SQLException e) {
            logger.error(e.getMessage());
        }
    }
    return exists;
}
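To separate the connection pool from the network as a suspect, it can help to bypass BasicDataSource entirely and connect with plain DriverManager. This is a minimal sketch (user/password are placeholders; `connectTimeout` and `socketTimeout` are standard Connector/J properties, in milliseconds) so a dead route fails fast instead of hanging for the OS default TCP timeout:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class RawJdbcProbe {
    // Appends driver-side timeouts to a JDBC URL so a blocked route
    // fails after `millis` instead of the OS-level TCP timeout.
    static String withTimeouts(String baseUrl, int millis) {
        String sep = baseUrl.contains("?") ? "&" : "?";
        return baseUrl + sep + "connectTimeout=" + millis + "&socketTimeout=" + millis;
    }

    public static void main(String[] args) {
        String url = withTimeouts(
            "jdbc:mysql://localhost:3306/dbname?verifyServerCertificate=false&useSSL=true",
            3000);
        try (Connection conn = DriverManager.getConnection(url, "user", "pass")) {
            System.out.println("connected: " + conn.isValid(2));
        } catch (SQLException e) {
            // If this fails the same way, the problem is the network or the
            // server, not the DBCP pool configuration.
            System.out.println("raw JDBC failed: " + e.getMessage());
        }
    }
}
```

If the raw connection also times out, the pool is exonerated and the investigation can move to the network layer.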
I tried the following things:
- From my local machine: telnet <public ip of the VM> 3306 -> Connection timed out
- From my local machine: mysql -h<public ip of the VM> -u user -p -> ERROR 2003 (HY000): Can't connect to MySQL server on 'remoteHostname.com' (110)
- From inside the VM: nc localhost 3306 -> I can get in.
- From inside the VM: mysql -h127.0.0.1 -u user -p -> I can get in.
- From inside the VM: mysql -h<public ip address> -u user -p -> I can get in.
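The telnet checks above can also be reproduced from the application's side. A short sketch of a TCP reachability probe (the host and port are whatever you would pass to telnet):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    // Programmatic equivalent of `telnet host 3306`: attempt a plain TCP
    // connect with a short timeout and report whether the port answered.
    static boolean reachable(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false; // refused, timed out, or filtered
        }
    }

    public static void main(String[] args) {
        System.out.println("3306 reachable: " + reachable("localhost", 3306, 2000));
    }
}
```

The pattern in the results above ("timed out" from outside, "I can get in" from inside) usually points at a firewall, routing, or interface problem between the two hosts rather than at mysqld itself.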
If I run netstat -tulpn | grep 3306 I get:
tcp6 0 0 :::3306 :::* LISTEN -
tcp6 0 0 :::33060 :::* LISTEN -
This is my my.cnf:
[mysqld]
default_authentication_plugin=mysql_native_password
datadir=/local/user/mysql
socket=/var/lib/mysql/mysql.sock
general_log_file=/var/log/mysql.log
general_log=1
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
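One thing worth noting: the config above sets no bind-address, so mysqld falls back to the MySQL 8 default of listening on all addresses, which is consistent with the :::3306 line in the netstat output (a tcp6 wildcard socket also accepts IPv4 on a dual-stack host). For comparison only, a restricted setup would look like this hypothetical fragment:

```ini
[mysqld]
# Not present in the config above. MySQL 8 defaults to "*" (all
# interfaces); a value like the one below would explain remote
# timeouts, but that is not the case here.
bind-address = 127.0.0.1
```

Since no such restriction is in place, the listener itself looks healthy.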
I can connect to the db through phpMyAdmin, but I think that's because it uses a socket connection.
If I check both /var/log/mysql.log and /var/log/mysqld.log (the latter is the error log), there's no evidence of anything: when I try to connect, nothing is written to either.
EDIT: If I use the same settings to connect to the MySQL instance on my local machine, everything works fine. So could this be a networking problem? Or has the VM's instance of MySQL stopped accepting TCP connections for some reason?
RE-EDIT: Checking /var/log/messages, I found out that the host machine had run out of disk space. From that moment on, the docker0 interface kept looping between a blocked state and a disabled state.
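To watch that flapping without tailing /var/log/messages, the interface state can also be polled from the JVM. A quick sketch (run it on the VM; docker0 only appears there):

```java
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

public class IfaceCheck {
    public static void main(String[] args) throws SocketException {
        // Lists every interface the JVM can see and whether it is up;
        // a flapping docker0 shows up with up=false between resets.
        for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            System.out.println(nif.getName() + " up=" + nif.isUp());
        }
    }
}
```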
Configuration:
- Remote VM with RHEL7
- 32 cores CPU
- 64GB RAM
- MySQL 8.0.13 - MySQL Community Server - GPL
- PhpMyAdmin 4.8.4
In the RE-EDIT section of the question I added new details. I still can't run the application correctly with the old configuration, but using the --network=host parameter in docker run did the trick.
That's just a workaround; I'll update this question with further information.