A couple of days ago we ran into the infamous “too many open files” error when our Tomcat web server was under load. There are several blogs around the internet that try to deal with this issue, but none of them did the trick for us. The usual advice is to raise the ulimit (it defaults to something like 1024). To make the change survive a reboot, the first suggestion is to increase the value in /proc/sys/fs/file-max and then edit /etc/security/limits.conf, adding the following line (see here for more details):
* - nofile 2048
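Concretely, the standard recipe looks something like this (the file-max value below is illustrative, not from the original post; pick one that fits your system):

# Raise the system-wide ceiling on open file handles (illustrative value)
echo 65536 | sudo tee /proc/sys/fs/file-max

# Persist the system-wide ceiling across reboots via sysctl
echo "fs.file-max = 65536" | sudo tee -a /etc/sysctl.conf

# Raise the per-user limit: "*" = all users, "-" = both soft and hard
echo "* - nofile 2048" | sudo tee -a /etc/security/limits.conf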
But none of this worked for us. We saw that when running
cat /proc/<tomcat pid>/limits
the limit was still set to the initial value of 1024:
Limit                     Soft Limit  Hard Limit  Units
Max cpu time              unlimited   unlimited   seconds
Max file size             unlimited   unlimited   bytes
Max data size             unlimited   unlimited   bytes
Max stack size            8388608     unlimited   bytes
Max core file size        0           unlimited   bytes
Max resident set          unlimited   unlimited   bytes
Max processes             63810       63810       processes
Max open files            1024        1024        files
Max locked memory         65536       65536       bytes
Max address space         unlimited   unlimited   bytes
Max file locks            unlimited   unlimited   locks
Max pending signals       63810       63810       signals
Max msgqueue size         819200      819200      bytes
Max nice priority         0           0
Max realtime priority     0           0
Max realtime timeout      unlimited   unlimited   us
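If you only care about that one row, a quick way to check it (the pgrep pattern is an assumption; use whatever identifies your Tomcat process):

grep "Max open files" /proc/$(pgrep -f catalina | head -n1)/limits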
It was not until we found this thread that the reason and solution became clear. Our Tomcat instance was started as a service during boot, and there is a bug, discovered and filed (with a patch) in 2005, that doesn't seem to have been resolved yet. The bug shows itself by ignoring the max open files limit when daemons are started on Ubuntu/Debian: daemons launched from init scripts at boot never pass through a PAM login session, so the settings in limits.conf are never applied to them. The work-around suggested by “BOK” was to edit /etc/init.d/tomcat and add:
ulimit -Hn 16384
ulimit -Sn 16384
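In context, the calls belong near the top of the init script, before the JVM is launched, since child processes inherit the limits of the shell that starts them. A minimal sketch (the Tomcat paths and script layout are assumptions, not the actual Ubuntu script):

#!/bin/sh
# Sketch of /etc/init.d/tomcat with the work-around applied.
# Raising the limits here works because every process this script
# starts, including the JVM, inherits them.
ulimit -Hn 16384
ulimit -Sn 16384

case "$1" in
  start)
    /usr/share/tomcat/bin/startup.sh   # assumed install path
    ;;
  stop)
    /usr/share/tomcat/bin/shutdown.sh  # assumed install path
    ;;
esac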
Finally the max number of open files for Tomcat was increased!
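After restarting the service, the same check as before should report the new values (the restart command is assumed; it depends on how the service is registered):

sudo /etc/init.d/tomcat restart
grep "Max open files" /proc/$(pgrep -f catalina | head -n1)/limits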
Source: https://blog.jayway.com/2012/02/11/how-to-really-fix-the-too-many-open-files-problem-for-tomcat-in-ubuntu/