Resolving java.net.BindException: Address already in use

In network programming, especially when too many new network connections are created within a short time, the exception java.net.BindException: Address already in use: JVM_Bind often appears. There are many write-ups about this exception on the web, and they usually say that the port in question is already in use by another program, but sometimes that is not the cause. After some careful searching I found some good material, which is recorded here piece by piece.

**********************************************************************************
Article 1

Too many new Socket operations are performed within a short time, and socket.close() does not release the bound port immediately; instead the port is put into the TIME_WAIT state and is only released after a while (240 s by default; you can see this with netstat -na). Eventually system resources are exhausted (on Windows it is the pool of ephemeral ports, which spans 1024-5000, that runs out). A minimal Java sketch that reproduces this pattern is given after the quoted FAQ and registry notes below.

Socket FAQ:
Remember that TCP guarantees all data transmitted will be delivered, if at all possible. When you close a socket, the server goes into a TIME_WAIT state, just to be really really sure that all the data has gone through. When a socket is closed, both sides agree by sending messages to each other that they will send no more data. This, it seemed to me, was good enough, and after the handshaking is done, the socket should be closed. The problem is two-fold. First, there is no way to be sure that the last ACK was communicated successfully. Second, there may be "wandering duplicates" left on the net that must be dealt with if they are delivered.
Andrew Gierth ([email protected]) helped to explain the closing sequence in the following Usenet posting:
Assume that a connection is in ESTABLISHED state, and the client is about to do an orderly release. The client's sequence no. is Sc, and the server's is Ss.

    Client                                                      Server
    ======                                                      ======
    ESTABLISHED                                                 ESTABLISHED
    (client closes)
    ESTABLISHED                                                 ESTABLISHED
                 <CTL=FIN+ACK><SEQ=Sc><ACK=Ss> ------->>
    FIN_WAIT_1
                 <<-------- <CTL=ACK><SEQ=Ss><ACK=Sc+1>
    FIN_WAIT_2                                                  CLOSE_WAIT
                 <<-------- <CTL=FIN+ACK><SEQ=Ss><ACK=Sc+1>     (server closes)
                                                                LAST_ACK
                 <CTL=ACK><SEQ=Sc+1><ACK=Ss+1> ------->>
    TIME_WAIT                                                   CLOSED
    (2*MSL elapses...)
    CLOSED

Resolution
Warning: Serious problems might occur if you modify the registry incorrectly by using Registry Editor or by using another method. These problems might require that you reinstall your operating system. Microsoft cannot guarantee that these problems can be solved. Modify the registry at your own risk.
The default maximum number of ephemeral TCP ports is 5000 in the products that are included in the 'Applies to' section. A new parameter has been added in these products. To increase the maximum number of ephemeral ports, follow these steps:
1. Start Registry Editor.
2. Locate the following subkey in the registry, and then click Parameters:
   HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
3. On the Edit menu, click New, and then add the following registry entry:
   Value name: MaxUserPort
   Value type: DWORD
   Value data: 65534
   Valid range: 5000-65534 (decimal)
   Default: 0x1388 (5000 decimal)
   Description: This parameter controls the maximum port number that is used when a program requests any available user port from the system. Typically, ephemeral (short-lived) ports are allocated between the values of 1024 and 5000 inclusive.
4. Quit Registry Editor.
Note: An additional TcpTimedWaitDelay registry parameter determines how long a closed port waits until the closed port can be reused.
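To make the failure mode described at the top of this article concrete, here is a minimal, illustrative Java sketch (the target host, port, and iteration count are made-up placeholders) that opens and immediately closes client connections in a tight loop. Every closed connection leaves its local ephemeral port in TIME_WAIT, so on a default Windows configuration the loop can run through the 1024-5000 ephemeral port range and eventually fail with java.net.BindException.

    import java.io.IOException;
    import java.net.BindException;
    import java.net.Socket;

    public class EphemeralPortExhaustionDemo {
        public static void main(String[] args) {
            String host = "127.0.0.1";  // placeholder target; any reachable TCP server will do
            int port = 8080;            // placeholder port

            for (int i = 0; i < 10000; i++) {
                try (Socket s = new Socket(host, port)) {
                    // Do nothing with the connection: the point is the rapid
                    // open/close cycle. close() returns immediately, but the
                    // local ephemeral port stays in TIME_WAIT for minutes.
                } catch (BindException e) {
                    // Raised once the OS has no free ephemeral port left.
                    System.err.println("Out of ephemeral ports after " + i + " connections: " + e);
                    return;
                } catch (IOException e) {
                    System.err.println("Connect failed: " + e);
                    return;
                }
            }
            System.out.println("Completed all connections without exhausting ports.");
        }
    }

Whether the loop actually fails depends on how fast it runs and on the limits discussed above (MaxUserPort and TcpTimedWaitDelay on Windows, the tcp_tw_* settings on Linux).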
Original link: http://blog.chinaunix.net/u/29553/showart_450701.html
**********************************************************************************
Article 2
The java.net.BindException: Address already in use: connect problem.
The rough cause is that many new Socket operations are performed within a short time, while socket.close() does not immediately release the bound port; instead the port is put into the TIME_WAIT state and only released after a while (240 s by default; you can see this with netstat -na). Eventually system resources are exhausted (on Windows it is the pool of ephemeral ports, which spans 1024-5000, that runs out).
There are two ways to avoid this problem. One is to raise the maximum number of connection threads of your web server; 1024 or 2048 is usually good enough. Taking Resin as an example, modify thread-pool.thread_max in resin.conf; if you put Apache in front of Resin, remember to adjust Apache as well.
The other is to change the operating-system network configuration of the machine running the web server and lower the TIME_WAIT timeout, for example to 30 s.
On Red Hat, check the relevant options:
[xxx@xxx ~]$ /sbin/sysctl -a | grep net.ipv4.tcp_tw
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_tw_recycle = 0
[xxx@xxx ~]$ vi /etc/sysctl.conf    (change the settings to:)
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
[xxx@xxx ~]$ sysctl -p    (make the kernel parameters take effect)
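The kernel settings above let the OS recycle TIME_WAIT ports more aggressively. On the Java side there is a related, listener-only knob: if a restarted server cannot bind its port because old connections are still in TIME_WAIT, SO_REUSEADDR allows the rebind. This is only a sketch of that option (the port number is a placeholder), not something the quoted article itself prescribes, and it does not help with client-side ephemeral-port exhaustion.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.ServerSocket;

    public class ReusableServerSocket {
        public static ServerSocket bind(int port) throws IOException {
            // Create the socket unbound so SO_REUSEADDR can be set before bind().
            ServerSocket server = new ServerSocket();
            // Allow binding even if connections from a previous process on this
            // port are still sitting in TIME_WAIT.
            server.setReuseAddress(true);
            server.bind(new InetSocketAddress(port));
            return server;
        }

        public static void main(String[] args) throws IOException {
            try (ServerSocket server = bind(8080)) {   // placeholder port
                System.out.println("Listening on " + server.getLocalSocketAddress());
            }
        }
    }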
The following passage from the socket FAQ discusses TIME_WAIT and is excerpted below:

2.7. Please explain the TIME_WAIT state.
Remember that TCP guarantees all data transmitted will be delivered, if at all possible. When you close a socket, the server goes into a TIME_WAIT state, just to be really really sure that all the data has gone through. When a socket is closed, both sides agree by sending messages to each other that they will send no more data. This, it seemed to me, was good enough, and after the handshaking is done, the socket should be closed. The problem is two-fold. First, there is no way to be sure that the last ACK was communicated successfully. Second, there may be "wandering duplicates" left on the net that must be dealt with if they are delivered.
Andrew Gierth ([email protected]) helped to explain the closing sequence in the following Usenet posting:
Assume that a connection is in ESTABLISHED state, and the client is about to do an orderly release. The client's sequence no. is Sc, and the server's is Ss.

    Client                                                      Server
    ======                                                      ======
    ESTABLISHED                                                 ESTABLISHED
    (client closes)
    ESTABLISHED                                                 ESTABLISHED
                 <CTL=FIN+ACK><SEQ=Sc><ACK=Ss> ------->>
    FIN_WAIT_1
                 <<-------- <CTL=ACK><SEQ=Ss><ACK=Sc+1>
    FIN_WAIT_2                                                  CLOSE_WAIT
                 <<-------- <CTL=FIN+ACK><SEQ=Ss><ACK=Sc+1>     (server closes)
                                                                LAST_ACK
                 <CTL=ACK><SEQ=Sc+1><ACK=Ss+1> ------->>
    TIME_WAIT                                                   CLOSED
    (2*MSL elapses...)
    CLOSED
Note: the +1 on the sequence numbers is because the FIN counts as one byte of data. (The above diagram is equivalent to Fig. 13 from RFC 793.)
Now consider what happens if the last of those packets is dropped in the network. The client has done with the connection; it has no more data or control info to send, and never will have. But the server does not know whether the client received all the data correctly; that's what the last ACK segment is for. Now the server may or may not care whether the client got the data, but that is not an issue for TCP; TCP is a reliable protocol, and must distinguish between an orderly connection close where all data is transferred, and a connection abort where data may or may not have been lost.
So, if that last packet is dropped, the server will retransmit it (it is, after all, an unacknowledged segment) and will expect to see a suitable ACK segment in reply. If the client went straight to CLOSED, the only possible response to that retransmit would be a RST, which would indicate to the server that data had been lost, when in fact it had not been.
(Bear in mind that the server's FIN segment may, additionally, contain data.)
Disclaimer: this is my interpretation of the RFCs (I have read all the TCP-related ones I could find), but I have not attempted to examine implementation source code or trace actual connections in order to verify it. I am satisfied that the logic is correct, though.
More commentary from Vic:
The second issue was addressed by Richard Stevens ([email protected], author of "Unix Network Programming", see ``1.5 Where can I get source code for the book [book title]?''). I have put together quotes from some of his postings and email which explain this. I have brought together paragraphs from different postings, and have made as few changes as possible.
From Richard Stevens ([email protected]):
If the duration of the TIME_WAIT state were just to handle TCP's full-duplex close, then the time would be much smaller, and it would be some function of the current RTO (retransmission timeout), not the MSL (the packet lifetime).
A couple of points about the TIME_WAIT state.
o The end that sends the first FIN goes into the TIME_WAIT state, because that is the end that sends the final ACK. If the other end's FIN is lost, or if the final ACK is lost, having the end that sends the first FIN maintain state about the connection guarantees that it has enough information to retransmit the final ACK.
o Realize that TCP sequence numbers wrap around after 2**32 bytes have been transferred. Assume a connection between a.1500 (host a, port 1500) and b.2000. During the connection one segment is lost and retransmitted. But the segment is not really lost, it is held by some intermediate router and then re-injected into the network. (This is called a "wandering duplicate".) But in the time between the packet being lost & retransmitted, and then reappearing, the connection is closed (without any problems) and then another connection is established between the same host, same port (that is, a.1500 and b.2000; this is called another "incarnation" of the connection). But the sequence numbers chosen for the new incarnation just happen to overlap with the sequence number of the wandering duplicate that is about to reappear. (This is indeed possible, given the way sequence numbers are chosen for TCP connections.) Bingo, you are about to deliver the data from the wandering duplicate (the previous incarnation of the connection) to the new incarnation of the connection. To avoid this, you do not allow the same incarnation of the connection to be reestablished until the TIME_WAIT state terminates.
Even the TIME_WAIT state doesn't completely solve the second problem, given what is called TIME_WAIT assassination. RFC 1337 has more details.
o The reason that the duration of the TIME_WAIT state is 2*MSL is that the maximum amount of time a packet can wander around a network is assumed to be MSL seconds. The factor of 2 is for the round-trip. The recommended value for MSL is 120 seconds, but Berkeley-derived implementations normally use 30 seconds instead. This means a TIME_WAIT delay between 1 and 4 minutes. Solaris 2.x does indeed use the recommended MSL of 120 seconds.
A wandering duplicate is a packet that appeared to be lost and was retransmitted. But it wasn't really lost ... some router had problems, held on to the packet for a while (order of seconds, could be a minute if the TTL is large enough) and then re-injects the packet back into the network. But by the time it reappears, the application that sent it originally has already retransmitted the data contained in that packet.
Because of these potential problems with TIME_WAIT assassinations, one should not avoid the TIME_WAIT state by setting the SO_LINGER option to send an RST instead of the normal TCP connection termination (FIN/ACK/FIN/ACK). The TIME_WAIT state is there for a reason; it's your friend and it's there to help you :-)
I have a long discussion of just this topic in my just-released "TCP/IP Illustrated, Volume 3". The TIME_WAIT state is indeed one of the most misunderstood features of TCP.
I'm currently rewriting "Unix Network Programming" (see ``1.5 Where can I get source code for the book [book title]?'') and will include lots more on this topic, as it is often confusing and misunderstood.
An additional note from Andrew:
Closing a socket: if SO_LINGER has not been called on a socket, then close() is not supposed to discard data. This is true on SVR4.2 (and, apparently, on all non-SVR4 systems) but apparently not on SVR4; the use of either shutdown() or SO_LINGER seems to be required to guarantee delivery of all data.
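In Java terms, the SO_LINGER shortcut that Stevens warns against above looks roughly like the following sketch (host and port are placeholders): setSoLinger(true, 0) makes close() abort the connection with an RST instead of the normal FIN/ACK exchange, so the local port never enters TIME_WAIT, and you lose exactly the protections the FAQ describes.

    import java.io.IOException;
    import java.net.Socket;

    public class LingerResetExample {
        public static void main(String[] args) throws IOException {
            try (Socket s = new Socket("127.0.0.1", 8080)) {
                // Linger timeout 0: close() sends an RST and skips TIME_WAIT.
                // As explained above, this forfeits TCP's handling of a lost
                // final ACK and of wandering duplicates, so it is discouraged.
                s.setSoLinger(true, 0);
                // ... use the socket ...
            } // close() here resets the connection rather than closing it cleanly.
        }
    }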
Original link: http://hi.baidu.com/w_ge/blog/item/105877c6a361df1b9c163d21.html
************************************************************************
Article 3

When you try to set up TCP connections from ports that are greater than 5000, you receive the error 'WSAENOBUFS (10055)'.

Symptoms
If you try to set up TCP connections from ports that are greater than 5000, the local computer responds with the following WSAENOBUFS (10055) error message:
An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.

Resolution
Important: This section, method, or task contains steps that tell you how to modify the registry. However, serious problems might occur if you modify the registry incorrectly. Therefore, make sure that you follow these steps carefully. For added protection, back up the registry before you modify it. Then, you can restore the registry if a problem occurs. For more information about how to back up and restore the registry, click the following article number to view the article in the Microsoft Knowledge Base:
322756 (http://support.microsoft.com/kb/322756/) How to back up and restore the registry in Windows

The default maximum number of ephemeral TCP ports is 5000 in the products that are included in the 'Applies to' section. A new parameter has been added in these products. To increase the maximum number of ephemeral ports, follow these steps:
1. Start Registry Editor.
2. Locate the following subkey in the registry, and then click Parameters:
   HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
3. On the Edit menu, click New, and then add the following registry entry:
   Value name: MaxUserPort
   Value type: DWORD
   Value data: 65534
   Valid range: 5000-65534 (decimal)
   Default: 0x1388 (5000 decimal)
   Description: This parameter controls the maximum port number that is used when a program requests any available user port from the system. Typically, ephemeral (short-lived) ports are allocated between the values of 1024 and 5000 inclusive.
4. Exit Registry Editor, and then restart the computer.

Note: An additional TcpTimedWaitDelay registry parameter determines how long a closed port waits until the closed port can be reused.

Original link: http://support.microsoft.com/kb/q196271/
 


Reposted from qqchinaok.iteye.com/blog/1149543