HBase concurrency problem: too many open files

2016-06-01 16:05:22,776 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Too many open files
2016-06-01 16:05:22,776 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_3790131629645188816_18192
2016-06-01 16:13:01,966 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_-299035636445663861_7843 file=/hbase/SendReport/83908b7af3d5e3529e61b870a16f02dc/data/17703aa901934b39bd3b2e2d18c671b4.9a84770c805c78d2ff19ceff6fecb972
     at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
     at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
     at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
     at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
     at java.io.DataInputStream.readBoolean(DataInputStream.java:242)
     at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:116)
     at org.apache.hadoop.hbase.io.Reference.read(Reference.java:149)
     at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:216)
     at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:282)
     at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
     at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2510)
     at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:449)
     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3228)
     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3176)
     at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:331)
     at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:107)
     at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
     at java.lang.Thread.run(Thread.java:722)
Cause and fix: on Linux the default limit on the number of open file descriptors per process is typically 1024. Running ulimit -n 65535 raises the limit immediately, but the change is lost after a reboot or a new login session. To make it persistent, use one of the following three approaches (a quick verification sketch follows the list):
1. Add the line ulimit -SHn 65535 to /etc/rc.local
2. Add the line ulimit -SHn 65535 to /etc/profile
3. Append the following two lines to /etc/security/limits.conf:
* soft nofile 65535
* hard nofile 65535
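
As a sanity check, the following minimal sketch confirms that the RegionServer really is close to the descriptor limit and that the new limit is in effect; it assumes jps is on the PATH, the RegionServer shows up under the class name HRegionServer, and you have permission to read its /proc entries (run as the hbase user or root). Adjust to your environment.

     # Current soft and hard limits for this shell session
     ulimit -Sn
     ulimit -Hn

     # Find the RegionServer PID (assumes a standard HBase install; adjust the pattern if needed)
     RS_PID=$(jps | grep HRegionServer | awk '{print $1}')

     # Limit actually applied to the running process
     grep "Max open files" /proc/$RS_PID/limits

     # Number of file descriptors the process currently holds
     ls /proc/$RS_PID/fd | wc -l

Note that changes to /etc/security/limits.conf only apply to new login sessions, so the HBase daemons must be restarted from a fresh session before the raised limit shows up in /proc/<pid>/limits.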

Reposted from my.oschina.net/Allenbigdata/blog/1636100