Linux@ulimit

Error:

2020-01-05 12:58:48,019 INFO  [ProcedureExecutor-2] master.AssignmentManager: Unable to communicate with hadoop,16020,1578200311421 in order to assign regions,
java.io.IOException: java.io.IOException: unable to create new native thread
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2457)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:717)
        at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1367)
        at org.apache.hadoop.hbase.executor.ExecutorService$Executor.submit(ExecutorService.java:230)
        at org.apache.hadoop.hbase.executor.ExecutorService.submit(ExecutorService.java:154)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1843)
        at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22737)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
        ... 3 more

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:871)
        at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1846)
        at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:3044)
        at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2991)
        at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.assign(ServerCrashProcedure.java:568)
        at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:270)
        at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:75)
        at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:139)
        at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:506)
        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1167)
        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:955)
        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:908)
        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:77)
        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:482)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: unable to create new native thread
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2457)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:717)
        at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1367)
        at org.apache.hadoop.hbase.executor.ExecutorService$Executor.submit(ExecutorService.java:230)
        at org.apache.hadoop.hbase.executor.ExecutorService.submit(ExecutorService.java:154)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1843)
        at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22737)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
        ... 3 more

        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:386)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:409)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:405)
        at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
        at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
        at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.readResponse(BlockingRpcConnection.java:600)
        at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.run(BlockingRpcConnection.java:334)
        at java.lang.Thread.run(Thread.java:748)

Analysis:

This problem is really caused by the Linux system's configuration. The key error is that the JVM is unable to create new native threads, which means the OS-level per-user process/thread limit has been hit, so the Linux limits themselves need to be raised, as follows:
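Before changing anything, it can help to confirm which limits the affected process actually inherited. A minimal sketch: here `$$` (the current shell's PID) stands in for the real target; on a live cluster you would substitute the RegionServer JVM's PID (found with `jps` or `pgrep`).

```shell
# Inspect the limits a running process inherited, via /proc/<pid>/limits.
# $$ is the current shell's PID, used here only for illustration;
# replace it with the HBase RegionServer's PID in practice.
grep -E 'Max processes|Max open files' /proc/$$/limits
```

If "Max processes" here is low (a common RHEL 6 default is 1024 for non-root users), that matches the "unable to create new native thread" symptom.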

Modify the file /etc/security/limits.d/90-nproc.conf as follows:

# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

hadoop		soft	nofile		4096
hadoop		soft	nproc		65535
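The two entries above only raise the soft limits; on some distributions the soft limit can still be capped by a lower hard limit. A more complete sketch, with both soft and hard entries (the hard values here are illustrative, not from the original post), would look like:

```
hadoop		soft	nofile		4096
hadoop		hard	nofile		65535
hadoop		soft	nproc		65535
hadoop		hard	nproc		65535
```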

Then open a new login session as that user (pam_limits applies these files at session start, so existing sessions keep the old values) and check with ulimit -a: the maximum number of open file descriptors and the maximum number of user processes should both show the raised values.
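The check can be narrowed to just the two relevant limits. Note this reads the current session's limits; on a real cluster you would run it from a fresh login of the service user (e.g. after `su - hadoop`, matching the user in the config above).

```shell
# ulimit -Sn reports the soft limit on open file descriptors (nofile);
# ulimit -Su reports the soft limit on user processes/threads (nproc).
ulimit -Sn
ulimit -Su
```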


After that, restarting the relevant Java processes no longer produces this error.

Reposted from blog.csdn.net/DataIntel_XiAn/article/details/103845475