java.net.UnknownHostException: host not found [Solved]

Problem: java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "master":9000; java.net.UnknownHostException

Error log

    2017-07-13 21:26:45,915 FATAL [master:16000.activeMasterManager] master.HMaster: Failed to become active master
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "master":9000; java.net.UnknownHostException; For more details see:  http://wiki.apache.org/hadoop/UnknownHost
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:744)
    at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:409)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1518)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy18.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:666)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy19.setSafeMode(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
    at com.sun.proxy.$Proxy20.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2596)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1223)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1207)
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525)
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693)
    at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189)
    at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException
    ... 32 more
2017-07-13 21:26:45,924 FATAL [master:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "master":9000; java.net.UnknownHostException; For more details see:  http://wiki.apache.org/hadoop/UnknownHost
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:744)
    at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:409)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1518)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy18.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:666)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy19.setSafeMode(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
    at com.sun.proxy.$Proxy20.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2596)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1223)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1207)
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525)
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693)
    at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189)
    at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException
    ... 32 more
2017-07-13 21:26:45,925 INFO  [master:16000.activeMasterManager] regionserver.HRegionServer: STOPPED: Unhandled exception. Starting shutdown.

Additional details

1. The firewall is disabled on every node, and everything runs as root.
2. Hadoop starts normally: jps shows the daemons running, and the web UIs on ports 50070 and 8088 are reachable from a browser.
3. ZooKeeper starts normally according to jps.
4. All Hadoop-related jars under hbase/lib were deleted and replaced with the Hadoop jars from hadoop/share, and aws-java-sdk-core-1.11.158.jar and aws-java-sdk-s3-1.11.155.jar were added.

Versions

1. Hadoop 2.7.2
2. HBase 1.2.6
3. ZooKeeper 3.4.2

/etc/hosts configuration

192.168.1.151 master
192.168.1.152 slave1
192.168.1.153 slave2
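Before touching any Hadoop configuration, it is worth confirming that every node can actually resolve these names. A minimal stdlib sketch (the hostnames are the ones from the /etc/hosts above; run it on each node):

```python
import socket

def resolve(host):
    """Return the IPv4 address a hostname resolves to, or None on failure."""
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None

# On a correctly configured node these print the IPs from /etc/hosts;
# a None result is exactly what surfaces as java.net.UnknownHostException.
for host in ("master", "slave1", "slave2"):
    print(host, "->", resolve(host))
```

If any of these prints None, the /etc/hosts entry is missing or not being consulted on that node.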

Hadoop configuration

core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
  </property>
</configuration>
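A side note, unrelated to the fix below: fs.default.name has been deprecated since Hadoop 2.x in favor of fs.defaultFS. Both keys still work in 2.7.2, but the newer name is preferred:

```xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>
```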

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop/hdf/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop/hdf/name</value>
    <final>true</final>
  </property>
</configuration>

mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>

yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>

slaves

slave1
slave2

ZooKeeper configuration

zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/usr/local/zookeeper/zookeeper-data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.0=master:2888:3888
server.1=slave1:2888:3888
server.2=slave2:2888:3888

myid: 0, 1, and 2 on the three hosts, respectively

HBase configuration

hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>hdfs://master:60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/zookeeper/zookeeper-data</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>

regionservers

master
slave1
slave2

Solution:

Pin the ResourceManager to the host's IP by adding the following property to /usr/local/hadoop/etc/hadoop/yarn-site.xml:

  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.1.151</value>
  </property>
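The "local host is: (unknown)" half of the error message means the machine could not resolve its own hostname either, which is why pinning an explicit IP helps. A quick stdlib check for that condition (no cluster required; this is an illustrative sketch, not part of the original fix):

```python
import socket

def local_hostname_resolves():
    """True if this machine's own hostname maps to an IP, as Hadoop requires."""
    try:
        socket.gethostbyname(socket.gethostname())
        return True
    except socket.gaierror:
        return False

# If this prints False, add the machine's hostname to /etc/hosts --
# an unresolvable local hostname is what Hadoop reports as "local host is: (unknown)".
print("local hostname resolves:", local_hostname_resolves())
```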


Reposted from blog.csdn.net/weixin_39394526/article/details/75106737