A Hadoop cluster typically uses many ports: some for communication between daemons and RPC access, others for HTTP access. As the number of components around Hadoop grows, it becomes hard to remember which port belongs to which service, so this page collects them in one place for reference.
The components covered here are HDFS, YARN, HBase, Hive, and ZooKeeper:
| Component | Daemon | Default port | Configuration | Notes |
| --- | --- | --- | --- | --- |
| HDFS | DataNode | 50010 | dfs.datanode.address | DataNode data-transfer port |
| HDFS | DataNode | 50075 | dfs.datanode.http.address | HTTP service port |
| HDFS | DataNode | 50475 | dfs.datanode.https.address | HTTPS service port |
| HDFS | DataNode | 50020 | dfs.datanode.ipc.address | IPC service port |
| HDFS | NameNode | 50070 | dfs.namenode.http-address | HTTP service port |
| HDFS | NameNode | 50470 | dfs.namenode.https-address | HTTPS service port |
| HDFS | NameNode | 8020 | fs.defaultFS | RPC port that clients connect to, used to fetch file system metadata |
| HDFS | JournalNode | 8485 | dfs.journalnode.rpc-address | RPC service |
| HDFS | JournalNode | 8480 | dfs.journalnode.http-address | HTTP service |
| HDFS | ZKFC | 8019 | dfs.ha.zkfc.port | ZooKeeper FailoverController, used for NameNode HA |
| YARN | ResourceManager | 8032 | yarn.resourcemanager.address | RM's applications manager (ASM) port |
| YARN | ResourceManager | 8030 | yarn.resourcemanager.scheduler.address | Scheduler component IPC port |
| YARN | ResourceManager | 8031 | yarn.resourcemanager.resource-tracker.address | IPC |
| YARN | ResourceManager | 8033 | yarn.resourcemanager.admin.address | IPC |
| YARN | ResourceManager | 8088 | yarn.resourcemanager.webapp.address | HTTP service port |
| YARN | NodeManager | 8040 | yarn.nodemanager.localizer.address | Localizer IPC |
| YARN | NodeManager | 8042 | yarn.nodemanager.webapp.address | HTTP service port |
| YARN | NodeManager | 8041 | yarn.nodemanager.address | Container manager port on the NM |
| YARN | JobHistory Server | 10020 | mapreduce.jobhistory.address | IPC |
| YARN | JobHistory Server | 19888 | mapreduce.jobhistory.webapp.address | HTTP service port |
| HBase | Master | 60000 | hbase.master.port | IPC |
| HBase | Master | 60010 | hbase.master.info.port | HTTP service port |
| HBase | RegionServer | 60020 | hbase.regionserver.port | IPC |
| HBase | RegionServer | 60030 | hbase.regionserver.info.port | HTTP service port |
| HBase | HQuorumPeer | 2181 | hbase.zookeeper.property.clientPort | Only used in HBase-managed ZooKeeper mode; not opened when a standalone ZooKeeper cluster is used |
| HBase | HQuorumPeer | 2888 | hbase.zookeeper.peerport | Only used in HBase-managed ZooKeeper mode; not opened when a standalone ZooKeeper cluster is used |
| HBase | HQuorumPeer | 3888 | hbase.zookeeper.leaderport | Only used in HBase-managed ZooKeeper mode; not opened when a standalone ZooKeeper cluster is used |
| Hive | Metastore | 9083 | export PORT= in /etc/default/hive-metastore to change the default port | |
| Hive | HiveServer2 | 10000 | export HIVE_SERVER2_THRIFT_PORT= in /etc/hive/conf/hive-env.sh to change the default port | |
| ZooKeeper | Server | 2181 | clientPort= in /etc/zookeeper/conf/zoo.cfg | Port that serves client requests |
| ZooKeeper | Server | 2888 | the first nnnnn in server.x=[hostname]:nnnnn[:nnnnn] in /etc/zookeeper/conf/zoo.cfg | Followers use this port to connect to the leader; only the leader listens on it |
| ZooKeeper | Server | 3888 | the second nnnnn in server.x=[hostname]:nnnnn[:nnnnn] in /etc/zookeeper/conf/zoo.cfg | Used for leader election; only needed when electionAlg is 1, 2, or 3 (the default) |
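Since every port in the table is a plain TCP listener, a quick way to check whether a daemon actually came up on its configured port is a simple socket probe. The sketch below is a generic helper, not a Hadoop utility; hostnames in the usage note are hypothetical:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    All of the default ports listed above are TCP, so this works for
    NameNode RPC (8020), the ResourceManager web UI (8088), and so on.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False
```

For example, `port_open("nn-host", 50070)` (with your own NameNode hostname) tells you whether the NameNode web UI is reachable before you try it in a browser.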
All of these ports use TCP.
Every Hadoop daemon with a Web UI (HTTP service) exposes the following URLs:
/logs
List of log files, for download and viewing
/logLevel
Lets you set the log4j logging level, similar to hadoop daemonlog
/stacks
Stack traces of all threads; very helpful for debugging
/jmx
Server-side metrics, output as JSON.
/jmx?qry=Hadoop:* returns all Hadoop-related metrics.
/jmx?get=MXBeanName::AttributeName queries the value of a given attribute of a given bean; for example, /jmx?get=Hadoop:service=NameNode,name=NameNodeInfo::ClusterId returns the ClusterId.
This request is handled by org.apache.hadoop.jmx.JMXJsonServlet.
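The /jmx endpoint returns a JSON object with a top-level "beans" array, where each bean carries its JMX object name under "name" plus its attributes as plain key/value pairs. Below is a small sketch of extracting one attribute from such a response; the sample payload is abridged and made up for illustration:

```python
import json

def jmx_attribute(payload: str, bean_name: str, attribute: str):
    """Pull one attribute out of a /jmx-style JSON response.

    Equivalent in spirit to /jmx?get=<bean_name>::<attribute>,
    but applied client-side to an already-fetched payload.
    """
    for bean in json.loads(payload).get("beans", []):
        if bean.get("name") == bean_name:
            return bean.get(attribute)
    return None

# Abridged, made-up sample of what a NameNode's /jmx might return:
sample = json.dumps({"beans": [
    {"name": "Hadoop:service=NameNode,name=NameNodeInfo",
     "ClusterId": "CID-example"},
]})
```

With the sample above, asking for the ClusterId of the NameNodeInfo bean mirrors the /jmx?get=Hadoop:service=NameNode,name=NameNodeInfo::ClusterId query from the text.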
Individual daemons also expose daemon-specific URL paths with corresponding information.
NameNode: http://<namenode-host>:50070/
/dfshealth.jsp
HDFS information page, with links for browsing the file system
/dfsnodelist.jsp?whatNodes=(DEAD|LIVE)
Shows datanodes in DEAD or LIVE state
/fsck
Runs the fsck command; not recommended while the cluster is busy!
DataNode: http://<datanode-host>:50075/
/blockScannerReport
Each datanode verifies its block information at a fixed interval