RPC (Remote Procedure Call) lets one JVM invoke methods on a Java object running in another JVM. RPC follows a client/server model, so using it involves three pieces: the server-side code, the client-side code, and the remote procedure (protocol) they share. HDFS is built on exactly this mechanism. This article implements a simple RPC program and uses it to analyze how HDFS works.
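Before walking through the Hadoop version, the core idea can be sketched in plain Java. Hadoop's client-side `RPC.getProxy` hands back a dynamic proxy that intercepts every method call on the protocol interface. The sketch below is a hypothetical, in-process illustration of that mechanism (the names `MiniRpcDemo`, `Protocol`, and `getProxy` are mine, not Hadoop's); a real RPC framework would serialize the call and send it over a socket instead of invoking the target directly.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// In-process sketch of the RPC client/server pattern: the client holds only
// the interface; a dynamic proxy intercepts each call and forwards it to the
// "remote" implementation.
public class MiniRpcDemo {
    interface Protocol {
        String mkdirs(String path);
    }

    // Server-side implementation, analogous to NNServer below.
    static class ServerImpl implements Protocol {
        public String mkdirs(String path) {
            return "created " + path;
        }
    }

    // Returns a client-side proxy that forwards every method call.
    // A real framework would serialize m.getName() and args, send them
    // over a TCP connection, and deserialize the reply here.
    static Protocol getProxy(Protocol remote) {
        InvocationHandler handler =
                (Object p, Method m, Object[] args) -> m.invoke(remote, args);
        return (Protocol) Proxy.newProxyInstance(
                Protocol.class.getClassLoader(),
                new Class<?>[] { Protocol.class },
                handler);
    }

    public static void main(String[] args) {
        Protocol client = getProxy(new ServerImpl());
        System.out.println(client.mkdirs("/input")); // prints "created /input"
    }
}
```

The client code never references `ServerImpl` directly, only the `Protocol` interface, which is the same decoupling the Hadoop example below achieves with `RPCProtocol`.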
(1) Add the required dependencies to the pom.xml file:
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.1.3</version>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.7.30</version>
    </dependency>
</dependencies>
(2) Under the project's src/main/resources directory, create a file named "log4j.properties" with the following content:
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d%p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d%p [%c] - %m%n
(3) Create the RPC protocol interface:
package com.c21.demo;

// RPC protocol interface shared by the server and the client
public interface RPCProtocol {
    // Version ID; Hadoop RPC uses it to check client/server compatibility
    long versionID = 666;

    // Create a directory
    void mkdirs(String path);
}
(4) Create the RPC server:
package com.c21.demo;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.Server;
import java.io.IOException;

// RPC server implementing the RPCProtocol interface
public class NNServer implements RPCProtocol {
    // Implement the protocol method
    @Override
    public void mkdirs(String path) {
        System.out.println("Server: creating path " + path);
    }

    public static void main(String[] args) throws IOException {
        // Build the RPC server
        Server server = new RPC.Builder(new Configuration())
                .setBindAddress("localhost")
                .setPort(8888)
                .setProtocol(RPCProtocol.class)
                .setInstance(new NNServer())
                .build();
        System.out.println("Server is running");
        // Start the server
        server.start();
    }
}
(5) Create the HDFSClient client:
package com.c21.demo;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import java.io.IOException;
import java.net.InetSocketAddress;

// RPC client
public class HDFSClient {
    public static void main(String[] args) throws IOException {
        // Obtain a proxy for the server-side protocol object
        RPCProtocol client = RPC.getProxy(
                RPCProtocol.class,
                RPCProtocol.versionID,
                new InetSocketAddress("localhost", 8888),
                new Configuration());
        System.out.println("I am the client");
        // Invoke the remote mkdirs method on the server
        client.mkdirs("/input");
        // Release the proxy's connection resources
        RPC.stopProxy(client);
    }
}
(6) Run the server first; its startup message appears in the console. Running jps in a terminal confirms the NNServer process is alive. Then run the client: the client prints its own message, and the server console shows that the /input path was created.
Below is the NameNode initialization source code (a decompiled snippet from an earlier Hadoop release; the overall structure is the same):
private void initialize(Configuration conf) throws IOException {
    InetSocketAddress socAddr = getAddress(conf);
    UserGroupInformation.setConfiguration(conf);
    SecurityUtil.login(conf, "dfs.namenode.keytab.file",
            "dfs.namenode.kerberos.principal", socAddr.getHostName());
    int handlerCount = conf.getInt("dfs.namenode.handler.count", 10);
    if (this.serviceAuthEnabled = conf.getBoolean("hadoop.security.authorization", false)) {
        PolicyProvider policyProvider = (PolicyProvider) ReflectionUtils.newInstance(
                conf.getClass("hadoop.security.authorization.policyprovider",
                        HDFSPolicyProvider.class, PolicyProvider.class), conf);
        ServiceAuthorizationManager.refresh(conf, policyProvider);
    }
    myMetrics = NameNodeInstrumentation.create(conf);
    this.namesystem = new FSNamesystem(this, conf);
    boolean alwaysUseDelegationTokensForTests =
            conf.getBoolean("dfs.namenode.delegation.token.always-use", false);
    if (UserGroupInformation.isSecurityEnabled() || alwaysUseDelegationTokensForTests) {
        this.namesystem.activateSecretManager();
    }
    // Optional separate RPC server for service (DataNode) requests
    InetSocketAddress dnSocketAddr = this.getServiceRpcServerAddress(conf);
    if (dnSocketAddr != null) {
        int serviceHandlerCount = conf.getInt("dfs.namenode.service.handler.count", 10);
        this.serviceRpcServer = RPC.getServer(this, dnSocketAddr.getHostName(),
                dnSocketAddr.getPort(), serviceHandlerCount, false, conf,
                this.namesystem.getDelegationTokenSecretManager());
        this.serviceRPCAddress = this.serviceRpcServer.getListenerAddress();
        this.setRpcServiceServerAddress(conf);
    }
    // Main client-facing RPC server, created and started like the demo above
    this.server = RPC.getServer(this, socAddr.getHostName(), socAddr.getPort(),
            handlerCount, false, conf,
            this.namesystem.getDelegationTokenSecretManager());
    this.server.addTerseExceptions(new Class[]{SafeModeException.class});
    this.serverAddress = this.server.getListenerAddress();
    FileSystem.setDefaultUri(conf, getUri(this.serverAddress));
    LOG.info("Namenode up at: " + this.serverAddress);
    this.startHttpServer(conf);
    this.server.start();
    if (this.serviceRpcServer != null) {
        this.serviceRpcServer.start();
    }
    this.startTrashEmptier(conf);
    this.plugins = conf.getInstances("dfs.namenode.plugins", ServicePlugin.class);
    Iterator i$ = this.plugins.iterator();
    while (i$.hasNext()) {
        ServicePlugin p = (ServicePlugin) i$.next();
        try {
            p.start(this);
        } catch (Throwable var9) {
            LOG.warn("ServicePlugin " + p + " could not be started", var9);
        }
    }
}
Notice that this initialization code creates RPC servers and starts them in the same way as our NNServer example: the NameNode is itself an RPC server. When HDFSClient issues a create-directory command, the server receives the command over RPC and creates the directory.