1. Introduction
A distributed network communication framework is one of the basic building blocks of any distributed system. Spark is a general-purpose distributed computing system, and being distributed, its nodes must communicate heavily; Spark's components therefore talk to each other point-to-point via RPC (Remote Procedure Call):
1. Driver and master: for example, the driver sends a RegisterApplication message to the master.
2. Master and worker: for example, a worker reports to the master the executors running on it.
3. Executor and driver: executors run on workers, Spark tasks are distributed to the executors, and each executor must send its task results back to the driver.
4. Worker and worker: while running, a task may need to fetch data produced by tasks in executors on other workers, so it fetches that data from those workers.
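As an illustration, the four paths can be sketched as plain message classes. These are hypothetical, simplified stand-ins; Spark's real messages are Scala case classes in org.apache.spark.deploy.DeployMessages.

```java
// Hypothetical, simplified message types for the four communication paths
// above (the real definitions live in org.apache.spark.deploy.DeployMessages).
class RegisterApplication {          // 1. driver -> master
    final String appName;
    RegisterApplication(String appName) { this.appName = appName; }
}
class Heartbeat {                    // 2. worker -> master (liveness report)
    final String workerId;
    Heartbeat(String workerId) { this.workerId = workerId; }
}
class StatusUpdate {                 // 3. executor -> driver (task state/result)
    final long taskId;
    StatusUpdate(long taskId) { this.taskId = taskId; }
}
class FetchRequest {                 // 4. worker -> worker (shuffle data fetch)
    final String blockId;
    FetchRequest(String blockId) { this.blockId = blockId; }
}
```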
Before Spark 1.6, Spark's RPC was implemented on top of Akka, an asynchronous messaging framework written in Scala. From Spark 1.6 onward, Spark implements its RPC framework on Netty, while borrowing from Akka's design.
1. Akka library (Scala): the concrete implementation of the network communication layer in Spark 1.x.
2. Netty library (Java): the concrete implementation of the network communication layer in Spark 2.x.
Earlier Spark versions used the Netty framework for bulk data transfer and Akka for RPC communication.
Spark RPC component structure diagram:
Explanation:
1. TransportContext contains TransportConf and RpcHandler.
2. TransportConf is required to create both TransportClientFactory and TransportServer.
3. RpcHandler is used only when creating TransportServer.
4. TransportClientFactory is the client-side factory class of Spark RPC.
5. TransportServer is the server implementation of Spark RPC.
6. TransportClient is the client implementation of Spark RPC.
7. clientPool is an internal component of TransportClientFactory that maintains the pool of TransportClients.
8. TransportServerBootstrap is the Spark RPC server bootstrap.
9. TransportClientBootstrap is the Spark RPC client bootstrap.
10. MessageEncoder and MessageDecoder are the message codecs.
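The creation relationships above can be sketched with toy stubs. These are not Spark's real API; the class names are borrowed for illustration only, and the bodies are minimal placeholders.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stubs mirroring the wiring in the diagram: TransportConf + RpcHandler
// go into TransportContext, which creates the client factory and the server;
// the factory keeps a pool of clients.
class TransportConf { final String module; TransportConf(String m) { module = m; } }
class RpcHandler { String receive(String msg) { return "handled: " + msg; } }
class TransportClient { final int id; TransportClient(int id) { this.id = id; } }

class TransportClientFactory {
    // clientPool: cached clients, one per remote host in this sketch
    private final Map<String, TransportClient> clientPool = new HashMap<>();
    TransportClient createClient(String host) {
        return clientPool.computeIfAbsent(host, h -> new TransportClient(clientPool.size()));
    }
}
class TransportServer {
    final RpcHandler handler;
    TransportServer(RpcHandler h) { handler = h; }
}
class TransportContext {
    final TransportConf conf; final RpcHandler rpcHandler;
    TransportContext(TransportConf c, RpcHandler r) { conf = c; rpcHandler = r; }
    // TransportConf alone suffices for the client factory...
    TransportClientFactory createClientFactory() { return new TransportClientFactory(); }
    // ...while the server additionally needs the RpcHandler
    TransportServer createServer() { return new TransportServer(rpcHandler); }
}
```

Note how the asymmetry in points 2 and 3 shows up directly: createClientFactory never touches the RpcHandler, while createServer does.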
2. Overall Architecture
The following introduces the important component objects of Spark RPC, illustrated with Spark code.
Create the Spark environment:
val conf: SparkConf = new SparkConf()
val sparkSession = SparkSession.builder().config(conf).master("local[*]").appName("NX RPC").getOrCreate()
val sparkContext: SparkContext = sparkSession.sparkContext
val sparkEnv: SparkEnv = sparkContext.env
RpcEnv
RpcEnv provides the environment in which an RpcEndpoint processes messages. RpcEnv manages the entire lifecycle of each RpcEndpoint: registering endpoints, routing messages between endpoints, and stopping endpoints.
// RpcEnv.create, defined on the RpcEnv companion object:
def create(
    name: String,
    bindAddress: String,
    advertiseAddress: String,
    port: Int,
    conf: SparkConf,
    securityManager: SecurityManager,
    numUsableCores: Int,
    clientMode: Boolean): RpcEnv = {
  val config = RpcEnvConfig(conf, name, bindAddress, advertiseAddress, port, securityManager,
    numUsableCores, clientMode)
  new NettyRpcEnvFactory().create(config)
}
RpcEndpoint
Represents an individual that needs to communicate (such as a master, worker, or driver) and processes messages according to what it receives. An RpcEndpoint goes through, in order: construction -> onStart -> receive -> onStop.
onStart is invoked before any task message is received; receive and receiveAndReply handle messages that another RpcEndpoint (possibly itself) delivered via send and ask, respectively.
rpcEnv.setupEndpoint(name: String, endpoint: RpcEndpoint): RpcEndpointRef
class HelloEndPoint(override val rpcEnv: RpcEnv) extends RpcEndpoint {
  // Runs once, automatically, when the instance is constructed
  // (similar to preStart() in an Akka actor)
  override def onStart(): Unit = {
  }
  // Handles messages delivered with send (fire-and-forget)
  override def receive: PartialFunction[Any, Unit] = {
    case ...
  }
  // Handles messages delivered with ask (the caller expects a reply)
  override def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit] = {
    case ...
  }
  override def onStop(): Unit = {
    ...
  }
}
RpcEndpointRef
RpcEndpointRef is a reference to a remote RpcEndpoint. When we need to send a message to a specific RpcEndpoint, we generally first obtain a reference to it and then send the message through that reference.
rpcEnv.setupEndpointRef(address: RpcAddress, endpointName: String): RpcEndpointRef = {
setupEndpointRefByURI(RpcEndpointAddress(address, endpointName).toString)
}
RpcAddress
Represents the address of a remote RpcEndpointRef: host + port.
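A minimal model of this pairing, assuming only what is described above (the real RpcAddress is a Scala case class):

```java
// Minimal model of RpcAddress: a host + port pair, rendered in the
// spark:// form used when looking up remote endpoints.
class RpcAddress {
    final String host;
    final int port;
    RpcAddress(String host, int port) { this.host = host; this.port = port; }
    String hostPort()   { return host + ":" + port; }
    String toSparkURL() { return "spark://" + hostPort(); }
}
```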
3. Spark Standalone Example
Next, let's look at a concrete case: in standalone mode, the worker periodically sends a heartbeat message (SendHeartbeat) to the master. How does the heartbeat message get from the worker to the master, and how does the master receive it?
- First, find the main function of the Worker class, which creates the RpcEnv and the endpoint:
val rpcEnv = startRpcEnvAndEndpoint(args.host, args.port, args.webUiPort, args.cores, args.memory, args.masters, args.workDir,
conf = conf)
- In the startRpcEnvAndEndpoint() method, rpcEnv registers an endpoint (the Worker instance):
rpcEnv.setupEndpoint(ENDPOINT_NAME, new Worker(rpcEnv, webUiPort, cores,
memory, masterAddresses, ENDPOINT_NAME, workDir, conf, securityMgr))
- Now look at the Worker constructor, which defines the forwordMessageScheduler thread, the send interval, and so on (only some of the fields are shown here):
private val forwordMessageScheduler = ThreadUtils.newDaemonSingleThreadScheduledExecutor("worker-forward-message-scheduler")
// 1/4 of the heartbeat timeout
private val HEARTBEAT_MILLIS = conf.getLong("spark.worker.timeout", 60) * 1000 / 4
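With the default spark.worker.timeout of 60 seconds, this works out to a 15-second heartbeat interval. A tiny sketch of the arithmetic:

```java
// HEARTBEAT_MILLIS = spark.worker.timeout (seconds) * 1000 / 4.
// The worker heartbeats four times per timeout window, so a single
// delayed heartbeat alone cannot get the worker marked as dead.
class HeartbeatInterval {
    static long heartbeatMillis(long timeoutSeconds) {
        return timeoutSeconds * 1000 / 4;
    }
}
```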
- Once construction completes, the Worker's onStart() method is invoked, which calls registerWithMaster() to register with the Master.
It starts a timer that sends the worker itself a ReregisterWithMaster message, which is then handled by the reregisterWithMaster() method:
onStart(){
registerWithMaster(){
registrationRetryTimer = Some(forwordMessageScheduler.scheduleAtFixedRate(new Runnable {
override def run(): Unit = Utils.tryLogNonFatalError {
Option(self).foreach(_.send(ReregisterWithMaster))
}
}, INITIAL_REGISTRATION_RETRY_INTERVAL_SECONDS, INITIAL_REGISTRATION_RETRY_INTERVAL_SECONDS, TimeUnit.SECONDS))
}
}
The reregisterWithMaster() method submits a task to a thread pool that sends the RegisterWorker message to the master:
reregisterWithMaster(){
Array(registerMasterThreadPool.submit(new Runnable {
override def run(): Unit = {
try {
/**
 * 1. First obtain the Master's EndpointRef
 * 2. Then send the RegisterWorker registration message to the Master
 */
val masterEndpoint = rpcEnv.setupEndpointRef(masterAddress, Master.ENDPOINT_NAME)
sendRegisterMessageToMaster(masterEndpoint)
}
}}))}
sendRegisterMessageToMaster{
masterEndpoint.send(RegisterWorker(workerId, host, port, self, cores, memory, workerWebUiUrl, masterEndpoint.address))
}
- The message reaches the Master's main class, which performs the registration; on success, it sends a RegisteredWorker message back to the worker.
- Afterwards, the worker periodically (every HEARTBEAT_MILLIS) sends itself a SendHeartbeat message. On receiving it, the worker calls its sendToMaster function to forward a Heartbeat message (containing the worker ID and the current worker's RpcEndpoint reference) to the master.
case SendHeartbeat => if (connected) {
// runs every 15 s by default
sendToMaster(Heartbeat(workerId, self))
}
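The send-to-self loop can be simulated with a plain JDK scheduler. This is a sketch: the mailbox, the sentinel message object, and the shortened 10 ms interval are all stand-ins chosen so the example finishes quickly; Spark's forwordMessageScheduler wraps the same ScheduledExecutorService machinery.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the worker's self-heartbeat: a single-threaded scheduler
// periodically drops SendHeartbeat into the endpoint's own mailbox.
class SelfHeartbeat {
    static final Object SEND_HEARTBEAT = new Object();

    // Runs the scheduler until at least `beats` heartbeats have been
    // enqueued, then returns the number of queued messages.
    static int runFor(int beats) {
        LinkedBlockingQueue<Object> mailbox = new LinkedBlockingQueue<>();
        CountDownLatch latch = new CountDownLatch(beats);
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            mailbox.offer(SEND_HEARTBEAT);  // "send to self", like self.send(SendHeartbeat)
            latch.countDown();
        }, 0, 10, TimeUnit.MILLISECONDS);   // 10 ms here instead of HEARTBEAT_MILLIS
        try {
            latch.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        scheduler.shutdownNow();
        return mailbox.size();
    }
}
```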
- In the worker's sendToMaster function, the message goes out via masterRef.send(message). What does that call do behind the scenes? NettyRpcEnv implements send as follows:
nettyEnv.send(new RequestMessage(nettyEnv.address, this, message))
- As shown, the current sender address (nettyEnv.address), the target master reference (this), and the message being sent (SendHeartbeat) are wrapped into a RequestMessage. For a remote RPC call, send ultimately invokes the postToOutbox function, at which point the message is serialized into a byte stream.
private[netty] def send(message: RequestMessage): Unit = {
val remoteAddr = message.receiver.address
if (remoteAddr == address) {
// Message to a local RPC endpoint.
try {
// Note: send() delivers a OneWayMessage
dispatcher.postOneWayMessage(message)
} catch {
case e: RpcEnvStoppedException => logDebug(e.getMessage)
}
} else {
// Message to a remote RPC endpoint.
postToOutbox(message.receiver, OneWayOutboxMessage(message.serialize(this)))
}
}
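The local-versus-remote branch can be modeled without Netty. This is a toy sketch under simplifying assumptions: addresses are plain strings and the outbox is just a list, standing in for the dispatcher and Outbox of the real code.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Toy model of NettyRpcEnv.send's routing decision: messages addressed to
// the local RpcEnv go straight to the dispatcher; everything else is
// serialized and queued on an outbox bound for the remote peer.
class SendRouter {
    final String localAddress;
    final List<String> dispatched = new ArrayList<>();  // local deliveries
    final List<byte[]> outbox = new ArrayList<>();      // serialized remote messages

    SendRouter(String localAddress) { this.localAddress = localAddress; }

    void send(String receiverAddress, String message) {
        if (receiverAddress.equals(localAddress)) {
            dispatched.add(message);                    // local endpoint: no serialization
        } else {
            outbox.add(message.getBytes(StandardCharsets.UTF_8));  // bytes for the wire
        }
    }
}
```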
- In the postToOutbox function, the message passes through OutboxMessage's sendWith method and is ultimately sent via TransportClient's send method (client.send(content)); TransportClient wraps the message further and then sends it to the master.
public void send(ByteBuffer message) {
channel.writeAndFlush(new OneWayMessage(new NioManagedBuffer(message)));
}
- On the master side, in TransportRequestHandler's handle method: since the heartbeat was wrapped as a OneWayMessage on the worker side, handle dispatches it to processOneWayMessage.
@Override
public void handle(RequestMessage request) {
if (request instanceof ChunkFetchRequest) {
processFetchRequest((ChunkFetchRequest) request);
} else if (request instanceof RpcRequest) {
processRpcRequest((RpcRequest) request);
} else if (request instanceof OneWayMessage) {
processOneWayMessage((OneWayMessage) request);
} else if (request instanceof StreamRequest) {
processStreamRequest((StreamRequest) request);
} else if (request instanceof UploadStream) {
processStreamUpload((UploadStream) request);
} else {
throw new IllegalArgumentException("Unknown request type: " + request);
}
}
- The processOneWayMessage function calls the receive method of NettyRpcHandler, the RpcHandler implementation. In that method, internalReceive first unpacks the message into a RequestMessage; the dispatcher then routes that message to the corresponding endpoint.
override def receive(client: TransportClient, message: ByteBuffer): Unit = {
val messageToDispatch = internalReceive(client, message)
dispatcher.postOneWayMessage(messageToDispatch)
}
- So how is the message dispatched? In the Dispatcher's postMessage method, the EndpointData for the target endpoint is looked up first, the message is posted into that endpoint's inbox (the master's, in this example), and finally the endpoint is offered onto the receivers blocking queue.
private def postMessage(endpointName: String, message: InboxMessage, callbackIfStopped: (Exception) => Unit): Unit = {
val error = synchronized {
val data = endpoints.get(endpointName)
if (stopped) {
Some(new RpcEnvStoppedException())
} else if (data == null) {
Some(new SparkException(s"Could not find $endpointName."))
} else {
data.inbox.post(message)
receivers.offer(data)
None
}
} // We don't need to call `onStop` in the `synchronized` block
error.foreach(callbackIfStopped)
}
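The posting side can be sketched as a map of inboxes plus a queue of endpoints with pending work. This is a toy model of the pattern: error handling is reduced to a returned message string instead of the onStop callback.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Toy version of Dispatcher.postMessage: look up the endpoint's data, put
// the message in its inbox, then offer the endpoint onto `receivers` so a
// MessageLoop thread will pick it up.
class ToyDispatcher {
    static class EndpointData {
        final String name;
        final LinkedBlockingQueue<Object> inbox = new LinkedBlockingQueue<>();
        EndpointData(String name) { this.name = name; }
    }

    final ConcurrentHashMap<String, EndpointData> endpoints = new ConcurrentHashMap<>();
    final LinkedBlockingQueue<EndpointData> receivers = new LinkedBlockingQueue<>();

    // Returns an error message, or null on success.
    String postMessage(String endpointName, Object message) {
        EndpointData data = endpoints.get(endpointName);
        if (data == null) return "Could not find " + endpointName + ".";
        data.inbox.offer(message);  // into the target endpoint's mailbox
        receivers.offer(data);      // mark the endpoint as having pending work
        return null;
    }
}
```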
- And how are the queued messages consumed? The Dispatcher owns a thread pool; in the run method of the MessageLoop class, each thread takes an entry off receivers and hands its inbox to the process method. When no message has arrived, the thread blocks on take().
private class MessageLoop extends Runnable {
override def run(): Unit = {
try {
while (true) {
try {
// take the next endpoint with pending messages
val data = receivers.take()
if (data == PoisonPill) {
receivers.offer(PoisonPill)
return
}
// process its inbox
data.inbox.process(Dispatcher.this)
} catch {
case NonFatal(e) => logError(e.getMessage, e)
}
}
}
}
}
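The consuming side, including the PoisonPill shutdown convention, can be sketched like this. It is a toy single-thread version; the real Dispatcher runs several MessageLoop threads, which is exactly why the pill is re-offered before a thread exits.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;

// Toy MessageLoop: blocks on take(), processes messages, and re-offers
// PoisonPill before exiting so that sibling loop threads also shut down.
class ToyMessageLoop {
    static final Object POISON_PILL = new Object();

    // Consumes the queue until PoisonPill, returning the processed messages.
    static List<Object> drain(LinkedBlockingQueue<Object> receivers) {
        List<Object> processed = new ArrayList<>();
        Thread loop = new Thread(() -> {
            try {
                while (true) {
                    Object data = receivers.take();   // blocks until work arrives
                    if (data == POISON_PILL) {
                        receivers.offer(POISON_PILL); // pass the pill on, then exit
                        return;
                    }
                    processed.add(data);              // stand-in for inbox.process(...)
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        loop.start();
        try {
            loop.join(5000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed;
    }
}
```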
- In the inbox's process method, the message is taken out and, according to its type (OneWayMessage in this example), the endpoint's receive method is invoked (that is, the master's receive method). With that, one complete RPC call is finished.
case OneWayMessage(_sender, content) => endpoint.receive.applyOrElse[Any, Unit](content, { msg =>
throw new SparkException(s"Unsupported message $message from ${_sender}")
})
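The applyOrElse call above is Scala's PartialFunction machinery: run the handler if it matches, otherwise run the fallback. Its contract can be illustrated in plain Java with a hypothetical endpoint whose receive method reports whether the message matched:

```java
import java.util.function.Consumer;

// Sketch of the applyOrElse contract: try the endpoint's receive handler;
// run the fallback only when no case matched the message.
class ToyEndpoint {
    String lastHeartbeatFrom = null;

    // returns true when the message matched a case, like PartialFunction.isDefinedAt
    boolean receive(Object message) {
        if (message instanceof String && ((String) message).startsWith("Heartbeat:")) {
            lastHeartbeatFrom = ((String) message).substring("Heartbeat:".length());
            return true;
        }
        return false;
    }

    void applyOrElse(Object message, Consumer<Object> fallback) {
        if (!receive(message)) fallback.accept(message);
    }
}
```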
- On receiving the heartbeat message, the master updates that worker's workerInfo.lastHeartbeat.
Summary
This article briefly covered Spark's history, its network communication architecture, the main roles in that architecture, and how each role is created, then used a concrete Heartbeat example for a more in-depth analysis. I hope it deepens your understanding of Spark. That's all for this article; if you found it interesting, feel free to follow and like. Thanks.