Spark Streaming + Flume push mode error: org.jboss.netty.channel.ChannelException: Failed to bind to

1. The Error

ERROR ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - org.jboss.netty.channel.ChannelException: Failed to bind to: 

18/12/21 15:09:02 INFO ReceiverSupervisorImpl: Deregistering receiver 0
18/12/21 15:09:02 ERROR ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - org.jboss.netty.channel.ChannelException: Failed to bind to: gulfmoon/192.168.122.210:41414
	at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
	at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
	at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
	at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
	at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
	at org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:162)
	at org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:169)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:148)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:130)
	at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverTrackerEndpoint$$anonfun$9.apply(ReceiverTracker.scala:575)
	at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverTrackerEndpoint$$anonfun$9.apply(ReceiverTracker.scala:565)
	at org.apache.spark.SparkContext$$anonfun$37.apply(SparkContext.scala:1992)
	at org.apache.spark.SparkContext$$anonfun$37.apply(SparkContext.scala:1992)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.BindException: Cannot assign requested address: bind
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:433)
	at sun.nio.ch.Net.bind(Net.java:425)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:372)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:296)
	at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
	... 3 more

18/12/21 15:09:02 INFO ReceiverSupervisorImpl: Stopped receiver 0

2. Root Cause Analysis

The error message seems straightforward: binding to the host failed. My first thought was a port conflict, but checking with `netstat -anp | grep` showed the port was not in use, and switching to a different port and restarting the program produced exactly the same error. So this was not as simple as an occupied port.
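In fact, the stack trace bottoms out in `java.net.BindException: Cannot assign requested address`, which the OS raises when a listening socket is bound to an IP address that does not belong to the local machine; a genuine port conflict would instead say "Address already in use". The difference is easy to reproduce with plain sockets, nothing Spark-specific (a small Python sketch; the addresses are illustrative):

```python
import socket

def try_bind(host, port=0):
    """Try to bind a TCP listener on host; return the OS error text, or None on success."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((host, port))  # port 0 lets the OS pick any free port
        return None
    except OSError as exc:
        return exc.strerror
    finally:
        sock.close()

# A local address binds fine, whatever the port.
print(try_bind("127.0.0.1"))

# An address this machine does NOT own fails with the same error as the
# stack trace ("Cannot assign requested address"), regardless of the port.
# 192.0.2.1 is a documentation-only (TEST-NET-1) address, never local.
print(try_bind("192.0.2.1"))
```

This is why changing the port did not help: the problem was the address, not the port.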

Just as I was about to give up, I remembered the official documentation, so I opened the link below hoping to find a clue.

https://spark.apache.org/docs/1.6.2/streaming-flume-integration.html 

One sentence in the guide caught my attention:

When your Flume + Spark Streaming application is launched, one of the Spark workers must run on that machine.

I was debugging on my local Windows machine, but the host being bound was the server's IP and port, which is not the local machine at all. In push mode the Spark receiver itself opens the listener on that address, so this setup could only work if the job were submitted on the server with spark-submit, i.e. with a Spark worker actually running on the bound host.
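Put differently, in push mode Spark Streaming's FlumeReceiver starts an Avro server and binds host:port itself, while Flume's avro sink connects to it and pushes events. The Flume side looks roughly like this (property names per the Flume avro sink; agent, sink, and channel names are illustrative):

```
# Push mode: Flume's avro sink connects OUT to the Spark receiver.
# hostname/port must match what the Spark receiver binds, and a Spark
# worker must be running on that host before events start flowing.
agent.sinks.avroSink.type = avro
agent.sinks.avroSink.channel = memoryChannel
agent.sinks.avroSink.hostname = <receiver-host>
agent.sinks.avroSink.port = 41414
```

So the address passed to the Spark program is a bind address, not a remote target, and it must be local to wherever the receiver runs.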

3. Solution

For the local debugging phase, I changed the program's input arguments so that the receiver binds to an address on the local machine:

With that change, it worked.
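A minimal sketch of what the receiver setup needs to look like when debugging locally (PySpark Flume API from the Spark 1.x line; the hostname and port here are illustrative, not from the original post). The key point is that the bind address must belong to the machine the receiver runs on:

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.flume import FlumeUtils

sc = SparkContext("local[2]", "FlumePushDebug")
ssc = StreamingContext(sc, 5)

# Push mode: the receiver itself binds this host:port and Flume's avro
# sink pushes events to it, so the address must be local to this process.
# "0.0.0.0" listens on all local interfaces; 41414 is an example port.
stream = FlumeUtils.createStream(ssc, "0.0.0.0", 41414)
stream.count().pprint()

ssc.start()
ssc.awaitTermination()
```

Running this requires a Spark installation plus the spark-streaming-flume package, and the Flume agent's avro sink must then point at this machine's reachable IP and the same port.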


P.S. With Flume's pull mode, you can specify a server other than the local machine.
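Pull mode works the other way around: Flume writes events into a custom SparkSink running on the Flume host, and Spark connects out to it with FlumeUtils.createPollingStream, so the configured address is a remote target rather than a local bind address. The Flume-side sink config is roughly (per the Spark 1.6 Flume integration guide; names illustrative):

```
# Pull mode: Flume buffers events in a SparkSink; Spark polls them out.
agent.sinks.sparkSink.type = org.apache.spark.streaming.flume.sink.SparkSink
agent.sinks.sparkSink.hostname = <flume-host>
agent.sinks.sparkSink.port = 41415
agent.sinks.sparkSink.channel = memoryChannel
```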


Reposted from blog.csdn.net/u011817217/article/details/85163376