Analyzing and Handling Connection Refused Errors in Large-Scale Data Processing

Copyright notice: This is an original article by the blogger and may not be reproduced without permission. https://blog.csdn.net/jeffiny/article/details/79550796

1. The job processes several hundred GB of data, turning it into more than 10,000 features computed per phone number.

2. Processing environment:

    spark-2.0.2

    spark-submit flags: --executor-memory 40g --total-executor-cores 120 --driver-memory 40g

3. The errors reported

org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 774
        at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$2.apply(MapOutputTracker.scala:695)
        at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$2.apply(MapOutputTracker.scala:691)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:691)
        at org.apache.spark.MapOutputTracker.getMapSizesByExecutorId(MapOutputTracker.scala:145)
        at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:49)
        at org.apache.spark.sql.execution.ShuffledRowRDD.compute(ShuffledRowRDD.scala:169)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
        at org.apache.spark.scheduler.Task.run(Task.scala:86)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

)
18/03/14 01:15:18 INFO scheduler.TaskSetManager: Task 151.1 in stage 1595.0 (TID 179950) failed, but another instance of the task has already succeeded, so not re-queuing the task to be re-executed.
18/03/14 01:15:18 INFO scheduler.DAGScheduler: Marking ShuffleMapStage 1595 (csv at FeatherAnalyseOnline.java:65) as failed due to a fetch failure from ShuffleMapStage 1594 (csv at FeatherAnalyseOnline.java:65)
18/03/14 01:15:18 INFO scheduler.DAGScheduler: ShuffleMapStage 1595 (csv at FeatherAnalyseOnline.java:65) failed in 65.360 s
18/03/14 01:15:18 INFO scheduler.DAGScheduler: Resubmitting ShuffleMapStage 1594 (csv at FeatherAnalyseOnline.java:65) and ShuffleMapStage 1595 (csv at FeatherAnalyseOnline.java:65) due to fetch failure
18/03/14 01:15:18 INFO storage.BlockManagerInfo: Added broadcast_890_piece0 in memory on 192.168.200.172:42101 (size: 24.3 KB, free: 21.3 GB)
18/03/14 01:15:18 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 827 to 192.168.200.172:32868
18/03/14 01:15:18 INFO spark.MapOutputTrackerMaster: Size of output statuses for shuffle 827 is 1082 bytes
18/03/14 01:15:18 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 828 to 192.168.200.172:32868
18/03/14 01:15:18 INFO spark.MapOutputTrackerMaster: Size of output statuses for shuffle 828 is 3991 bytes
18/03/14 01:15:18 INFO scheduler.DAGScheduler: Resubmitting failed stages
18/03/14 01:15:18 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 1516.1 (TID 179953, 192.168.200.168, partition 49, PROCESS_LOCAL, 5699 bytes)
18/03/14 01:15:18 INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 179953 on executor id: 6 hostname: 192.168.200.168.
18/03/14 01:15:18 WARN scheduler.TaskSetManager: Lost task 70.1 in stage 1595.0 (TID 179951, 192.168.200.168): FetchFailed(null, shuffleId=774, mapId=-1, reduceId=70, message=
org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 774
        [stack trace identical to the one above]
)
18/03/14 01:15:18 INFO scheduler.TaskSetManager: Task 70.1 in stage 1595.0 (TID 179951) failed, but another instance of the task has already succeeded, so not re-queuing the task to be re-executed.
18/03/14 01:15:18 INFO scheduler.DAGScheduler: Resubmitting ShuffleMapStage 1594 (csv at FeatherAnalyseOnline.java:65) and ShuffleMapStage 1595 (csv at FeatherAnalyseOnline.java:65) due to fetch failure
18/03/14 01:15:18 INFO storage.BlockManagerInfo: Added broadcast_890_piece0 in memory on 192.168.200.168:41381 (size: 24.3 KB, free: 21.2 GB)
18/03/14 01:15:18 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 827 to 192.168.200.168:59086
18/03/14 01:15:18 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 828 to 192.168.200.168:59086
18/03/14 01:15:18 INFO scheduler.DAGScheduler: Resubmitting failed stages
18/03/14 01:15:18 INFO storage.BlockManagerInfo: Added broadcast_884_piece0 in memory on 192.168.200.165:36764 (size: 8.3 KB, free: 19.7 GB)
18/03/14 01:15:18 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 145 to 192.168.200.169:52372
18/03/14 01:15:18 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 145 to 192.168.200.170:48698
18/03/14 01:15:18 INFO spark.MapOutputTrackerMaster: Size of output statuses for shuffle 145 is 4770 bytes
18/03/14 01:15:18 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 145 to 192.168.200.175:47132
18/03/14 01:15:18 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 1516.1 (TID 179954, 192.168.200.168, partition 64, PROCESS_LOCAL, 5699 bytes)
18/03/14 01:15:18 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 145 to 192.168.200.172:32868
18/03/14 01:15:18 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 145 to 192.168.200.166:44376
18/03/14 01:15:18 INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 179954 on executor id: 6 hostname: 192.168.200.168.
18/03/14 01:15:18 WARN scheduler.TaskSetManager: Lost task 40.0 in stage 1595.0 (TID 178297, 192.168.200.168): FetchFailed(BlockManagerId(4, 192.168.200.164, 34412), shuffleId=774, mapId=50, reduceId=40, message=
org.apache.spark.shuffle.FetchFailedException: Failed to connect to /192.168.200.164:34412
        at org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:357)
        at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:332)
        at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:54)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
        at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4.apply(HashAggregateExec.scala:96)
        at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4.apply(HashAggregateExec.scala:94)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:785)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:785)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
        at org.apache.spark.scheduler.Task.run(Task.scala:86)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Failed to connect to /192.168.200.164:34412
        at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:228)
        at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:179)
        at org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:96)
        at org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:140)
        at org.apache.spark.network.shuffle.RetryingBlockFetcher.access$200(RetryingBlockFetcher.java:43)
        at org.apache.spark.network.shuffle.RetryingBlockFetcher$1.run(RetryingBlockFetcher.java:170)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        ... 3 more
Caused by: java.net.ConnectException: Connection refused: /192.168.200.164:34412
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        ... 1 more

)

4. Handling the errors

A shuffle has two parts: shuffle write and shuffle read.
The number of shuffle-write partitions is determined by the partition count of the previous stage's RDD, while the number of shuffle-read partitions is controlled by Spark configuration parameters.

Shuffle write can be loosely understood as a saveAsLocalDiskFile-style operation: intermediate results are written, according to some partitioning rule, to the local disks of the executors.

On the shuffle-read side, if the partition-count parameter is set very small while the volume of shuffle data is large, a single task ends up having to process an enormous amount of data. That can crash the JVM, so fetching the shuffle data fails and the executor is lost with it; this is exactly the "Failed to connect to host" error, i.e. executor lost. Even when the JVM does not crash, this can still cause long GC pauses.

The fix

Once the cause is known, the problem is straightforward to fix. Attack it from two angles: the volume of shuffled data, and the number of partitions used to process that data.

  1. Reduce the amount of shuffled data

    Consider whether a map-side join or broadcast join can avoid the shuffle altogether.

    Filter out data that is not needed before the shuffle: for example, if the raw records have 20 fields, select only the fields the computation actually uses, which trims the shuffled volume accordingly. A sketch of both ideas follows this item.
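A minimal Java sketch, with made-up paths and column names (the real job reads CSV, per "csv at FeatherAnalyseOnline.java:65" in the log): broadcast() marks the small side of the join so it is shipped to every executor instead of shuffled, and select() prunes columns early.

import static org.apache.spark.sql.functions.broadcast;
import static org.apache.spark.sql.functions.col;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ReduceShuffleData {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("ReduceShuffleData").getOrCreate();

        // Hypothetical inputs: a large detail table and a small dimension table, both keyed by phone.
        Dataset<Row> detail = spark.read().option("header", "true").csv("/data/detail")
                .select(col("phone"), col("f1"), col("f2"));   // prune to the fields actually used
        Dataset<Row> dim = spark.read().option("header", "true").csv("/data/dim");

        // broadcast() ships the small table to every executor, turning the join into a
        // map-side (broadcast) join with no shuffle of the large table.
        Dataset<Row> joined = detail.join(broadcast(dim), "phone");

        joined.write().csv("/data/out");
        spark.stop();
    }
}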

  2. Spark SQL / DataFrame operations such as join and group by

    The shuffle partition count here is controlled by spark.sql.shuffle.partitions, which defaults to 200; raise it according to the shuffle volume and the complexity of the computation, as in the sketch below.
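For example (the values 800 and 1000 are purely illustrative, not recommendations):

import org.apache.spark.sql.SparkSession;

public class ShufflePartitions {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("ShufflePartitions")
                .config("spark.sql.shuffle.partitions", "800")   // default is 200
                .getOrCreate();

        // The setting can also be adjusted between jobs at runtime:
        spark.conf().set("spark.sql.shuffle.partitions", "1000");

        spark.stop();
    }
}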

  3. RDD operations such as join, groupBy, and reduceByKey

    The number of partitions used on the shuffle-read/reduce side is controlled by spark.default.parallelism, which defaults to the total number of cores the job runs on (8 in Mesos fine-grained mode; the local machine's core count in local mode). The official recommendation is 2-3x the number of cores, as sketched below.
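A sketch assuming the 120-core setup from section 2 (2-3x of 120 gives roughly 240-360; 300 is an illustrative choice):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class DefaultParallelism {
    public static void main(String[] args) {
        // ~2.5x the 120 cores used in this job (illustrative value).
        SparkConf conf = new SparkConf()
                .setAppName("DefaultParallelism")
                .set("spark.default.parallelism", "300");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
                new Tuple2<>("a", 1), new Tuple2<>("a", 2), new Tuple2<>("b", 3)));

        // With no explicit partition count, reduceByKey falls back to spark.default.parallelism;
        // the count can also be passed per operation:
        System.out.println(pairs.reduceByKey(Integer::sum, 300).collect());

        sc.stop();
    }
}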

  4. Increase executor memory

    Raise spark.executor.memory as appropriate (the same setting as the --executor-memory flag used in section 2); see below.
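For instance, set it in the SparkConf before the context starts (48g is only an illustrative bump from the 40g used above):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ExecutorMemory {
    public static void main(String[] args) {
        // spark.executor.memory must be set before the executors launch;
        // it is equivalent to --executor-memory on the command line.
        SparkConf conf = new SparkConf()
                .setAppName("ExecutorMemory")
                .set("spark.executor.memory", "48g");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... job code ...
        sc.stop();
    }
}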

  5. Check for data skew

    Have null keys been filtered out? Can abnormal keys (a single key carrying a disproportionate share of the data) be handled separately? Consider changing the partitioning rule, e.g. by salting the key as sketched below.
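A sketch of key salting, with made-up column names (counting records per phone purely as an example): append a random suffix so a hot key spreads over several partitions, aggregate on the salted key, then strip the suffix and aggregate again.

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.concat;
import static org.apache.spark.sql.functions.count;
import static org.apache.spark.sql.functions.lit;
import static org.apache.spark.sql.functions.rand;
import static org.apache.spark.sql.functions.substring_index;
import static org.apache.spark.sql.functions.sum;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SkewHandling {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("SkewHandling").getOrCreate();
        // Hypothetical input and column names.
        Dataset<Row> df = spark.read().option("header", "true").csv("/data/detail");

        // 1) Drop null/empty keys so they don't all land in one partition.
        Dataset<Row> cleaned = df.filter(col("phone").isNotNull().and(col("phone").notEqual("")));

        // 2) Salt the key across 10 buckets and pre-aggregate on the salted key...
        Dataset<Row> partial = cleaned
                .withColumn("salted", concat(col("phone"), lit("_"),
                        rand().multiply(10).cast("int").cast("string")))
                .groupBy("salted").agg(count(lit(1)).alias("cnt"));

        // ...then strip the salt and combine the partial counts.
        Dataset<Row> result = partial
                .withColumn("phone", substring_index(col("salted"), "_", 1))
                .groupBy("phone").agg(sum("cnt").alias("cnt"));

        result.show();
        spark.stop();
    }
}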

Other errors observed in the same job:

org.apache.spark.storage.BlockNotFoundException: Block broadcast_938_piece1 not found
        at org.apache.spark.storage.BlockManager.getBlockData(BlockManager.scala:287)
        at org.apache.spark.network.netty.NettyBlockRpcServer$$anonfun$2.apply(NettyBlockRpcServer.scala:60)
        at org.apache.spark.network.netty.NettyBlockRpcServer$$anonfun$2.apply(NettyBlockRpcServer.scala:60)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
        at org.apache.spark.network.netty.NettyBlockRpcServer.receive(NettyBlockRpcServer.scala:60)
        at org.apache.spark.network.server.TransportRequestHandler.processRpcRequest(TransportRequestHandler.java:159)
        at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:107)
        at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
        at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at java.lang.Thread.run(Thread.java:745)
18/03/14 13:57:30 INFO scheduler.TaskSetManager: Starting task 100.0 in stage 1947.0 (TID 1419450, 192.168.200.167, partition 100, PROCESS_LOCAL, 5371 bytes)
18/03/14 13:57:30 INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 1419450 on executor id: 8 hostname: 192.168.200.167.
18/03/14 13:57:30 INFO scheduler.TaskSetManager: Finished task 70.0 in stage 1955.0 (TID 1419178) in 7460 ms on 192.168.200.167 (413/1000)
18/03/14 13:57:30 INFO scheduler.TaskSetManager: Starting task 101.0 in stage 1947.0 (TID 1419451, 192.168.200.175, partition 101, PROCESS_LOCAL, 5371 bytes)
18/03/14 13:57:30 INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 1419451 on executor id: 1 hostname: 192.168.200.175.
18/03/14 13:57:30 INFO scheduler.TaskSetManager: Finished task 183.0 in stage 1951.0 (TID 1419137) in 7627 ms on 192.168.200.175 (698/1000)
18/03/14 13:57:30 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 477 to 192.168.200.166:60518
18/03/14 13:57:30 INFO scheduler.TaskSetManager: Starting task 102.0 in stage 1947.0 (TID 1419452, 192.168.200.166, partition 102, PROCESS_LOCAL, 5371 bytes)
18/03/14 13:57:30 INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 1419452 on executor id: 4 hostname: 192.168.200.166.
18/03/14 13:57:30 INFO scheduler.TaskSetManager: Starting task 103.0 in stage 1947.0 (TID 1419453, 192.168.200.171, partition 103, PROCESS_LOCAL, 5371 bytes)
18/03/14 13:57:30 INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 1419453 on executor id: 3 hostname: 192.168.200.171.
18/03/14 13:57:30 INFO scheduler.TaskSetManager: Finished task 59.0 in stage 1955.0 (TID 1419158) in 7693 ms on 192.168.200.171 (414/1000)
18/03/14 13:57:30 WARN scheduler.TaskSetManager: Lost task 1.0 in stage 1947.0 (TID 1419351, 192.168.200.166): java.io.IOException: org.apache.spark.SparkException: Failed to get broadcast_938_piece1 of broadcast_938
        at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1283)
        at org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:174)
        at org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:65)
        at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:65)
        at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:89)
        at org.apache.spark.broadcast.Broadcast.value(Broadcast.scala:70)
        at org.apache.spark.MapOutputTracker$$anonfun$deserializeMapStatuses$1.apply(MapOutputTracker.scala:661)
        at org.apache.spark.MapOutputTracker$$anonfun$deserializeMapStatuses$1.apply(MapOutputTracker.scala:661)
        at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
        at org.apache.spark.MapOutputTracker$.logInfo(MapOutputTracker.scala:598)
        at org.apache.spark.MapOutputTracker$.deserializeMapStatuses(MapOutputTracker.scala:660)
        at org.apache.spark.MapOutputTracker.getStatuses(MapOutputTracker.scala:203)
        at org.apache.spark.MapOutputTracker.getMapSizesByExecutorId(MapOutputTracker.scala:142)
        at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:49)
        at org.apache.spark.sql.execution.ShuffledRowRDD.compute(ShuffledRowRDD.scala:169)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
        at org.apache.spark.scheduler.Task.run(Task.scala:86)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Failed to get broadcast_938_piece1 of broadcast_938
        at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply$mcVI$sp(TorrentBroadcast.scala:146)
        at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply(TorrentBroadcast.scala:125)
        at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply(TorrentBroadcast.scala:125)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$readBlocks(TorrentBroadcast.scala:125)
        at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1.apply(TorrentBroadcast.scala:186)
        at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1276)
        ... 32 more



org.apache.spark.shuffle.FetchFailedException: Failed to send RPC 6572748480227485466 to /192.168.200.164:44766: java.nio.channels.ClosedChannelException
        at org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:357)
        at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:332)
        at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:54)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
        at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4.apply(HashAggregateExec.scala:96)
        at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4.apply(HashAggregateExec.scala:94)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:785)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:785)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
        at org.apache.spark.scheduler.Task.run(Task.scala:86)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Failed to send RPC 6572748480227485466 to /192.168.200.164:44766: java.nio.channels.ClosedChannelException
        at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:249)
        at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:233)
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
        at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
        at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
        at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
        at io.netty.channel.ChannelOutboundBuffer.safeFail(ChannelOutboundBuffer.java:678)
        at io.netty.channel.ChannelOutboundBuffer.remove0(ChannelOutboundBuffer.java:298)
        at io.netty.channel.ChannelOutboundBuffer.failFlushed(ChannelOutboundBuffer.java:621)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:589)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.close(DefaultChannelPipeline.java:1107)
        at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:543)
        at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:528)
        at io.netty.channel.ChannelOutboundHandlerAdapter.close(ChannelOutboundHandlerAdapter.java:71)
        at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:543)
        at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:528)
        at io.netty.channel.ChannelDuplexHandler.close(ChannelDuplexHandler.java:73)
        at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:543)
        at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:528)
        at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:394)
        at org.apache.spark.network.server.TransportChannelHandler.userEventTriggered(TransportChannelHandler.java:147)
        at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:279)
        at io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:265)
        at io.netty.handler.timeout.IdleStateHandler.channelIdle(IdleStateHandler.java:340)
        at io.netty.handler.timeout.IdleStateHandler$AllIdleTimeoutTask.run(IdleStateHandler.java:455)
        at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
        at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        ... 1 more
Caused by: java.nio.channels.ClosedChannelException

)
