[Spark] Summary of Common Spark Methods 4: Spark Streaming (Python Version)

StreamingContext

from pyspark.sql import SparkSession
from pyspark.streaming import StreamingContext

spark = SparkSession.builder.appName('test').master('local[*]').getOrCreate()

# Create a StreamingContext with a 10-second batch interval
ss = StreamingContext(spark.sparkContext, 10)
lines = ss.socketTextStream('10.255.77.183', 10086)
result = lines.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1)).reduceByKey(lambda x1, x2: x1 + x2)

# Set up the output
result.pprint()

# Start the computation and begin receiving data
ss.start()
# Wait for the computation to stop (manually or due to any error)
ss.awaitTermination()
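To drive this example, something must already be listening on the socket when the job starts. The Spark documentation uses netcat for this; assuming the host and port above, run on 10.255.77.183:

nc -lk 10086

Each line typed into the nc session arrives as one record in the lines DStream, and pprint() prints the word counts once per 10-second batch.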

Troubleshooting

If the nc command feeding the socket is terminated, Spark raises an error complaining that it cannot connect to port 9999:

15/01/11 00:09:38 WARN receiver.ReceiverSupervisorImpl: Restarting receiver with delay 2000 ms: Error connecting to localhost:9999
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at java.net.Socket.connect(Socket.java:528)
    at java.net.Socket.<init>(Socket.java:425)
    at java.net.Socket.<init>(Socket.java:208)
    at org.apache.spark.streaming.dstream.SocketReceiver.receive(SocketInputDStream.scala:71)
    at org.apache.spark.streaming.dstream.SocketReceiver$$anon$2.run(SocketInputDStream.scala:57)

The following code receives no messages:

val sparkConf = new SparkConf().setAppName("SparkStreamingExample").setMaster("local")

whereas the following code does receive messages:

val sparkConf = new SparkConf().setAppName("SparkStreamingExample").setMaster("local[2]")

The explanation comes from http://spark.apache.org/docs/latest/streaming-programming-guide.html:
When running a Spark Streaming program locally, do not use “local” or “local[1]” as the master URL. Either of these means that only one thread will be used for running tasks locally. If you are using a input DStream based on a receiver (e.g. sockets, Kafka, Flume, etc.), then the single thread will be used to run the receiver, leaving no thread for processing the received data. Hence, when running locally, always use “local[n]” as the master URL where n > number of receivers to run (see Spark Properties for information on how to set the master).
Extending the logic to running on a cluster, the number of cores allocated to the Spark Streaming application must be more than the number of receivers. Otherwise the system will receive data, but not be able to process them.
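The same constraint applies to PySpark. Below is a minimal sketch of the fix in this article's Python setup; the app name mirrors the Scala snippet above:

from pyspark import SparkConf, SparkContext
from pyspark.streaming import StreamingContext

# 'local[2]' reserves one thread for the socket receiver and leaves one for
# batch processing; plain 'local' would starve the processing side, exactly
# as the quoted documentation describes.
conf = SparkConf().setAppName('SparkStreamingExample').setMaster('local[2]')
sc = SparkContext(conf=conf)
ssc = StreamingContext(sc, 10)  # 10-second batch interval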

Spark Streaming Sources

Spark Streaming provides two types of built-in streaming sources:

  1. Basic sources: sources directly available in the StreamingContext API, for example file systems and socket connections.
  2. Advanced sources: sources such as Kafka and Flume are available through extra utility classes, which requires linking against the additional dependencies listed in the table below (see the sketch after the table for usage).
Source    Artifact
Kafka     spark-streaming-kafka_2.10
Flume     spark-streaming-flume_2.10
Kinesis   spark-streaming-kinesis-asl_2.10
Twitter   spark-streaming-twitter_2.10
ZeroMQ    spark-streaming-zeromq_2.10
MQTT      spark-streaming-mqtt_2.10
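As a quick illustration of both kinds of source, here is a hedged sketch: textFileStream is a basic source built into StreamingContext, while the Kafka source needs the spark-streaming-kafka artifact from the table above on the driver classpath. The directory path, broker address, and topic name are placeholders.

from pyspark.streaming.kafka import KafkaUtils

# Basic source: watch a directory for newly created files (path is a placeholder)
file_lines = ss.textFileStream('/tmp/stream-input')

# Advanced source: direct Kafka stream; assumes a broker at localhost:9092
# and a topic named 'test' (both placeholders)
kafka_stream = KafkaUtils.createDirectStream(
    ss, ['test'], {'metadata.broker.list': 'localhost:9092'})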


For everything else, refer to the Scala Spark Streaming documentation; Python's support for Spark Streaming is less complete.
