Kafka + Spark + HDFS: a simple example

1. Setting up Spark in standalone mode is simple; the relevant configuration files are listed below.
vi spark-env.sh
#!/usr/bin/env bash
export SCALA_HOME=/opt/scala-2.10.3
export JAVA_HOME=/opt/jdk1.7.0_79
export SPARK_MASTER_IP=192.168.1.16
export SPARK_WORKER_INSTANCES=3
export SPARK_MASTER_PORT=7776
export SPARK_MASTER_WEBUI_PORT=7777
export SPARK_WORKER_PORT=7778
export SPARK_WORKER_MEMORY=5000m
export SPARK_JAVA_OPTS="-Dspark.cores.max=4"
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=192.168.1.16:2181,192.168.1.17:2181,192.168.1.18:2181 -Dspark.deploy.zookeeper.dir=/spark"
vi slaves
shaobao17
shaobao18
shaobao19
Copy the configured Spark directory to each machine with scp and start the cluster; http://shaobao16:7777/ should then show the Spark master web UI.
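Assuming Spark is unpacked at the same path on every node (the path used by spark-submit later), a minimal start sequence from the master is:
cd /opt/spark-1.3.1-bin-hadoop2.6
sbin/start-all.sh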
2. Kafka configuration
vi server.properties
broker.id=16
port=9092
log.dirs=/tmp/kafka-logs16
zookeeper.connect=shaobao16:2181,shaobao17:2181,shaobao18:2181
broker.id, port, and log.dirs differ from machine to machine; adjust them accordingly on each broker.
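For example, on shaobao17 the overrides would look like the following (a sketch inferred from the broker list passed to spark-submit below):
broker.id=17
port=9093
log.dirs=/tmp/kafka-logs17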
3. Create the topic mykafka (the creation procedure is documented in detail on the Kafka website).
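For reference, a creation command for this Kafka generation looks roughly like this (the replication and partition counts are illustrative, not taken from the original setup):
bin/kafka-topics.sh --create --zookeeper shaobao16:2181,shaobao17:2181,shaobao18:2181 --replication-factor 1 --partitions 2 --topic mykafka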
Then start a console producer against the topic:
bin/kafka-console-producer.sh --broker-list shaobao16:9092 --topic mykafka
The following warning appears:
[2015-05-07 18:30:57,761] WARN Property topic is not valid (kafka.utils.VerifiableProperties)
The producer now blocks waiting for input; type some test messages:
sfsfsf
gdgdg
dgdgd
sfsf
aadfsfsfsdfsf
sffsfsfsfsf
sfsdfsfsf
111111111111111111111111111111111111111111111111111
sfsfsfs
sfsfsfsdfsdfsfsfsf5555555555555555555555555555
hello world a is big city
he is a good man
she is a big man
he is a big city
oh my god
ik ok thank your
he is a big man
ok thankyour and your
he i a name
he is a big man he
he is a storm is he a storm ok tsfsf
he is a big man
he is a big man ok ,l kike he
sdfsfsdf  id   id s fs
he is a big man ok he is a bi city
he is bifs sdfsf id he is
he si big a the is sfs
he is a big man hei is a big city
he is abig man ,o k 123 234 123 234
aaaaaaaaa aaaaa bbbbbbb
he is a big man ok , 11 22 1 21 1 2
he is a sfs sfsfsf sdf sfsd sfsfdsd sf sd fsd fs f fsdfsdf sd fs ff fsdf ds f  fsfsdf sf 1 1  1 1 1 1 1 1 1 1 1 1  1 1 1 1 1 1 1 1 1 1 1 1 1   1 1 1  1 1 1 1 1 
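As a sanity check (not part of the original walkthrough), a console consumer in another terminal can confirm the messages are reaching the topic:
bin/kafka-console-consumer.sh --zookeeper shaobao16:2181 --topic mykafka --from-beginning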
Next, write the jar that processes these messages:
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements.  See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License.  You may obtain a copy of the License at
*
*    http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.examples.streaming;

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.regex.Pattern;

import kafka.serializer.StringDecoder;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

import scala.Tuple2;

/**
* Consumes messages from one or more topics in Kafka and does wordcount.
* Usage: DirectKafkaWordCount <brokers> <topics>
*   <brokers> is a list of one or more Kafka brokers
*   <topics> is a list of one or more kafka topics to consume from
*
* Example:
*    $ bin/run-example streaming.JavaDirectKafkaWordCount broker1-host:port,broker2-host:port topic1,topic2
*/

public final class JavaDirectKafkaWordCount {
  private static final Pattern SPACE = Pattern.compile(" ");

  public static void main(String[] args) {
  System.out.println("---------------aaaaaaaa-------------");
    if (args.length < 2) {
      System.err.println("Usage: DirectKafkaWordCount <brokers> <topics>\n" +
          "  <brokers> is a list of one or more Kafka brokers\n" +
          "  <topics> is a list of one or more kafka topics to consume from\n\n");
      System.exit(1);
    }

    //StreamingExamples.setStreamingLogLevels();

    String brokers = args[0];
    String topics = args[1];

    // Create context with 2 second batch interval
    SparkConf sparkConf = new SparkConf().setAppName("JavaDirectKafkaWordCount");
    JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(2));

    HashSet<String> topicsSet = new HashSet<String>(Arrays.asList(topics.split(",")));
    HashMap<String, String> kafkaParams = new HashMap<String, String>();
    kafkaParams.put("metadata.broker.list", brokers);
    System.out.println("----------------bbbbbbbbbbb------------------");
    // Create direct kafka stream with brokers and topics
    JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(
        jssc,
        String.class,
        String.class,
        StringDecoder.class,
        StringDecoder.class,
        kafkaParams,
        topicsSet
    );
    System.out.println("-----------cccccccccccc-------------");
    // Get the lines, split them into words, count the words and print
    JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
      @Override
      public String call(Tuple2<String, String> tuple2) {
        return tuple2._2();
      }
    });
    System.out.println("-----------dddddddddd-------------");
    JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
      @Override
      public Iterable<String> call(String x) {
    String arr[] =  SPACE.split(x);
    return Arrays.asList(arr);
       // return Lists.newArrayList(SPACE.split(x));
      }
    });
    System.out.println("------------eeeeeeeeeeeeee-----------");
    JavaPairDStream<String, Integer> wordCounts = words.mapToPair(
      new PairFunction<String, String, Integer>() {
        @Override
        public Tuple2<String, Integer> call(String s) {
          return new Tuple2<String, Integer>(s, 1);
        }
      }).reduceByKey(
        new Function2<Integer, Integer, Integer>() {
        @Override
        public Integer call(Integer i1, Integer i2) {
          return i1 + i2;
        }
      });
    System.out.println("------------ffffffffffffff--------------");
    // Note: wordCounts.count() returns another DStream rather than a number,
    // so string-concatenating it into a println here would only print the
    // DStream's toString() once at setup time; wordCounts.print() below shows
    // the actual per-batch counts.
    wordCounts.print();
   // wordCounts.saveAsHadoopFiles(prefix, suffix, keyClass, valueClass, outputFormatClass);
    wordCounts.foreachRDD(new Function<JavaPairRDD<String, Integer>, Void>() {
      @Override
      public Void call(JavaPairRDD<String, Integer> rdd) throws Exception {
        // Persist each batch's counts to HDFS. The output path is fixed, so on
        // many Hadoop setups a later batch will fail because the directory
        // already exists; a per-batch suffix (e.g. a timestamp) is safer.
        rdd.saveAsObjectFile("hdfs://192.168.1.16:9000/sort/price");
        return null;
      }
    });
   // wordCounts.saveAsHadoopFiles("one", "one");
    System.out.println("------------hhhhhhhhhhh--------------");
    // Start the computation
    jssc.start();
    System.out.println("------------jjjjjjjjjjjjjjjjjjjj-------------");
    jssc.awaitTermination();
    System.out.println("------------iiiiiiiiiiiiiii-------------");
  }
}
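The class above needs the Spark and Kafka streaming classes on its compile classpath. One minimal way to package it into kafkawc.jar (a sketch; it leans on the assembly jars shipped with the Spark distribution, and assumes the JDK from the config above is on the PATH):
mkdir -p classes
javac -cp /opt/spark-1.3.1-bin-hadoop2.6/lib/spark-assembly-1.3.1-hadoop2.6.0.jar:/opt/spark-1.3.1-bin-hadoop2.6/lib/spark-examples-1.3.1-hadoop2.6.0.jar -d classes JavaDirectKafkaWordCount.java
jar cf kafkawc.jar -C classes .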
Launch spark-submit. Note that --jars takes a single comma-separated list; repeating the flag simply overrides the earlier value, so the two jars are combined here:
./bin/spark-submit --master spark://192.168.1.16:7776 --class org.apache.spark.examples.streaming.JavaDirectKafkaWordCount --name kafka --executor-memory 400M --driver-memory 512M --jars /opt/spark-1.3.1-bin-hadoop2.6/lib/spark-assembly-1.3.1-hadoop2.6.0.jar,/opt/spark-1.3.1-bin-hadoop2.6/lib/spark-examples-1.3.1-hadoop2.6.0.jar /opt/resource/kafkawc.jar shaobao16:9092,shaobao17:9093,shaobao18:9094 mykafka
The Spark Streaming application now runs continuously, submitting a new job every two seconds:
-------------------------------------------
Time: 1431146040000 ms
-------------------------------------------

15/05/08 21:34:00 INFO JobScheduler: Finished job streaming job 1431146040000 ms.0 from job set of time 1431146040000 ms
15/05/08 21:34:00 INFO JobScheduler: Starting job streaming job 1431146040000 ms.1 from job set of time 1431146040000 ms
15/05/08 21:34:00 INFO SequenceFileRDDFunctions: Saving as sequence file of type (NullWritable,BytesWritable)
15/05/08 21:34:00 INFO SparkContext: Starting job: foreachRDD at JavaDirectKafkaWordCount.java:121
15/05/08 21:34:00 INFO DAGScheduler: Got job 9116 (foreachRDD at JavaDirectKafkaWordCount.java:121) with 2 output partitions (allowLocal=false)
15/05/08 21:34:00 INFO DAGScheduler: Final stage: Stage 18233(foreachRDD at JavaDirectKafkaWordCount.java:121)
15/05/08 21:34:00 INFO DAGScheduler: Parents of final stage: List(Stage 18232)
15/05/08 21:34:00 INFO DAGScheduler: Missing parents: List()
15/05/08 21:34:00 INFO DAGScheduler: Submitting Stage 18233 (MapPartitionsRDD[21272] at foreachRDD at JavaDirectKafkaWordCount.java:121), which has no missing parents
15/05/08 21:34:00 INFO MemoryStore: ensureFreeSpace(127608) called with curMem=3297084, maxMem=278302556
15/05/08 21:34:00 INFO MemoryStore: Block broadcast_12155 stored as values in memory (estimated size 124.6 KB, free 262.1 MB)
15/05/08 21:34:00 INFO MemoryStore: ensureFreeSpace(76638) called with curMem=3424692, maxMem=278302556
15/05/08 21:34:00 INFO MemoryStore: Block broadcast_12155_piece0 stored as bytes in memory (estimated size 74.8 KB, free 262.1 MB)
15/05/08 21:34:00 INFO BlockManagerInfo: Added broadcast_12155_piece0 in memory on shaobao16:52385 (size: 74.8 KB, free: 264.1 MB)
15/05/08 21:34:00 INFO BlockManagerMaster: Updated info of block broadcast_12155_piece0
15/05/08 21:34:00 INFO SparkContext: Created broadcast 12155 from broadcast at DAGScheduler.scala:839
15/05/08 21:34:00 INFO DAGScheduler: Submitting 2 missing tasks from Stage 18233 (MapPartitionsRDD[21272] at foreachRDD at JavaDirectKafkaWordCount.java:121)
15/05/08 21:34:00 INFO TaskSchedulerImpl: Adding task set 18233.0 with 2 tasks
15/05/08 21:34:00 INFO TaskSetManager: Starting task 0.0 in stage 18233.0 (TID 15193, shaobao19, PROCESS_LOCAL, 1186 bytes)
15/05/08 21:34:00 INFO TaskSetManager: Starting task 1.0 in stage 18233.0 (TID 15194, shaobao17, PROCESS_LOCAL, 1186 bytes)
15/05/08 21:34:00 INFO BlockManagerInfo: Added broadcast_12155_piece0 in memory on shaobao19:38410 (size: 74.8 KB, free: 207.0 MB)
15/05/08 21:34:00 INFO BlockManagerInfo: Added broadcast_12155_piece0 in memory on shaobao17:41922 (size: 74.8 KB, free: 206.7 MB)
15/05/08 21:34:00 INFO MapOutputTrackerMasterActor: Asked to send map output locations for shuffle 3038 to sparkExecutor@shaobao19:45578
15/05/08 21:34:00 INFO TaskSetManager: Finished task 1.0 in stage 18233.0 (TID 15194) in 82 ms on shaobao17 (1/2)
15/05/08 21:34:00 INFO TaskSetManager: Finished task 0.0 in stage 18233.0 (TID 15193) in 87 ms on shaobao19 (2/2)
15/05/08 21:34:00 INFO TaskSchedulerImpl: Removed TaskSet 18233.0, whose tasks have all completed, from pool
15/05/08 21:34:00 INFO DAGScheduler: Stage 18233 (foreachRDD at JavaDirectKafkaWordCount.java:121) finished in 0.088 s
15/05/08 21:34:00 INFO DAGScheduler: Job 9116 finished: foreachRDD at JavaDirectKafkaWordCount.java:121, took 0.109517 s
15/05/08 21:34:00 INFO JobScheduler: Finished job streaming job 1431146040000 ms.1 from job set of time 1431146040000 ms
15/05/08 21:34:00 INFO JobScheduler: Total delay: 0.235 s for time 1431146040000 ms (execution: 0.225 s)
15/05/08 21:34:00 INFO ShuffledRDD: Removing RDD 21263 from persistence list
15/05/08 21:34:00 INFO BlockManager: Removing RDD 21263
15/05/08 21:34:00 INFO MapPartitionsRDD: Removing RDD 21262 from persistence list
15/05/08 21:34:00 INFO BlockManager: Removing RDD 21262
15/05/08 21:34:00 INFO MapPartitionsRDD: Removing RDD 21261 from persistence list
15/05/08 21:34:00 INFO BlockManager: Removing RDD 21261
15/05/08 21:34:00 INFO MapPartitionsRDD: Removing RDD 21260 from persistence list
15/05/08 21:34:00 INFO BlockManager: Removing RDD 21260
15/05/08 21:34:00 INFO KafkaRDD: Removing RDD 21259 from persistence list
15/05/08 21:34:00 INFO BlockManager: Removing RDD 21259
15/05/08 21:34:00 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer()
15/05/08 21:34:02 INFO JobScheduler: Added jobs for time 1431146042000 ms
15/05/08 21:34:02 INFO SparkContext: Starting job: print at JavaDirectKafkaWordCount.java:120
15/05/08 21:34:02 INFO JobScheduler: Starting job streaming job 1431146042000 ms.0 from job set of time 1431146042000 ms
15/05/08 21:34:02 INFO DAGScheduler: Registering RDD 21276 (mapToPair at JavaDirectKafkaWordCount.java:105)
15/05/08 21:34:02 INFO DAGScheduler: Got job 9117 (print at JavaDirectKafkaWordCount.java:120) with 1 output partitions (allowLocal=true)
15/05/08 21:34:02 INFO DAGScheduler: Final stage: Stage 18235(print at JavaDirectKafkaWordCount.java:120)
15/05/08 21:34:02 INFO DAGScheduler: Parents of final stage: List(Stage 18234)
15/05/08 21:34:02 INFO DAGScheduler: Missing parents: List(Stage 18234)
15/05/08 21:34:02 INFO DAGScheduler: Submitting Stage 18234 (MapPartitionsRDD[21276] at mapToPair at JavaDirectKafkaWordCount.java:105), which has no missing parents
15/05/08 21:34:02 INFO MemoryStore: ensureFreeSpace(4640) called with curMem=3501330, maxMem=278302556
15/05/08 21:34:02 INFO MemoryStore: Block broadcast_12156 stored as values in memory (estimated size 4.5 KB, free 262.1 MB)
15/05/08 21:34:02 INFO MemoryStore: ensureFreeSpace(3235) called with curMem=3505970, maxMem=278302556
15/05/08 21:34:02 INFO MemoryStore: Block broadcast_12156_piece0 stored as bytes in memory (estimated size 3.2 KB, free 262.1 MB)
15/05/08 21:34:02 INFO BlockManagerInfo: Added broadcast_12156_piece0 in memory on shaobao16:52385 (size: 3.2 KB, free: 264.1 MB)
15/05/08 21:34:02 INFO BlockManagerMaster: Updated info of block broadcast_12156_piece0
15/05/08 21:34:02 INFO SparkContext: Created broadcast 12156 from broadcast at DAGScheduler.scala:839
15/05/08 21:34:02 INFO DAGScheduler: Submitting 1 missing tasks from Stage 18234 (MapPartitionsRDD[21276] at mapToPair at JavaDirectKafkaWordCount.java:105)
15/05/08 21:34:02 INFO TaskSchedulerImpl: Adding task set 18234.0 with 1 tasks
15/05/08 21:34:02 INFO TaskSetManager: Starting task 0.0 in stage 18234.0 (TID 15195, shaobao17, ANY, 1291 bytes)
15/05/08 21:34:02 INFO BlockManagerInfo: Added broadcast_12156_piece0 in memory on shaobao17:55250 (size: 3.2 KB, free: 207.0 MB)
15/05/08 21:34:02 INFO TaskSetManager: Finished task 0.0 in stage 18234.0 (TID 15195) in 15 ms on shaobao17 (1/1)
15/05/08 21:34:02 INFO TaskSchedulerImpl: Removed TaskSet 18234.0, whose tasks have all completed, from pool
15/05/08 21:34:02 INFO DAGScheduler: Stage 18234 (mapToPair at JavaDirectKafkaWordCount.java:105) finished in 0.016 s
15/05/08 21:34:02 INFO DAGScheduler: looking for newly runnable stages
15/05/08 21:34:02 INFO DAGScheduler: running: Set()
15/05/08 21:34:02 INFO DAGScheduler: waiting: Set(Stage 18235)
15/05/08 21:34:02 INFO DAGScheduler: failed: Set()
15/05/08 21:34:02 INFO DAGScheduler: Missing parents for Stage 18235: List()
15/05/08 21:34:02 INFO DAGScheduler: Submitting Stage 18235 (ShuffledRDD[21277] at reduceByKey at JavaDirectKafkaWordCount.java:111), which is now runnable
15/05/08 21:34:02 INFO MemoryStore: ensureFreeSpace(2296) called with curMem=3509205, maxMem=278302556
15/05/08 21:34:02 INFO MemoryStore: Block broadcast_12157 stored as values in memory (estimated size 2.2 KB, free 262.1 MB)
15/05/08 21:34:02 INFO MemoryStore: ensureFreeSpace(1702) called with curMem=3511501, maxMem=278302556
15/05/08 21:34:02 INFO MemoryStore: Block broadcast_12157_piece0 stored as bytes in memory (estimated size 1702.0 B, free 262.1 MB)
15/05/08 21:34:02 INFO BlockManagerInfo: Added broadcast_12157_piece0 in memory on shaobao16:52385 (size: 1702.0 B, free: 264.1 MB)
15/05/08 21:34:02 INFO BlockManagerMaster: Updated info of block broadcast_12157_piece0
15/05/08 21:34:02 INFO SparkContext: Created broadcast 12157 from broadcast at DAGScheduler.scala:839
15/05/08 21:34:02 INFO DAGScheduler: Submitting 1 missing tasks from Stage 18235 (ShuffledRDD[21277] at reduceByKey at JavaDirectKafkaWordCount.java:111)
15/05/08 21:34:02 INFO TaskSchedulerImpl: Adding task set 18235.0 with 1 tasks
15/05/08 21:34:02 INFO TaskSetManager: Starting task 0.0 in stage 18235.0 (TID 15196, shaobao17, PROCESS_LOCAL, 1186 bytes)
15/05/08 21:34:02 INFO BlockManagerInfo: Added broadcast_12157_piece0 in memory on shaobao17:55250 (size: 1702.0 B, free: 207.0 MB)
15/05/08 21:34:02 INFO MapOutputTrackerMasterActor: Asked to send map output locations for shuffle 3039 to sparkExecutor@shaobao17:35431
15/05/08 21:34:02 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 3039 is 138 bytes
15/05/08 21:34:02 INFO TaskSetManager: Finished task 0.0 in stage 18235.0 (TID 15196) in 17 ms on shaobao17 (1/1)
15/05/08 21:34:02 INFO TaskSchedulerImpl: Removed TaskSet 18235.0, whose tasks have all completed, from pool
15/05/08 21:34:02 INFO DAGScheduler: Stage 18235 (print at JavaDirectKafkaWordCount.java:120) finished in 0.017 s
15/05/08 21:34:02 INFO DAGScheduler: Job 9117 finished: print at JavaDirectKafkaWordCount.java:120, took 0.043544 s
15/05/08 21:34:02 INFO SparkContext: Starting job: print at JavaDirectKafkaWordCount.java:120
15/05/08 21:34:02 INFO DAGScheduler: Got job 9118 (print at JavaDirectKafkaWordCount.java:120) with 1 output partitions (allowLocal=true)
15/05/08 21:34:02 INFO DAGScheduler: Final stage: Stage 18237(print at JavaDirectKafkaWordCount.java:120)
15/05/08 21:34:02 INFO DAGScheduler: Parents of final stage: List(Stage 18236)
15/05/08 21:34:02 INFO DAGScheduler: Missing parents: List()
15/05/08 21:34:02 INFO DAGScheduler: Submitting Stage 18237 (ShuffledRDD[21277] at reduceByKey at JavaDirectKafkaWordCount.java:111), which has no missing parents
15/05/08 21:34:02 INFO MemoryStore: ensureFreeSpace(2296) called with curMem=3513203, maxMem=278302556
15/05/08 21:34:02 INFO MemoryStore: Block broadcast_12158 stored as values in memory (estimated size 2.2 KB, free 262.1 MB)
15/05/08 21:34:02 INFO MemoryStore: ensureFreeSpace(1702) called with curMem=3515499, maxMem=278302556
15/05/08 21:34:02 INFO MemoryStore: Block broadcast_12158_piece0 stored as bytes in memory (estimated size 1702.0 B, free 262.1 MB)
15/05/08 21:34:02 INFO BlockManagerInfo: Added broadcast_12158_piece0 in memory on shaobao16:52385 (size: 1702.0 B, free: 264.1 MB)
15/05/08 21:34:02 INFO BlockManagerMaster: Updated info of block broadcast_12158_piece0
15/05/08 21:34:02 INFO SparkContext: Created broadcast 12158 from broadcast at DAGScheduler.scala:839
15/05/08 21:34:02 INFO DAGScheduler: Submitting 1 missing tasks from Stage 18237 (ShuffledRDD[21277] at reduceByKey at JavaDirectKafkaWordCount.java:111)
15/05/08 21:34:02 INFO TaskSchedulerImpl: Adding task set 18237.0 with 1 tasks
15/05/08 21:34:02 INFO TaskSetManager: Starting task 0.0 in stage 18237.0 (TID 15197, shaobao17, PROCESS_LOCAL, 1186 bytes)
15/05/08 21:34:02 INFO BlockManagerInfo: Added broadcast_12158_piece0 in memory on shaobao17:41922 (size: 1702.0 B, free: 206.7 MB)
15/05/08 21:34:02 INFO MapOutputTrackerMasterActor: Asked to send map output locations for shuffle 3039 to sparkExecutor@shaobao17:41112
15/05/08 21:34:02 INFO TaskSetManager: Finished task 0.0 in stage 18237.0 (TID 15197) in 16 ms on shaobao17 (1/1)
15/05/08 21:34:02 INFO TaskSchedulerImpl: Removed TaskSet 18237.0, whose tasks have all completed, from pool
15/05/08 21:34:02 INFO DAGScheduler: Stage 18237 (print at JavaDirectKafkaWordCount.java:120) finished in 0.017 s
15/05/08 21:34:02 INFO DAGScheduler: Job 9118 finished: print at JavaDirectKafkaWordCount.java:120, took 0.021178 s
-------------------------------------------
Time: 1431146042000 ms
-------------------------------------------

15/05/08 21:34:02 INFO JobScheduler: Finished job streaming job 1431146042000 ms.0 from job set of time 1431146042000 ms
15/05/08 21:34:02 INFO JobScheduler: Starting job streaming job 1431146042000 ms.1 from job set of time 1431146042000 ms
15/05/08 21:34:02 INFO SequenceFileRDDFunctions: Saving as sequence file of type (NullWritable,BytesWritable)
15/05/08 21:34:02 INFO SparkContext: Starting job: foreachRDD at JavaDirectKafkaWordCount.java:121
15/05/08 21:34:02 INFO DAGScheduler: Got job 9119 (foreachRDD at JavaDirectKafkaWordCount.java:121) with 2 output partitions (allowLocal=false)
15/05/08 21:34:02 INFO DAGScheduler: Final stage: Stage 18239(foreachRDD at JavaDirectKafkaWordCount.java:121)
15/05/08 21:34:02 INFO DAGScheduler: Parents of final stage: List(Stage 18238)
15/05/08 21:34:02 INFO DAGScheduler: Missing parents: List()
15/05/08 21:34:02 INFO DAGScheduler: Submitting Stage 18239 (MapPartitionsRDD[21279] at foreachRDD at JavaDirectKafkaWordCount.java:121), which has no missing parents
15/05/08 21:34:02 INFO MemoryStore: ensureFreeSpace(127608) called with curMem=3517201, maxMem=278302556
15/05/08 21:34:02 INFO MemoryStore: Block broadcast_12159 stored as values in memory (estimated size 124.6 KB, free 261.9 MB)
15/05/08 21:34:02 INFO MemoryStore: ensureFreeSpace(76638) called with curMem=3644809, maxMem=278302556
15/05/08 21:34:02 INFO MemoryStore: Block broadcast_12159_piece0 stored as bytes in memory (estimated size 74.8 KB, free 261.9 MB)
15/05/08 21:34:02 INFO BlockManagerInfo: Added broadcast_12159_piece0 in memory on shaobao16:52385 (size: 74.8 KB, free: 264.0 MB)
15/05/08 21:34:02 INFO BlockManagerMaster: Updated info of block broadcast_12159_piece0
15/05/08 21:34:02 INFO SparkContext: Created broadcast 12159 from broadcast at DAGScheduler.scala:839
15/05/08 21:34:02 INFO DAGScheduler: Submitting 2 missing tasks from Stage 18239 (MapPartitionsRDD[21279] at foreachRDD at JavaDirectKafkaWordCount.java:121)
15/05/08 21:34:02 INFO TaskSchedulerImpl: Adding task set 18239.0 with 2 tasks
15/05/08 21:34:02 INFO TaskSetManager: Starting task 0.0 in stage 18239.0 (TID 15198, shaobao19, PROCESS_LOCAL, 1186 bytes)
15/05/08 21:34:02 INFO TaskSetManager: Starting task 1.0 in stage 18239.0 (TID 15199, shaobao19, PROCESS_LOCAL, 1186 bytes)
15/05/08 21:34:02 INFO BlockManagerInfo: Added broadcast_12159_piece0 in memory on shaobao19:48399 (size: 74.8 KB, free: 206.7 MB)
15/05/08 21:34:02 INFO BlockManagerInfo: Added broadcast_12159_piece0 in memory on shaobao19:38410 (size: 74.8 KB, free: 206.9 MB)
15/05/08 21:34:02 INFO MapOutputTrackerMasterActor: Asked to send map output locations for shuffle 3039 to sparkExecutor@shaobao19:52686
15/05/08 21:34:02 INFO MapOutputTrackerMasterActor: Asked to send map output locations
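After a few batches have completed, the saved counts can be inspected in HDFS (assuming the Hadoop client is available; the directory is the one hard-coded in the job):
bin/hdfs dfs -ls hdfs://192.168.1.16:9000/sort/price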


Reposted from hadasione.iteye.com/blog/2209701