A Reliable Storm WordCount Implementation


A reliable wordcount

1. Implementing Storm's reliability API

Using the reliability API roughly involves the following steps:

  • Implement the spout's ack and fail methods
  • When the spout emits a tuple, bind it to a unique messageId
  • When a bolt emits a new tuple, anchor it to the current input tuple
  • Call collector.fail when a bolt fails to process a tuple, and collector.ack when it succeeds (a condensed sketch follows this list)
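
Before the full example, the condensed sketch below shows where each of the four steps lives. The class names here are illustrative, not the classes used in the rest of this post, and the code assumes the org.apache.storm 1.x API:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class ReliabilitySketch {

    // Steps 1 and 2 live in the spout: cache each tuple under a unique
    // messageId, drop it on ack, replay it on fail.
    public static class SketchSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private ConcurrentHashMap<UUID, Values> pending;

        @Override
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
            this.pending = new ConcurrentHashMap<>();
        }

        @Override
        public void nextTuple() {
            Values values = new Values("a sentence");
            UUID msgId = UUID.randomUUID();
            pending.put(msgId, values);
            collector.emit(values, msgId); // step 2: bind the tuple to a messageId
        }

        @Override
        public void ack(Object msgId) {
            pending.remove(msgId); // step 1: fully processed, forget it
        }

        @Override
        public void fail(Object msgId) {
            collector.emit(pending.get(msgId), msgId); // step 1: replay it
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("sentence"));
        }
    }

    // Steps 3 and 4 live in the bolts: anchor every new tuple to the input,
    // then ack on success or fail on error.
    public static class SketchBolt extends BaseRichBolt {
        private OutputCollector collector;

        @Override
        public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void execute(Tuple input) {
            try {
                collector.emit(input, new Values(input.getString(0))); // step 3: anchor
                collector.ack(input);                                  // step 4: success
            } catch (Exception e) {
                collector.fail(input);                                 // step 4: failure
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("sentence"));
        }
    }
}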

2. Implementing a reliable wordcount

2.1 A custom sentence emitter

A sentence emitter that records a running count of every word it has emitted so far, similar to the plain wordcount implementation from before:

public class SentenceEmitter {

    private final AtomicLongMap<String> COUNTS = AtomicLongMap.create();

    private final String[] SENTENCES = {"The logic for a realtime application is packaged into a Storm topology",
            " A Storm topology is analogous to a MapReduce job ",
            "One key difference is that a MapReduce job eventually finishes ",
            "whereas a topology runs forever or until you kill it of course ",
            "A topology is a graph of spouts and bolts that are connected with stream groupings"};


    /**
     * Randomly emits a sentence and records its word counts; the recorded
     * totals are later compared with Storm's own counts to verify they match.
     * (The 1,000-emission cap that gives the downstream bolts time to finish
     * counting before the program stops is enforced in the spout, not here.)
     *
     * @return the emitted sentence
     */
    public String emit() {
        int randomIndex = (int) (Math.random() * SENTENCES.length);
        String sentence = SENTENCES[randomIndex];
        for (String s : sentence.split(" ")) {
            COUNTS.incrementAndGet(s);
        }
        return sentence;
    }

    public void printCount() {
        System.out.println("--- Emitter COUNTS ---");
        List<String> keys = new ArrayList<String>();
        keys.addAll(COUNTS.asMap().keySet());
        Collections.sort(keys);
        for (String key : keys) {
            System.out.println(key + " : " + this.CONUTS.get(key));
        }
        System.out.println("--------------");
    }

    public AtomicLongMap<String> getCount() {
        return COUNTS;
    }

    public static void main(String[] args) {
        SentenceEmitter sentenceEmitter = new SentenceEmitter();
        for (int i = 0; i < 20; i++) {
            System.out.println(sentenceEmitter.emit());
        }
        sentenceEmitter.printCount();
    }
}

It emits sentences at random and counts their words; the print method dumps the counts.

2.2 A reliable spout: SentenceSpout

Here the spout overrides BaseRichSpout's ack and fail methods. A ConcurrentHashMap<UUID, Values> named emitted caches every tuple currently in flight. nextTuple() stops emitting once 1,000 sentences have been sent, so the downstream bolts can finish counting before the topology stops; for each emission it generates a UUID, stores the (UUID, tuple) pair in emitted, and passes the UUID along as the messageId. A call to ack means every downstream tuple derived from that tuple was processed successfully, so the tuple is removed from emitted. fail is the counterpart: the tuple failed somewhere downstream and must be replayed, so the failed UUID's tuple is fetched from the cache and re-emitted. When the topology is killed, close() prints everything the spout has emitted.

public class SentenceSpout extends BaseRichSpout {

    private static final long serialVersionUID = -5335326175089829338L;
    private static final Logger LOGGER = Logger.getLogger(SentenceSpout.class);

    private AtomicLong atomicLong = new AtomicLong(0);
    private SpoutOutputCollector collector;
    private SentenceEmitter sentenceEmitter;
    private ConcurrentHashMap<UUID, Values> emitted;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
        this.sentenceEmitter = new SentenceEmitter();
        this.emitted = new ConcurrentHashMap<UUID, Values>();
    }

    @Override
    public void nextTuple() {
        // Changed slightly: the sleep is no longer inside sentenceEmitter,
        // since sleeping there would block this spout thread from replaying failed tuples.
        if (atomicLong.incrementAndGet() >= 1000) {
            return;
        }
        String sentence = sentenceEmitter.emit();
        Values values = new Values(sentence);
        UUID msgId = UUID.randomUUID();

        // Assign each tuple an id when the spout emits it; Storm uses this id to identify the tuple later on
        collector.emit(values, msgId);
        // Record every emitted sentence so it can be re-emitted on failure
        this.emitted.put(msgId, values);
    }

    /**
     * To guarantee reliability, both ack and fail must be implemented.
     * A call to ack means everything downstream processed this tuple
     * successfully, so remove the acked tuple from emitted.
     *
     * @param msgId
     */
    @Override
    public void ack(Object msgId) {
        this.emitted.remove(msgId);
    }

    /**
     * To guarantee reliability, both ack and fail must be implemented.
     * A call to fail means some downstream step failed (a program error, or
     * perhaps a network problem); fetch the failed tuple from emitted and
     * re-emit it.
     *
     * @param msgId
     */
    @Override
    public void fail(Object msgId) {
        Values values = this.emitted.get(msgId);
        this.collector.emit(values, msgId);
        LOGGER.info(String.format("失败重发:messageId:%s,values:%s", msgId, values));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("sentence"));
    }

    @Override
    public void close() {
        super.close();
        sentenceEmitter.printCount();
    }
}

2.3 The word-splitting bolt

In this bolt, execute contains a small twist that simulates failures by throwing an exception on two particular invocations. When emitting, collector.emit(input, new Values(word)) anchors each new tuple to the current input tuple. On success the bolt acks the input, telling the spout the tuple can be dropped from its cache; on failure it calls fail, telling the spout to re-send the tuple.

public class WordSplitBolt extends BaseRichBolt {
    private static final long serialVersionUID = 2932049413480818649L;
    private static final Logger LOGGER = Logger.getLogger(WordSplitBolt.class);
    private OutputCollector collector;

    private AtomicInteger atomicInteger = new AtomicInteger(1);

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        try {

            atomicInteger.getAndIncrement();

            String sentence = input.getStringByField("sentence");

            if (atomicInteger.get() == 20 || atomicInteger.get() == 200) {
               throw new RuntimeException(String.format("Simulated failure, sourceStreamId:%s, messageId:%s, sentence:%s", input.getSourceStreamId(), input.getMessageId(), sentence));
            }

            String[] words = sentence.split(" ");
            for (String word : words) {
                // Anchor the new tuple to the input tuple when emitting
                collector.emit(input, new Values(word));
            }
            // Ack the input tuple once it has been processed successfully
            this.collector.ack(input);
            LOGGER.info("--sentence--" + sentence);
        } catch (Exception e) {
            // Call fail when processing fails
            collector.fail(input);
            LOGGER.error(e.getMessage());
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
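
As an aside, Storm also ships a BaseBasicBolt base class that does this bookkeeping for you: tuples emitted through its BasicOutputCollector are automatically anchored to the input, the input is acked when execute returns normally, and throwing a FailedException fails it instead. Below is a minimal sketch of the same split logic on top of it (an alternative, not the class used in this post; again assuming the org.apache.storm 1.x packages):

import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.FailedException;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class WordSplitBasicBolt extends BaseBasicBolt {

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        for (String word : input.getStringByField("sentence").split(" ")) {
            collector.emit(new Values(word)); // anchored to the input implicitly
        }
        // No explicit ack: the surrounding executor acks the input when this
        // method returns normally; throw FailedException to fail it instead.
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}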

2.4 The word-counting bolt

Similar to the splitting bolt:

public class WordCountBolt extends BaseRichBolt {
    private static final long serialVersionUID = -7753338296650445257L;
    private static final Logger LOGGER = Logger.getLogger(WordCountBolt.class);
    private OutputCollector collector;
    private HashMap<String, Long> counts = null;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        this.counts = new HashMap<String, Long>();
    }

    @Override
    public void execute(Tuple input) {
        String word = input.getStringByField("word");
        Long count = this.counts.get(word);
        if (count == null) {
            count = 0L;
        }
        count++;
        counts.put(word, count);
        // Again, anchor the emitted tuple to the input when emitting
        collector.emit(input, new Values(word, count));
        // Ack the input tuple once it has been processed successfully
        collector.ack(input);
        LOGGER.info("--word--" + word);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word", "count"));
    }
}

2.5 Printing the results

Nothing special here: ack on success, and print Storm's word counts in cleanup() when the topology is killed.

public class ReportBolt extends BaseRichBolt {
    private static final Logger LOGGER = Logger.getLogger(ReportBolt.class);

    private static final long serialVersionUID = -3973016696731250995L;
    private HashMap<String, Long> counts = null;
    private OutputCollector collector;


    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        counts = new HashMap<>();
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        String word = input.getStringByField("word");
        Long count = input.getLongByField("count");
        this.counts.put(word, count);
        // Ack the input tuple once it has been processed successfully
        collector.ack(input);
        LOGGER.info("--globalreport--" + word);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {

    }

    @Override
    public void cleanup() {
        System.out.println("--- FINAL COUNTS ---");
        List<String> keys = new ArrayList<String>();
        keys.addAll(this.counts.keySet());
        Collections.sort(keys);
        for (String key : keys) {
            System.out.println(key + " : " + this.counts.get(key));
        }
        System.out.println("--------------");
    }
}

3. Running it and analyzing the results

3.1 Running the topology

splitBolt: parallelism 3, shuffle grouping
countBolt: parallelism 4, fields grouping
reportBolt: global grouping
Note config.setNumAckers(1), which configures one acker. A Storm topology runs special "acker" tasks that track the DAG of tuples descending from each spout tuple. When an acker sees that a DAG is complete, it notifies the task of the spout that created the root tuple, and that task acks the message. Storm defaults the acker count to one, but if you process a large volume of messages you may need to increase it. (A spout tuple whose DAG is not fully acked within the topology's message timeout, 30 seconds by default and adjustable via Config.setMessageTimeoutSecs, is failed and replayed.)

public class GuaranteedWordCountTopology {

    private static final String SENTENCE_SPOUT_ID = "sentence-spout";
    private static final String SPLIT_BOLT_ID = "split-bolt";
    private static final String COUNT_BOLT_ID = "count-bolt";
    private static final String REPORT_BOLT_ID = "report-bolt";
    private static final String TOPOLOGY_NAME = "word-count-topology";

    public static void main(String[] args) throws Exception {

        SentenceSpout spout = new SentenceSpout();
        WordSplitBolt splitBolt = new WordSplitBolt();
        WordCountBolt countBolt = new WordCountBolt();
        ReportBolt reportBolt = new ReportBolt();


        TopologyBuilder builder = new TopologyBuilder();

        builder.setSpout(SENTENCE_SPOUT_ID, spout);
        // Shuffle the generated sentences randomly across the split bolt instances
        builder.setBolt(SPLIT_BOLT_ID, splitBolt, 3).shuffleGrouping(SENTENCE_SPOUT_ID);

        // splitBolt splits each sentence on spaces into words and emits them to countBolt
        builder.setBolt(COUNT_BOLT_ID, countBolt, 4).fieldsGrouping(SPLIT_BOLT_ID, new Fields("word"));

        // WordCountBolt --> ReportBolt
        builder.setBolt(REPORT_BOLT_ID, reportBolt, 1).globalGrouping(COUNT_BOLT_ID);

        Config config = new Config();
        config.setNumWorkers(1);
        config.setNumAckers(1);
        LocalCluster cluster = new LocalCluster();

        cluster.submitTopology(TOPOLOGY_NAME, config, builder.createTopology());
        Thread.sleep(30*1000);
        cluster.killTopology(TOPOLOGY_NAME);
        cluster.shutdown();
    }
}

3.2 Analyzing the results

The counts recorded by the emitter itself print as follows (the blank key with count 203 comes from the leading space in the second sentence, which split(" ") turns into an empty token):

--- Emitter COUNTS ---
 : 203
A : 404
MapReduce : 391
One : 188
Storm : 391
The : 188
a : 1187
analogous : 203
and : 201
application : 188
are : 201
bolts : 201
connected : 201
course : 219
difference : 188
eventually : 188
finishes : 188
for : 188
forever : 219
graph : 201
groupings : 201
into : 188
is : 780
it : 219
job : 391
key : 188
kill : 219
logic : 188
of : 420
or : 219
packaged : 188
realtime : 188
runs : 219
spouts : 201
stream : 201
that : 389
to : 203
topology : 811
until : 219
whereas : 219
with : 201
you : 219
--------------

Storm's own counts, printed by ReportBolt:

--- FINAL COUNTS ---
 : 203
A : 404
MapReduce : 391
One : 188
Storm : 391
The : 188
a : 1187
analogous : 203
and : 201
application : 188
are : 201
bolts : 201
connected : 201
course : 219
difference : 188
eventually : 188
finishes : 188
for : 188
forever : 219
graph : 201
groupings : 201
into : 188
is : 780
it : 219
job : 391
key : 188
kill : 219
logic : 188
of : 420
or : 219
packaged : 188
realtime : 188
runs : 219
spouts : 201
stream : 201
that : 389
to : 203
topology : 811
until : 219
whereas : 219
with : 201
you : 219
--------------

Comparing the two outputs: even though the splitting bolt threw exceptions, the reliability mechanism made the two sets of counts match exactly, which shows the mechanism works. Note, however, that it does not guarantee that a tuple is processed exactly once. A failure can happen at any point; for example, if the data is stored successfully but the acknowledgment times out, replaying the failed tuple leads to duplicate processing. What the mechanism does guarantee is that each tuple is processed at least once. For exactly-once processing you need Storm's transactional spouts, which are nowadays implemented by Trident.
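
For reference, a minimal Trident word count might look like the sketch below (assuming the org.apache.storm 1.x API; the sentenceSpout parameter is an illustrative stand-in for a transactional spout, not code from this post). Trident processes tuples in batches and versions its state updates per batch, which is what yields exactly-once state semantics:

import org.apache.storm.generated.StormTopology;
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.trident.operation.BaseFunction;
import org.apache.storm.trident.operation.TridentCollector;
import org.apache.storm.trident.operation.builtin.Count;
import org.apache.storm.trident.spout.ITridentSpout;
import org.apache.storm.trident.testing.MemoryMapState;
import org.apache.storm.trident.tuple.TridentTuple;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class TridentWordCountSketch {

    // Splits the "sentence" field into individual "word" tuples.
    public static class Split extends BaseFunction {
        @Override
        public void execute(TridentTuple tuple, TridentCollector collector) {
            for (String word : tuple.getString(0).split(" ")) {
                collector.emit(new Values(word));
            }
        }
    }

    public static StormTopology build(ITridentSpout<?> sentenceSpout) {
        TridentTopology topology = new TridentTopology();
        topology.newStream("sentences", sentenceSpout)
                .each(new Fields("sentence"), new Split(), new Fields("word"))
                .groupBy(new Fields("word"))
                // With a transactional spout, persistentAggregate applies each
                // batch's updates to the backing state exactly once.
                .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"));
        return topology.build();
    }
}

MemoryMapState is an in-memory state meant for demos; a real deployment would back persistentAggregate with a transactional state over an external store.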

3.3 Analyzing the replay logs

The spout and the split bolt both log the failures. The log excerpt below helps build intuition for how failed tuples are replayed:

[Thread-30-split-bolt-executor[9 9]] com.foo.bolt.WordSplitBolt [54] - Simulated failure, sourceStreamId:default, messageId:{-897755601482291185=1436976737464555631}, sentence:whereas a topology runs forever or until you kill it of course 
[Thread-36-split-bolt-executor[10 10]] com.foo.bolt.WordSplitBolt [54] - Simulated failure, sourceStreamId:default, messageId:{1355559750003096469=8699297777170365670}, sentence:One key difference is that a MapReduce job eventually finishes 
[Thread-36-split-bolt-executor[10 10]] com.foo.bolt.WordSplitBolt [54] - Simulated failure, sourceStreamId:default, messageId:{-6711225109969092935=5915661248356017371}, sentence:A topology is a graph of spouts and bolts that are connected with stream groupings
[Thread-18-split-bolt-executor[8 8]] com.foo.bolt.WordSplitBolt [54] - Simulated failure, sourceStreamId:default, messageId:{-692291341444179168=-4371551265892401171}, sentence: A Storm topology is analogous to a MapReduce job 
[Thread-30-split-bolt-executor[9 9]] com.foo.bolt.WordSplitBolt [54] - Simulated failure, sourceStreamId:default, messageId:{-7981624255431253628=6508027834482650707}, sentence:A topology is a graph of spouts and bolts that are connected with stream groupings
[Thread-18-split-bolt-executor[8 8]] com.foo.bolt.WordSplitBolt [54] - Simulated failure, sourceStreamId:default, messageId:{-9071636350309294887=-1076991837303452874}, sentence:A topology is a graph of spouts and bolts that are connected with stream groupings

[Thread-22-sentence-spout-executor[7 7]] com.foo.bolt.WordSplitBolt [76] - Replaying failed tuple: messageId:239b1155-0eec-4a11-a8aa-3368ab947cee, values:[whereas a topology runs forever or until you kill it of course ]
[Thread-22-sentence-spout-executor[7 7]] com.foo.bolt.WordSplitBolt [76] - Replaying failed tuple: messageId:bae350f5-8548-4e6b-852d-7280d03e0ea2, values:[One key difference is that a MapReduce job eventually finishes ]
[Thread-22-sentence-spout-executor[7 7]] com.foo.bolt.WordSplitBolt [76] - Replaying failed tuple: messageId:22821cab-9fd0-4532-b731-2b995b98a381, values:[ A Storm topology is analogous to a MapReduce job ]
[Thread-22-sentence-spout-executor[7 7]] com.foo.bolt.WordSplitBolt [76] - Replaying failed tuple: messageId:1195ba50-d2f3-4cea-9eab-e426ffbb5bf9, values:[A topology is a graph of spouts and bolts that are connected with stream groupings]
[Thread-22-sentence-spout-executor[7 7]] com.foo.bolt.WordSplitBolt [76] - Replaying failed tuple: messageId:42d14bdc-8e42-4eb6-81ec-3496a9dc855d, values:[A topology is a graph of spouts and bolts that are connected with stream groupings]
[Thread-22-sentence-spout-executor[7 7]] com.foo.bolt.WordSplitBolt [76] - Replaying failed tuple: messageId:20f77240-21bb-4e65-aee7-6e2918857a63, values:[A topology is a graph of spouts and bolts that are connected with stream groupings]

Of course, these lines do not appear in this exact order in the raw log. They show that every failed tuple really was replayed. The split bolt throws in two cases (its 20th and 200th input), and since its parallelism is 3, a total of six exceptions occurred.

PS: the complete code is at https://github.com/Json-Lin/storm-practice/tree/master/guaranteed-word-count
