Getting Started with Storm


I. Concepts

1. Storm is a framework for real-time stream computation over large volumes of data
2. Features

  • Supports all kinds of real-time scenarios: processing messages and updating databases in real time; querying or computing over live data streams and pushing the latest results to clients for display; parallelizing expensive queries via distributed RPC
  • Highly scalable: easy to grow; just add machines and raise the parallelism
  • Guarantees that no data is lost (see the sketch after this list)
  • Extremely robust
  • Easy to use: the core semantics are very simple
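
A minimal sketch (not from the original post; the bolt name is illustrative, and it reuses the org.apache.storm imports of the full example in section 5) of how the no-data-loss guarantee is used in practice: a bolt anchors every emitted tuple to its input tuple and then acks the input, so if anything downstream fails, Storm calls fail() on the spout and the original message can be replayed.

public static class ReliableSplitBolt extends BaseRichBolt {

    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        //Anchored emit: each output tuple is tied to the input tuple,
        //so a downstream failure triggers a replay from the spout
        for (String word : input.getStringByField("sentence").split(" ")) {
            collector.emit(input, new Values(word));
        }
        //Ack only after all outputs have been emitted
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}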

3. Processing flow

(Figure: Storm's processing flow. In outline, a spout pulls data from a source and emits it as tuples into the topology, where bolts process the stream step by step.)

4. Terminology

  • Parallelism: measured in tasks; each running copy of a spout's or bolt's code lives in a task
  • Stream grouping: defines how data flows between tasks; the two common strategies are shown in the snippet after this list
    • Shuffle Grouping: tuples are emitted to the downstream tasks at random
    • Fields Grouping: tuples are routed by one or more fields, so tuples with the same field value always reach the same task
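
As a minimal wiring sketch (component names are illustrative; the spout and bolt classes are the ones defined in the full example in section 5), the two strategies look like this:

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("sentences", new RandomSentenceSpout());
//Shuffle grouping: any "split" task may receive any sentence
builder.setBolt("split", new SplitSentence(), 4).shuffleGrouping("sentences");
//Fields grouping on "word": the same word always reaches the same "count" task,
//which is what makes counting in a per-task in-memory map correct
builder.setBolt("count", new WordCount(), 4).fieldsGrouping("split", new Fields("word"));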

5. Getting-started example

package com.mmc.storm;

import lombok.extern.slf4j.Slf4j;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.AlreadyAliveException;
import org.apache.storm.generated.AuthorizationException;
import org.apache.storm.generated.InvalidTopologyException;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

/**
 * @description: word count
 * @author: mmc
 * @create: 2019-10-21 22:47
 **/
@Slf4j
public class WordCountTopology {

    /**
     * Responsible for pulling data from the data source
     */
    public static class RandomSentenceSpout extends BaseRichSpout{

        private SpoutOutputCollector collector;

        private Random random;

        @Override
        public void open(Map map, TopologyContext topologyContext, SpoutOutputCollector spoutOutputCollector) {
            this.collector=spoutOutputCollector;
            this.random=new Random();
        }

        /**
         * Runs inside a task, which calls this method in a loop; each call can emit
         * new data, and the emitted tuples form a stream (the sleep below just
         * throttles the emit rate)
         */
        @Override
        public void nextTuple() {
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            String[] sentences = new String[]{"the cow jumped over the moon", "an apple a day keeps the doctor away",
                    "four score and seven years ago", "snow white and the seven dwarfs", "i am at two with nature"};
            String sentence=sentences[random.nextInt(sentences.length)];
            log.info("发送一段句子:"+sentence);
            //这个Values,你可以理解为是构建一个Tuple,tuple是最小的数据单位
            collector.emit(new Values(sentence));
        }

        /**
         * Declares the field names of the tuples emitted by this spout
         * @param outputFieldsDeclarer
         */
        @Override
        public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
            outputFieldsDeclarer.declare(new Fields("sentence"));
        }
    }


    /**
     * Each bolt's code is likewise shipped to tasks for execution
     */
    public static class SplitSentence extends BaseRichBolt{

        private OutputCollector collector;

        @Override
        public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
            this.collector=outputCollector;
        }

        /**
         * Called once for each tuple this bolt receives
         * @param tuple
         */
        @Override
        public void execute(Tuple tuple) {
            String sentence=tuple.getStringByField("sentence");
            String[] words=sentence.split(" ");
            for (String word:words){
                collector.emit(new Values(word));
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
            outputFieldsDeclarer.declare(new Fields("word"));
        }
    }


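    /**
     * Counts words in an in-memory map. The fields grouping set up in main() sends the
     * same word to the same task, which keeps the counts consistent; note the map is
     * per-task state and is lost if the worker restarts.
     */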
    public static class WordCount extends BaseRichBolt{

        private OutputCollector collector;

        private Map<String,Long> wordCountMap=new HashMap<>();

        @Override
        public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
            this.collector=outputCollector;
        }

        @Override
        public void execute(Tuple tuple) {
            String word=tuple.getStringByField("word");
            Long count=wordCountMap.get(word);
            if(count==null){
                count=1L;
            }else {
                count++;
            }
            wordCountMap.put(word,count);
            log.info("【单词计数:】{}出现的次数是{}",word,count);
            collector.emit(new Values(word,count));

        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
            outputFieldsDeclarer.declare(new Fields("word","count"));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        //Wire the spout and bolts together to form a topology
        TopologyBuilder builder=new TopologyBuilder();
        builder.setSpout("RandomSentence",new RandomSentenceSpout(),2);
        builder.setBolt("SplitSentence",new SplientSentence(),5).setNumTasks(10).shuffleGrouping("RandomSentence");
        builder.setBolt("WordCount",new WordCount(),10).setNumTasks(20).
                fieldsGrouping("SplitSentence",new Fields("word"));
        Config config=new Config();

        //Submitted from the command line with a topology name: run on the cluster
        if(args!=null&&args.length>0){
            //spread the topology across 3 worker processes
            config.setNumWorkers(3);
            try {
                StormSubmitter.submitTopology(args[0],config,builder.createTopology());
            } catch (AlreadyAliveException | InvalidTopologyException | AuthorizationException e) {
                e.printStackTrace();
            }
        }else {
            //Local mode: run the topology in-process for development and testing
            config.setMaxTaskParallelism(20);
            LocalCluster cluster=new LocalCluster();
            cluster.submitTopology("WordCountTopology",config,builder.createTopology());
            Thread.sleep(60000);
            cluster.shutdown();
        }
    }
}
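
When the jar is submitted to a real cluster, args[0] becomes the topology name. Assuming the project is packaged as word-count.jar (the jar name here is illustrative), submission would look like:

storm jar word-count.jar com.mmc.storm.WordCountTopology wordCountTopology

The name passed on the command line, wordCountTopology, is the handle that storm kill uses in section II below.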



II. Cluster Deployment

  1. Download Storm
    Download link: www.apache.org/dyn/closer.…
  2. Configure the environment variables
    vi ~/.bashrc
export STORM_HOME=/usr/local/storm
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:$ZOOKEEPER_HOME/bin:$JAVA_HOME/bin:$STORM_HOME/bin

source ~/.bashrc

  3. Edit the configuration
    Open storm/conf/storm.yaml and add the settings below. storm.zookeeper.servers lists the ZooKeeper hosts, nimbus.seeds names the candidate Nimbus hosts, storm.local.dir is a local working directory for the Storm daemons, and each port under supervisor.slots.ports allows one worker process on that supervisor (four ports means up to four workers per machine).

storm.zookeeper.servers:
     - "192.168.1.12"
     - "192.168.1.13"
     - "192.168.1.14"

nimbus.seeds: ["192.168.1.12"]

storm.local.dir: "/var/storm"
 
supervisor.slots.ports:
   - 6700
   - 6701
   - 6702
   - 6703

  4. Create the local directory on each node

mkdir /var/storm

  5. Start the cluster

  • Start ZooKeeper first
  • On one node, start Nimbus: storm nimbus >/dev/null 2>&1 &
  • On all three nodes, start a supervisor: storm supervisor >/dev/null 2>&1 &
  • On one node, start the UI: storm ui >/dev/null 2>&1 &
  • On two of the supervisor nodes, start the logviewer: storm logviewer >/dev/null 2>&1 &
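
If startup succeeded, the Storm UI (served on ui.port, 8080 by default) should list one Nimbus, three supervisors, and four slots per supervisor, matching the supervisor.slots.ports configuration above.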

  6. Shut down a topology: storm kill wordCountTopology (the name must match the one given at submission)


Reprinted from juejin.im/post/7149900895164039204