Getting Started with Storm (Part 1)

Copyright notice: this is an original post by the author; do not reproduce without permission. https://blog.csdn.net/u011311291/article/details/85773269

I. Storm is a real-time stream-processing framework
This shows in its two core components:
Spout: a source of data
Bolt: a processing node
In short, spouts continuously supply data and bolts continuously process it, which forms a data-processing stream.

II. A word-count example
SentenceSpout (spout, emits sentences) -> SplitSentenceBolt (bolt, splits sentences into words) -> WordCountBolt (bolt, counts the split words) -> ReportBolt (bolt, outputs the counts)
The whole SentenceSpout -> SplitSentenceBolt -> WordCountBolt -> ReportBolt pipeline forms a single concept: a topology.
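Before diving into the Storm code, here is a plain-Java simulation of the same four-stage pipeline (no Storm involved; class and method names are made up for illustration) to make the data flow concrete:

```java
import java.util.HashMap;
import java.util.Map;

public class PipelineSimulation {
    public static Map<String, Long> run(String[] sentences) {
        Map<String, Long> counts = new HashMap<>();
        for (String sentence : sentences) {            // SentenceSpout: emit sentences
            for (String word : sentence.split(" ")) {  // SplitSentenceBolt: split into words
                counts.merge(word, 1L, Long::sum);     // WordCountBolt: update the running count
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Long> counts = run(new String[] {
            "my dog has fleas",
            "the dog ate my homework"
        });
        // ReportBolt: print the final counts
        counts.forEach((word, count) -> System.out.println(word + ": " + count));
    }
}
```

Storm's value over this loop is that each stage runs as independently scalable, distributed components connected by streams of tuples.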
SentenceSpout.java

package com.zte.StormTest;

import java.util.Map;

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;


public class SentenceSpout extends BaseRichSpout
{
	private static final long serialVersionUID = -2521640424426565301L;
	
	private SpoutOutputCollector collector;
	private String[] sentences = {
			"my dog has fleas",
			"i like cold beverages",
			"the dog ate my homework",
			"don't have a cow man",
			"i don't think i like fleas"
	};                          
	private int index = 0;
	                 
	//Storm calls nextTuple() repeatedly in a loop; each call emits one
	//sentence, cycling back to the start of the array
	@Override
	public void nextTuple() {
		this.collector.emit(new Values(sentences[index]));
		index++;
		if(index >= sentences.length)
		{ 
			index=0;
		}
	}
	
	//Called once when the spout component is initialized
	//Map contains the Storm configuration
	//TopologyContext provides information about the topology's components, e.g. the current component's id
	//SpoutOutputCollector is used to emit tuples
	@Override
	public void open(Map config, TopologyContext context, SpoutOutputCollector collector) {
		this.collector  = collector;
	}

	@Override
	public void declareOutputFields(OutputFieldsDeclarer declarer) {
		 declarer.declare(new Fields("sentence"));
	}
	
}

SplitSentenceBolt.java

package com.zte.StormTest;

import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;


public class SplitSentenceBolt extends BaseRichBolt
{
	private static final long serialVersionUID = 5516446565262406488L;
	
	private OutputCollector collector;
	
	@Override
	public void execute(Tuple tuple) 
	{
		String sentence = tuple.getStringByField("sentence");
		String[] words = sentence.split(" ");
		for(String word : words)
		{
			this.collector.emit(new Values(word));
		}
	}

	//Called when the bolt is initialized; use it to set up resources the bolt needs, such as database connections
	@Override
	public void prepare(Map config, TopologyContext context, OutputCollector collector) 
	{
		this.collector = collector;
	}

	@Override
	public void declareOutputFields(OutputFieldsDeclarer declarer) 
	{
		declarer.declare(new Fields("word"));
	}
}

WordCountBolt.java

package com.zte.StormTest;

import java.util.HashMap;
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;


public class WordCountBolt extends BaseRichBolt
{
	private static final long serialVersionUID = 3533537921679412895L;
	
	private OutputCollector collector;
	private HashMap<String,Long> counts = null;
	
	@Override
	public void execute(Tuple tuple) 
	{
		String word = tuple.getStringByField("word");
		Long count = this.counts.get(word);
		if(count == null)
		{
			count = 0L;
		}
		count++;
		this.counts.put(word, count);
		this.collector.emit(new Values(word,count));
		System.out.println("word:"+word+" count:"+count);
	}

	@Override
	public void prepare(Map config, TopologyContext context, OutputCollector collector) 
	{
		this.collector = collector;
		this.counts = new HashMap<String,Long>();
	}

	@Override
	public void declareOutputFields(OutputFieldsDeclarer declarer) 
	{
		declarer.declare(new Fields("word","count"));
	}
}
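The topology below also wires in a ReportBolt, whose source the original post does not include. A minimal sketch consistent with the wiring (it consumes the ("word","count") pairs, keeps the latest count per word, and prints a report when the topology shuts down; requires the storm-core dependency) might look like this:

ReportBolt.java (hypothetical sketch)

```java
package com.zte.StormTest;

import java.util.HashMap;
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class ReportBolt extends BaseRichBolt
{
	private static final long serialVersionUID = 1L;

	private HashMap<String,Long> counts = null;

	@Override
	public void prepare(Map config, TopologyContext context, OutputCollector collector)
	{
		this.counts = new HashMap<String,Long>();
	}

	@Override
	public void execute(Tuple tuple)
	{
		//Keep the most recent count seen for each word
		this.counts.put(tuple.getStringByField("word"), tuple.getLongByField("count"));
	}

	@Override
	public void declareOutputFields(OutputFieldsDeclarer declarer)
	{
		//Terminal bolt: declares no output streams
	}

	@Override
	public void cleanup()
	{
		//Called when the topology is killed (only reliably invoked in local mode)
		System.out.println("--- FINAL COUNTS ---");
		for(Map.Entry<String,Long> entry : counts.entrySet())
		{
			System.out.println(entry.getKey() + " : " + entry.getValue());
		}
	}
}
```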

WordCountTopology.java

package com.zte.StormTest;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

public class WordCountTopology 
{
	private static final String SENTENCE_SPOUT_ID = "sentence-spout";
	private static final String SPLIT_BOLT_ID = "split-bolt";
	private static final String COUNT_BOLT_ID = "count-bolt";
	private static final String REPORT_BOLT_ID = "report-bolt";
	private static final String TOPOLOGY_NAME = "word-count-topology";
	
	public static void main(String[] args) throws Exception
	{
		SentenceSpout spout = new SentenceSpout();
		SplitSentenceBolt splitBolt = new SplitSentenceBolt();
		WordCountBolt countBolt = new WordCountBolt();
		ReportBolt reportBolt = new ReportBolt();
		
		TopologyBuilder builder = new TopologyBuilder();
		builder.setSpout(SENTENCE_SPOUT_ID, spout);
		builder.setBolt(SPLIT_BOLT_ID, splitBolt).shuffleGrouping(SENTENCE_SPOUT_ID);
		builder.setBolt(COUNT_BOLT_ID, countBolt).fieldsGrouping(SPLIT_BOLT_ID, new Fields("word"));
		builder.setBolt(REPORT_BOLT_ID,reportBolt).globalGrouping(COUNT_BOLT_ID);
		
		Config config = new Config();
		
		//Run locally
		LocalCluster cluster = new LocalCluster();
		cluster.submitTopology(TOPOLOGY_NAME, config, builder.createTopology());
		//When running locally, sleep before shutting down: components take some time to stop, and this lets you see the count output
		Thread.sleep(5000); 
		cluster.killTopology(TOPOLOGY_NAME);
		Thread.sleep(30000); 
		cluster.shutdown();
		
		//To deploy to a real Storm cluster, use StormSubmitter.submitTopology instead
//		StormSubmitter.submitTopology(TOPOLOGY_NAME,config, builder.createTopology());
		
		
	}
}

pom.xml

<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.zte.apt</groupId>
	<artifactId>StormTest</artifactId>
	<version>0.0.1-SNAPSHOT</version>

	<name>StormTest</name>
	<!-- FIXME change it to the project's website -->
	<url>http://www.example.com</url>

	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		<maven.compiler.source>1.8</maven.compiler.source>
		<maven.compiler.target>1.8</maven.compiler.target>
	</properties>

	<dependencies>
		<dependency>
			<groupId>org.apache.storm</groupId>
			<artifactId>storm-core</artifactId>
			<version>1.1.1</version>
			<scope>provided</scope>
		</dependency>

	</dependencies>

	<build>
		<pluginManagement><!-- lock down plugins versions to avoid using Maven 
				defaults (may be moved to parent pom) -->
			<plugins>
				
			</plugins>
		</pluginManagement>
	</build>
</project>

III. Basic Storm concepts
1. Nodes (servers): the machines configured into a Storm cluster; a cluster consists of one or more worker nodes.
2. Workers (JVM processes): independent JVM processes running on a node. Each node can run one or more workers, and each worker is bound to exactly one topology.
Set the number of worker processes with, e.g., Config.setNumWorkers(3)
3. Executors (threads): Java threads running inside a worker's JVM. Multiple tasks can be assigned to the same executor; by default Storm assigns one task per executor.
Set the number of executors with, e.g., builder.setBolt(SPLIT_BOLT_ID, splitBolt, 2)
4. Tasks (spout/bolt instances): a task is an instance of a spout or bolt; its nextTuple() and execute() methods are called by an executor thread.
Set the number of tasks with builder.setBolt(SPLIT_BOLT_ID, splitBolt, 2).setNumTasks(4);

IV. Stream grouping strategies
1. Shuffle grouping: tuples are distributed randomly; the total received across all of a bolt's tasks equals the total emitted.
2. Fields grouping: tuples are routed by the values of the specified fields; tuples with equal field values always go to the same task.
For example, in word counting the routing is fixed: a->bolt1, b->bolt2, c->bolt3, d->bolt1.
3. All grouping: every task receives a copy of every tuple; if 10 tuples are emitted, each task receives all 10.
4. Direct grouping: the emitting component (spout/bolt) calls emitDirect to decide which task receives the tuple; it can only be used on streams declared as direct.
For example, a spout can direct certain data to be processed only by the bolt task with taskId = 4.
5. Global grouping: all tuples go to the task with the lowest task id, so setting a higher parallelism has no effect for that bolt.
6. None grouping: currently equivalent to shuffle grouping.
7. CustomStreamGrouping: implement your own grouping logic.
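Conceptually, fields grouping routes a tuple by hashing the grouping field's value modulo the number of target tasks, which is why equal values always land on the same task. A self-contained sketch of that idea (illustrative only, not Storm's actual implementation):

```java
public class FieldsGroupingSketch {
    // Pick a target task index for a given field value; the same value
    // always maps to the same task, so one task sees all tuples for a word
    public static int chooseTask(String fieldValue, int numTasks) {
        return Math.floorMod(fieldValue.hashCode(), numTasks);
    }

    public static void main(String[] args) {
        int numTasks = 3;
        for (String word : new String[] { "dog", "fleas", "dog" }) {
            System.out.println(word + " -> task " + chooseTask(word, numTasks));
        }
        // "dog" maps to the same task index both times
    }
}
```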

V. Running a topology
1. Locally: use LocalCluster and run the main class directly (e.g. from Eclipse).
2. On a cluster: use StormSubmitter.submitTopology, package the project as a jar (the storm dependency is provided, so don't bundle it), then submit with:
bin/storm jar WordCount.jar com.zte.StormTest.WordCountTopology

VI. Installing Storm
Make sure JDK 1.8 is installed.
1. Install ZooKeeper
Download and unpack the ZooKeeper archive.
(1) Set up the configuration: rename conf/zoo_sample.cfg to zoo.cfg; the default client port is 2181.
(2) Start ZooKeeper with bin/zkServer.sh start
2. Install Storm
Unpack the archive.
(1) The bin directory contains the startup scripts.
(2) The conf directory holds the configuration; conf/storm.yaml contains the settings, including the ZooKeeper servers (default: localhost). For example:

storm.zookeeper.servers:
   - "storm-01.test.com" (hostname or IP, e.g. 10.42.27.1)
   - "storm-02.test.com"
   - "storm-03.test.com"

nimbus.seeds configures the master (nimbus) host(s).
Once everything is configured, copy the whole Storm directory to the other servers.
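Putting the settings together, a minimal storm.yaml for a three-node cluster might look like the following (hostnames are illustrative, and storm.local.dir is an optional extra setting not mentioned above):

```yaml
storm.zookeeper.servers:
  - "storm-01.test.com"
  - "storm-02.test.com"
  - "storm-03.test.com"

nimbus.seeds: ["storm-01.test.com"]

# Optional: directory where nimbus and supervisors keep local state
storm.local.dir: "/var/storm"
```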
(3) Start the master node: bin/storm nimbus &
(4) Start the worker nodes: bin/storm supervisor &
(5) Start the web UI: bin/storm ui &
(6) Start the log viewer: bin/storm logviewer &
Then open the UI at ip:8080/index.html, e.g. 192.168.1.104:8080/index.html
