Building a first small Storm project with Maven

Disclaimer: This is an original article by the blogger, released under the CC 4.0 BY-SA license. Please include the original source link and this statement when reproducing it.
Original link: https://blog.csdn.net/Romantic_sir/article/details/102709455

1. Create a Maven project in IDEA and configure the pom file

pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.zpark</groupId>
    <artifactId>HelloStorm</artifactId>
    <version>1.0-SNAPSHOT</version>

     <dependencies>
         <!-- Storm-related jars -->
         <dependency>
             <groupId>org.apache.storm</groupId>
             <artifactId>storm-core</artifactId>
             <version>1.1.1</version>
             <!-- storm-core is supplied by the cluster at runtime ("provided");
                  comment out the line below to run in local mode, e.g. from the IDE -->
             <scope>provided</scope>
         </dependency>
     </dependencies>
</project>

2. The Spout, extending BaseRichSpout

package com.zpark;

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

import java.util.Map;
import java.util.Random;

public class Spout extends BaseRichSpout {
    private SpoutOutputCollector spoutOutputCollector;
    private static final String[] words = {"Hadoop", "Storm", "Apache", "Linux", "Nginx", "Tomcat", "Spark"};
    private final Random random = new Random();

    @Override
    public void open(Map map, TopologyContext topologyContext, SpoutOutputCollector spoutOutputCollector) {
        // Called once when the spout is initialized; keep the collector for emitting.
        this.spoutOutputCollector = spoutOutputCollector;
    }

    @Override
    public void nextTuple() {
        // Called repeatedly by Storm; emit one randomly chosen word per call.
        String word = words[random.nextInt(words.length)];
        spoutOutputCollector.emit(new Values(word));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        System.out.println("Declaring output fields...");
        // Name the single field of the tuples this spout emits.
        declarer.declare(new Fields("test"));
    }
}
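As written, Storm calls nextTuple in a tight loop, so the console fills up very fast. A common tutorial tweak (my addition, not in the original article) is to throttle emission with Storm's interruption-safe sleep helper, org.apache.storm.utils.Utils:

import org.apache.storm.utils.Utils;

// Variant of nextTuple above: emit roughly one word per second.
public void nextTuple() {
    Utils.sleep(1000); // pause 1s between emissions (illustrative value)
    spoutOutputCollector.emit(new Values(words[random.nextInt(words.length)]));
}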

3. The Bolt, extending BaseRichBolt

package com.zpark;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;

import java.util.Map;

public class Bolt extends BaseRichBolt {

    @Override
    public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
        // Called once when the bolt is initialized.
        System.out.println("========== print ==========");
    }

    @Override
    public void execute(Tuple tuple) {
        // Called once per incoming tuple; read the single field emitted by the spout.
        String word = (String) tuple.getValue(0);
        String out = "Hello " + word + "!";
        System.out.println(out);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // This bolt is a sink and emits nothing, so there is nothing to declare.
    }
}
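Note that BaseRichBolt leaves acking to you. That is harmless here because the spout emits tuples without message IDs, so Storm does not track them. If the spout later emits with message IDs and you want Storm's reliability guarantees, the bolt has to keep the collector from prepare and ack each tuple. A minimal sketch (not part of the original article; imports as in the Bolt class above):

public class AckingBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map map, TopologyContext context, OutputCollector collector) {
        this.collector = collector; // keep a reference so execute() can ack
    }

    @Override
    public void execute(Tuple tuple) {
        System.out.println("Hello " + tuple.getString(0) + "!");
        collector.ack(tuple); // tell Storm this tuple is fully processed
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // still a sink; nothing to declare
    }
}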

4. The Topology class

package com.zpark;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class App {
    public static void main(String[] args) {

        // Define a topology.
        TopologyBuilder builder = new TopologyBuilder();
        // Set up the spout with one executor (thread), the default.
        builder.setSpout("send", new Spout());
        // Set up the bolt with one executor and one task, reading from the spout.
        builder.setBolt("deal", new Bolt(), 1).setNumTasks(1).shuffleGrouping("send");
        Config conf = new Config();
        conf.put("send", "send");
        try {
            // Run the topology.
            if (args != null && args.length > 0) {
                // With arguments: submit to the cluster, using the first argument as the topology name.
                System.out.println("remote mode");
                StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
            } else {
                // Without arguments: run in local mode.
                System.out.println("local mode");
                LocalCluster cluster = new LocalCluster();
                cluster.submitTopology("work", conf, builder.createTopology());
                Thread.sleep(60000);
                // Shut down the local cluster after one minute.
                cluster.shutdown();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
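One small note on the code above: the conf.put("send", "send") entry is a user-level config value that nothing in this project reads, so the topology behaves the same without it. When submitting to a real cluster you would typically also set the worker count, and while learning it helps to turn on debug logging; both are standard Config setters (the values below are illustrative, not from the original article):

Config conf = new Config();
conf.setNumWorkers(2); // number of worker JVM processes for the topology (illustrative)
conf.setDebug(true);   // log every emitted tuple; noisy, but instructive while learning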

5. Run the program; the console prints output like the following (the original article showed a screenshot here).

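The screenshot is lost; with the English log strings used above, the output would resemble the following (the words are picked at random, so your sequence will differ):

Declaring output fields...
========== print ==========
Hello Storm!
Hello Linux!
Hello Apache!
Hello Tomcat!
Hello Spark!
...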

6. Next, package the project and run it on the Storm cluster servers; when building the jar, do not bundle Storm's own jars into it (that is what the provided scope in the pom is for).
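The packaging command is not shown in the original; with a standard Maven setup (an assumption on my part) it is simply:

mvn clean package

This leaves HelloStorm-1.0-SNAPSHOT.jar under target/. Because storm-core is scoped provided and the project has no other dependencies, this plain jar is all the cluster needs.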

7. Upload the jar to the Storm installation directory on the Linux machines. Then, in the Storm installation directory on the master node, run bin/storm nimbus &, and in the installation directory on each worker node run bin/storm supervisor & to bring up the Storm services for the whole cluster. You can also run bin/storm ui & to start the UI management interface, which makes the results easier to inspect. (In a stand-alone environment, the same services can all be started on the one machine.) At this point, run the command below to launch the project.

This invokes the main method of the App class; since the program already handles its arguments, the topology name can be appended as a parameter. Press Enter to execute: the cluster initializes the job, the task starts after a few seconds, and output scrolls by as it runs (the original article showed the command and its scrolling output as screenshots).
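The exact submit command is not shown in the original; with the artifact name from the pom above, it would typically look like this, run from the Storm installation directory (the topology name hello-topology is just an illustrative choice):

bin/storm jar HelloStorm-1.0-SNAPSHOT.jar com.zpark.App hello-topology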

At this point, the development and test run of this first introductory Storm project is complete. More complex computation logic follows essentially the same model; the Maven project just grows more modules and calls, while the overall workflow stays much the same. We have now stepped through the doorway of Storm stream computing, and the exciting parts ahead remain to be worked out step by step.
