(04) A simple example combining Storm and Kafka

  As mentioned in the earlier post on Storm's basic concepts, a Storm spout is expected to pull in a steady, uninterrupted stream of data. A message queue, a distributed in-memory system, or an in-memory database is therefore a natural choice of data source. For this reason, the Storm release package also ships an integration jar that lets a Storm application read data from Kafka. The steps below record how to use it.

  This post builds on the Storm programming example from the previous post.

  1. Add the extra jar package

  Add storm-kafka-0.9.2-incubating.jar to the Java project. The jar can be found under apache-storm-0.9.2-incubating/external/storm-kafka.

  2. Modify the assembly class

 1 package demo;
 2 
 3 import java.util.UUID;
 4 
 5 import backtype.storm.Config;
 6 import backtype.storm.StormSubmitter;
 7 import backtype.storm.generated.StormTopology;
 8 import backtype.storm.spout.SchemeAsMultiScheme;
 9 import backtype.storm.topology.TopologyBuilder;
10 import storm.kafka.BrokerHosts;
11 import storm.kafka.KafkaSpout;
12 import storm.kafka.SpoutConfig;
13 import storm.kafka.StringScheme;
14 import storm.kafka.ZkHosts;
15 
16 // Assemble the components and submit the topology to the Storm cluster
17 public class SubmitClient {
18 
19     public static void main(String[] args) throws Exception {
20         // Get a topology builder
21         TopologyBuilder builder = new TopologyBuilder();
22         // Specify our spout
23         builder.setSpout("DataSource-Spout", createKafkaSpout());
24         // Specify the bolts; each bolt also declares its upstream data source
25         builder.setBolt("boltA", new MyBoltA()).shuffleGrouping("DataSource-Spout");
26         builder.setBolt("boltB", new MyBoltB()).shuffleGrouping("boltA");
27         // Build the concrete topology
28         StormTopology phoneTopo = builder.createTopology();
29         // Set the task parameters
30         Config config = new Config();
31         // Ask the cluster to allocate 6 Storm workers for this topology
32         config.setNumWorkers(6);
33         // Submit the topology
34         StormSubmitter.submitTopology("mystormdemo", config, phoneTopo);
35     }
36 
37     // Read messages from the Kafka messaging system
38     private static KafkaSpout createKafkaSpout() {
39         BrokerHosts brokerHosts = new ZkHosts("192.168.7.151:2181,192.168.7.152:2181,192.168.7.153:2181");
40         SpoutConfig spoutConfig = new SpoutConfig(brokerHosts, "mydemo1", "/mydemo1", UUID.randomUUID().toString());
41         spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
42         // Return a KafkaSpout built from this configuration
43         return new KafkaSpout(spoutConfig);
44     }
45 }

  Line 23 now designates the Kafka spout, which is created by the new createKafkaSpout() method.
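  MyBoltA and MyBoltB are the bolts from the previous post and are not repeated here. For context, below is a minimal, hypothetical sketch of what such a bolt could look like: with StringScheme, the KafkaSpout emits each Kafka message as a one-field string tuple, which the bolt reads, prints, and passes on.

package demo;

import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

// Hypothetical stand-in for the MyBoltA of the previous post:
// log each Kafka message and pass it downstream to the next bolt.
public class MyBoltA extends BaseBasicBolt {

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        // StringScheme makes the KafkaSpout emit one-field string tuples
        String line = input.getString(0);
        System.out.println("boltA received: " + line);
        collector.emit(new Values(line));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("line"));
    }
}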

  3. Package the files and send them to the server

  Package the four class files into stormDemo.jar and upload it to the Storm server; here it is stored temporarily under /usr/local/test/storm.

  4. Add the extra jar packages on the server

  Add the Kafka-related jars to the Storm service by copying them from kafka_2.9.2-0.8.1.1/libs into apache-storm-0.9.2-incubating/lib:

[root@localhost storm-kafka]# cp storm-kafka-0.9.2-incubating.jar /usr/local/apache-storm-0.9.2-incubating/lib/
[root@localhost libs]# cp kafka_2.9.2-0.8.1.1.jar /usr/local/apache-storm-0.9.2-incubating/lib/
[root@localhost libs]# cp scala-library-2.9.2.jar /usr/local/apache-storm-0.9.2-incubating/lib/
[root@localhost libs]# cp metrics-core-2.2.0.jar /usr/local/apache-storm-0.9.2-incubating/lib/
[root@localhost libs]# cp snappy-java-1.0.5.jar /usr/local/apache-storm-0.9.2-incubating/lib/
[root@localhost libs]# cp zkclient-0.3.jar /usr/local/apache-storm-0.9.2-incubating/lib/
[root@localhost libs]# cp log4j-1.2.15.jar /usr/local/apache-storm-0.9.2-incubating/lib/
[root@localhost libs]# cp slf4j-api-1.7.2.jar /usr/local/apache-storm-0.9.2-incubating/lib/
[root@localhost libs]# cp jopt-simple-3.2.jar /usr/local/apache-storm-0.9.2-incubating/lib/

  5. Start the related programs

  First start the Zookeeper, Kafka, and Storm services, and then start the Kafka producer client. For the individual steps, refer to the earlier posts:

  Zookeeper cluster installation

  Kafka single-machine multi-broker (pseudo-distributed) basic configuration

  Storm stand-alone installation and configuration

  Example of a Java program connecting to Kafka (a minimal producer sketch follows below)
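  For convenience, here is a minimal producer sketch (not the exact code from that post) that writes a few test messages to the mydemo1 topic using the old producer API bundled with kafka_2.9.2-0.8.1.1. The broker address is an assumption for this environment; adjust it to your own broker list.

package demo;

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

// Minimal sketch of a Kafka 0.8 producer feeding the "mydemo1" topic read by the spout.
public class SimpleProducer {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("metadata.broker.list", "192.168.7.151:9092"); // assumed broker host:port
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));

        // Each message sent here becomes a tuple emitted by the KafkaSpout
        for (int i = 0; i < 10; i++) {
            producer.send(new KeyedMessage<String, String>("mydemo1", "test message " + i));
        }
        producer.close();
    }
}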

  Then execute the following command to submit the topology:

[root@localhost apache-storm-0.9.2-incubating]# bin/storm jar /usr/local/test/storm/stormDemo.jar demo.SubmitClient
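  As an aside, if you only want to verify the topology on a development machine before submitting it to the cluster, Storm's local mode can be used instead of StormSubmitter. A rough sketch follows (a hypothetical LocalSubmitClient, not part of the original project; it reuses the same spout configuration and bolts as SubmitClient):

package demo;

import java.util.UUID;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.TopologyBuilder;
import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

// Hypothetical local-mode runner: test the same topology without a Storm cluster.
public class LocalSubmitClient {

    public static void main(String[] args) throws Exception {
        BrokerHosts brokerHosts = new ZkHosts("192.168.7.151:2181,192.168.7.152:2181,192.168.7.153:2181");
        SpoutConfig spoutConfig = new SpoutConfig(brokerHosts, "mydemo1", "/mydemo1", UUID.randomUUID().toString());
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("DataSource-Spout", new KafkaSpout(spoutConfig));
        builder.setBolt("boltA", new MyBoltA()).shuffleGrouping("DataSource-Spout");
        builder.setBolt("boltB", new MyBoltB()).shuffleGrouping("boltA");

        Config config = new Config();
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("mystormdemo", config, builder.createTopology());

        // Let the topology run for a minute, then shut the local cluster down
        Thread.sleep(60 * 1000);
        cluster.shutdown();
    }
}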

 
