Storm launch configuration

1. Install Storm
    wget http://www.apache.org/dyn/closer.lua/storm/apache-storm-1.0.3/apache-storm-1.0.3.tar.gz
    tar xzvf ./apache-storm-1.0.3.tar.gz
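
If the mirror-selector link above serves an HTML page instead of the archive, the release can also be fetched from the Apache archive. The following is a minimal sketch that also moves the unpacked directory to /opt, the install location assumed in the rest of this guide:

    # Sketch: download from the Apache archive and place the install under /opt (assumed path)
    wget https://archive.apache.org/dist/storm/apache-storm-1.0.3/apache-storm-1.0.3.tar.gz
    tar xzvf apache-storm-1.0.3.tar.gz
    sudo mv apache-storm-1.0.3 /opt/
    cd /opt/apache-storm-1.0.3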

2. Create the data directory
    Run mkdir data in the Storm root directory, then pwd to view the directory path.
    This prepares the local directory used by the Storm configuration:
    storm.local.dir: "/opt/apache-storm-1.0.3/data"    [Storm local directory]

3. Configure the conf/storm.yaml configuration file in the root directory
    Note that there must be a space after each colon, before the configuration value. The final configuration is as follows:






storm.zookeeper.servers:
- "master"
- "slave"

drpc.servers:
- "master"

supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
- 6704
- 6705
- 6706
- 6707

ui.port: 8081

storm.local.dir: "/opt/apache-storm-1.0.3/data"



3.1 Configure the ZooKeeper servers
master and slave must correspond to the hostnames of the machines. Hostnames are usually mapped in /etc/hosts.

For example:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
127.0.0.1 localhost.localdomain localhost
192.168.1.195 debian.localdomain debian
In general, each line of /etc/hosts defines one host and consists of three parts separated by spaces. Lines starting with # are comments and are not interpreted by the system.
The first part: the network IP address.
The second part: the fully qualified name, hostname.domainname, with a dot between the hostname and the domain name.
The third part: the host alias, which is usually just the short hostname.
A line can also have only two parts, i.e. the IP address and the hostname; for example: 192.168.1.195 master
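
As a quick sanity check (a sketch, assuming the hostnames "master" and "slave" used in storm.yaml), verify on every node that the names resolve:

    # Confirm that the configured hostnames resolve via /etc/hosts or DNS
    getent hosts master
    getent hosts slave
    # Optionally confirm the nodes are reachable
    ping -c 1 master
    ping -c 1 slave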


Then set in storm.yaml:
storm.zookeeper.servers:
     - "master"
     - "slave"
drpc.servers:
     - "master"


3.2 Configure the slot port numbers
   supervisor.slots.ports: [slot port numbers]. When a topology is submitted, it runs in workers (here two per topology, one running the spout and one the bolt), and each worker occupies one of these ports; a quick port check is sketched after the list below.
supervisor.slots.ports:
 - 6700
 - 6701
 - 6702
 - 6703
 - 6704
 - 6705
 - 6706
 - 6707
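
Once topologies have been submitted and workers are running, you can check which of these slot ports are actually in use on a supervisor node (a sketch; the port range matches the eight slots configured above):

    # On a supervisor node: list worker processes listening on the slot ports 6700-6707
    ss -tlnp | grep -E ':670[0-7]'
    # or, on systems without ss:
    netstat -tlnp | grep -E ':670[0-7]'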


3.3 Configure the Storm UI port number
ui.port: 8081


3.4 Configure the Storm local directory
  Create the data folder:
    run mkdir data under the root directory apache-storm-1.0.3;
    run pwd to view the directory path.
  Then add the path to the configuration file: storm.local.dir: "/opt/apache-storm-1.0.3/data"
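
For example (assuming the installation lives under /opt as above):

    cd /opt/apache-storm-1.0.3
    mkdir -p data
    pwd                                    # copy this absolute path into storm.yaml
    grep storm.local.dir conf/storm.yaml   # confirm the entry ends in /data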

4. Start and test
Storm is a fail-fast system, which means its processes can stop at any time due to an error. Because of Storm's design, it is safe for them to stop at any time and to resume correctly when restarted. This is why Storm keeps its processes stateless: if Nimbus or the supervisors restart, running topologies are unaffected.

1. First copy the configured Storm installation to each child node, and install the dependencies required by Storm on each child node.
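For example, the configured directory can be copied with scp (a sketch assuming a node named "slave", the /opt path used above, and a user with write access to /opt on that node):

    # Run on the master: copy the configured installation to a child node
    scp -r /opt/apache-storm-1.0.3 root@slave:/opt/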

2. Configure myid for ZooKeeper and start ZooKeeper.
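A minimal sketch, assuming a ZooKeeper installation under /opt/zookeeper whose dataDir is /opt/zookeeper/data and whose id on this machine is 1 (the id must match the server.N entries in zoo.cfg; repeat with the appropriate id on each ZooKeeper node):

    echo 1 > /opt/zookeeper/data/myid
    /opt/zookeeper/bin/zkServer.sh start
    /opt/zookeeper/bin/zkServer.sh status   # should report leader or follower once a quorum is up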

3. Start the Storm daemons and the UI, using nohup to keep them running in the background. The "&" runs a command in the background; without it, the shell blocks on the command and nothing after it runs. A combined startup sketch follows the sub-steps below.
1) Start Nimbus
   Run "bin/storm nimbus &" on the master machine, and check whether the configuration reports any errors.
2) Start the Supervisors
   Run "bin/storm supervisor &" on each worker machine. The Supervisor daemon is responsible for starting and stopping worker processes on that machine.
3) Start the UI
   Run "bin/storm ui &" on the master machine to run the Storm UI (a site you can open in a browser that provides diagnostic information about the cluster and its topologies). Enter "http://{nimbus host}:8081" in your browser to access the UI.
4) Start the logviewer
   Run "bin/storm logviewer &" on the master machine to be able to view the worker logs.
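
Putting the above together, the daemons can be started in the background with nohup as described in step 3 (a sketch; the redirection to /dev/null is only an example, and the DRPC line applies because drpc.servers is configured above):

    cd /opt/apache-storm-1.0.3
    # On the master (Nimbus) node:
    nohup bin/storm nimbus    > /dev/null 2>&1 &
    nohup bin/storm ui        > /dev/null 2>&1 &
    nohup bin/storm logviewer > /dev/null 2>&1 &
    nohup bin/storm drpc      > /dev/null 2>&1 &
    # On each worker node:
    nohup bin/storm supervisor > /dev/null 2>&1 &
    # Check that the daemon JVMs are running
    jps

After a short startup delay, http://{nimbus host}:8081 should show the cluster and the configured supervisor slots.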

