Storm startup steps:
1- Start ZooKeeper (Storm relies on ZooKeeper for coordination)
zkServer.sh start
2- Start Nimbus (starts the master daemon; the master node in the cluster is responsible for distributing code, assigning tasks to worker nodes, and monitoring for host failures)
storm nimbus
3- Start the supervisor (the supervisor daemon runs on each worker node; it monitors the work assigned to its host and starts and stops worker processes as directed by Nimbus)
storm supervisor
4- Start the UI (the monitoring web page; this simply starts a server process called core)
storm ui
5- Kill a topology
storm kill topname
6- Activate a topology
storm activate topname
7- Deactivate a topology
storm deactivate topname
8- List running topologies
storm list
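The topology lifecycle commands above all follow the same pattern. The sketch below only prints each command for a hypothetical topology named wordcount (the name is an assumption for illustration; nothing is submitted to a real cluster):

```shell
#!/bin/bash
# hypothetical topology name, for illustration only
TOPOLOGY=wordcount

# print (not run) each per-topology lifecycle command
for action in activate deactivate kill
do
  echo "storm $action $TOPOLOGY"
done
# storm list takes no topology argument
echo "storm list"
```

Replace the echoes with the real commands once a topology with that name is actually running on the cluster.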
Script to start ZooKeeper on multiple nodes:
#!/bin/bash
echo "start zookeeper server..."
# hosts lists the host names where ZooKeeper is installed
hosts="master node1 node2 node3"
# run zkServer.sh start on each host in a loop
for host in $hosts
do
  echo "--------$host--------"
  ssh $host "source /etc/profile; /home/hadoop/zookeeper-3.4.10/bin/zkServer.sh start"
done
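The loop above relies on shell word splitting: the unquoted $hosts variable expands into one word per host name. A local sketch of the same pattern, with no ssh involved:

```shell
#!/bin/bash
# same host list as the ZooKeeper start script
hosts="master node1 node2 node3"

# unquoted $hosts is split on whitespace, giving one iteration per host
count=0
for host in $hosts
do
  echo "--------$host--------"
  count=$((count + 1))
done
echo "$count hosts"
```

This is why the list must be whitespace-separated and the variable must not be quoted inside the for statement.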
Script to start Storm on the cluster:
#!/bin/bash
echo "start storm server..."
source /etc/profile
# start nimbus and the UI on this (master) node, in the background
storm nimbus >${STORM_HOME}/nimbus.log 2>&1 &
storm ui >${STORM_HOME}/ui.log 2>&1 &
# start a supervisor on each worker node
hosts="node1 node2 node3"
for host in $hosts
do
  echo "--------$host--------"
  ssh $host "source /etc/profile; ${STORM_HOME}/bin/storm supervisor >${STORM_HOME}/supervisor.log 2>&1 &"
  echo "OK!"
done
Script to stop Storm:
#!/bin/bash
source /etc/profile
echo "stop storm server..."
# find each daemon's PID from ps output; grep -v grep keeps the grep
# process itself from matching and being killed by mistake
kill -9 `ps -ef | grep daemon.nimbus | grep -v grep | awk '{print $2}' | head -1` >${STORM_HOME}/nimbus.log 2>&1
kill -9 `ps -ef | grep core | grep -v grep | awk '{print $2}' | head -1` >${STORM_HOME}/ui.log 2>&1
# stop the supervisor on each worker node
hosts="node1 node2 node3"
for host in $hosts
do
  echo "--------$host--------"
  ssh $host "source /etc/profile; /home/hadoop/shelltools/stop-supervisor.sh >${STORM_HOME}/supervisor.log 2>&1 &"
  echo "OK!"
done
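The stop script extracts each daemon's PID from ps output with awk. Note that a plain ps -ef | grep pattern can also match the grep process itself, so filtering with grep -v grep (or using pgrep) is safer. A local sketch of the PID extraction, run on a sample line mimicking ps -ef output (the line content is invented for illustration):

```shell
#!/bin/bash
# a sample line in the shape of `ps -ef` output for the Nimbus daemon
line="hadoop   12345      1  0 10:00 ?        00:00:30 java -cp ... org.apache.storm.daemon.nimbus"

# the second whitespace-separated field of ps -ef output is the PID
pid=$(echo "$line" | awk '{print $2}')
echo "$pid"
```

The same '{print $2}' field extraction is what the stop script feeds to kill -9.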