View a process
ps -ef | grep hive
ZooKeeper start and stop
Start all nodes
zkstart-all.sh
Shut down node by node
zkServer.sh stop
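After starting or stopping, each node's role can be checked with zkServer.sh status. A minimal sketch, assuming the three nodes node001–node003 used throughout these notes and passwordless ssh between them:

```shell
# Check each ZooKeeper node's role (leader/follower)
# Assumes passwordless ssh and zkServer.sh on the PATH of each node
for host in node001 node002 node003; do
  echo "== $host =="
  ssh "$host" "zkServer.sh status"
done
```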
Time synchronization
ntpdate ntp4.aliyun.com
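ntpdate only syncs the clock once. To keep the nodes in sync, the same command can be scheduled with cron; a sketch, assuming ntpdate is installed at /usr/sbin/ntpdate (verify with `which ntpdate`):

```shell
# Append an hourly time-sync job to the current user's crontab
# (the /usr/sbin/ntpdate path is an assumption for this cluster)
(crontab -l 2>/dev/null; echo "0 * * * * /usr/sbin/ntpdate ntp4.aliyun.com") | crontab -
```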
Start HDFS
start up
start-all.sh
shut down
stop-all.sh
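After start-all.sh, the running daemons can be verified with jps on each node:

```shell
# On the master node, jps should list NameNode and ResourceManager;
# on worker nodes, DataNode and NodeManager
jps
```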
Start Spark
start up
cd /export/servers/spark-2.2.0-bin-2.6.0-cdh5.14.0/bin
spark-shell
shut down
Type :quit or press Ctrl+D (Ctrl+Z only suspends the shell; it does not exit it)
Start Hbase
cd /export/servers/hbase-1.2.0-cdh5.14.0/bin
start up
start-hbase.sh
Enter Hbase shell window
[root@node001 conf]# hbase shell
Stop HBase
stop-all.sh
Or stop it from the HBase bin directory
[root@node001 ~]# cd /export/servers/hbase-1.2.0-cdh5.14.0/bin
[root@node001 bin]# stop-hbase.sh
Safe Mode Off
hdfs dfsadmin -safemode leave
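Besides leave, the safemode subcommand also supports get, enter, and wait, e.g. to check the current state first:

```shell
# Query the current safe-mode state (prints "Safe mode is ON" or "Safe mode is OFF")
hdfs dfsadmin -safemode get
# Block until HDFS leaves safe mode on its own
hdfs dfsadmin -safemode wait
```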
Hive startup
Start Hive MetaStore service in the background
nohup /export/servers/hive-1.1.0-cdh5.14.0/bin/hive --service metastore 2>&1 &
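Whether the MetaStore came up can be checked via its default port (9083) or the nohup log:

```shell
# The Hive MetaStore listens on port 9083 by default
netstat -nltp | grep 9083
# nohup output lands in nohup.out in the directory where the service was started
tail nohup.out
```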
Query Hive from the Spark shell
scala> spark.sql("show databases").show()
Kafka cluster operation
1. Create topic
Create a topic named test with three partitions and two replicas. On node001, execute the following commands to create the topic
cd /export/servers/kafka_2.11-1.0.0
bin/kafka-topics.sh --create --zookeeper node001:2181 --replication-factor 2 --partitions 3 --topic test
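The partition and replica layout of the new topic can be confirmed with --describe:

```shell
cd /export/servers/kafka_2.11-1.0.0
# Show partitions, leaders and replica assignments for the test topic
bin/kafka-topics.sh --describe --zookeeper node001:2181 --topic test
```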
2. View topic command
View the topics that exist in Kafka. On node001, use the following commands to list them
cd /export/servers/kafka_2.11-1.0.0
bin/kafka-topics.sh --list --zookeeper node001:2181,node002:2181,node003:2181
3. Producer production data
Simulate a producer producing data. On node001, execute the following commands
cd /export/servers/kafka_2.11-1.0.0
bin/kafka-console-producer.sh --broker-list node001:9092,node002:9092,node003:9092 --topic test
4. Consumer consumption data
On node002, execute the following commands to simulate a consumer consuming data
cd /export/servers/kafka_2.11-1.0.0
bin/kafka-console-consumer.sh --from-beginning --topic test --zookeeper node001:2181,node002:2181,node003:2181
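For a quick end-to-end check, the producer can read messages from stdin and the consumer can stop after a fixed count; a sketch assuming the test topic created above:

```shell
cd /export/servers/kafka_2.11-1.0.0
# Pipe three test messages into the topic
printf 'msg1\nmsg2\nmsg3\n' | bin/kafka-console-producer.sh \
  --broker-list node001:9092,node002:9092,node003:9092 --topic test
# Read three messages from the beginning, then exit
bin/kafka-console-consumer.sh --from-beginning --max-messages 3 --topic test \
  --zookeeper node001:2181,node002:2181,node003:2181
```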