Commonly used big data startup commands

First, the big data cluster:

1, ZooKeeper cluster operations:

Start: bin/zkServer.sh start [run on the hadoop102, hadoop103, and hadoop104 nodes in order; see the helper sketch below]

Stop: bin/zkServer.sh stop

Check status: bin/zkServer.sh status
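Since each node must be started by hand, a small helper script is convenient; a minimal sketch, assuming passwordless SSH between the nodes and the same ZooKeeper install path everywhere (the /opt/module/zookeeper path is illustrative):

#!/bin/bash
# start ZooKeeper on all three nodes in order
for host in hadoop102 hadoop103 hadoop104; do
  echo "starting zookeeper on $host"
  ssh $host "/opt/module/zookeeper/bin/zkServer.sh start"
done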

  

2, Kafka cluster operations (ZooKeeper must be started first):

Start: bin/kafka-server-start.sh -daemon config/server.properties [run on the hadoop102, hadoop103, and hadoop104 nodes in order]

Stop: bin/kafka-server-stop.sh

Start a console producer: bin/kafka-console-producer.sh --broker-list hadoop102:9092 --topic recommender

List topics: bin/kafka-topics.sh --zookeeper localhost:2181 --list
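The recommender topic written to by the producer above has to exist first; a minimal sketch for creating it, assuming a Kafka version that still manages topics through ZooKeeper (the partition count and replication factor are illustrative):

bin/kafka-topics.sh --zookeeper hadoop102:2181 --create --topic recommender --partitions 1 --replication-factor 3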

 

3, Hadoop cluster operations:

Format the NameNode: hadoop namenode -format

Start HDFS: sbin/start-dfs.sh [run on the hadoop102 node]

Start YARN: sbin/start-yarn.sh [run on the hadoop103 node]

HDFS web UI: http://hadoop102:50070/

YARN web UI: http://hadoop103:8088/
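A quick way to verify HDFS is actually serving requests; a minimal sketch (the /test directory is just an illustrative path):

hadoop fs -mkdir -p /test   # create a directory on HDFS
hadoop fs -ls /             # list the HDFS root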

 

Start Hive (the Hadoop cluster must be started first): bin/hive
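To check that Hive can reach its metastore without opening an interactive shell, a minimal sketch:

bin/hive -e "show databases;"   # run one statement and exit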


4, Spark cluster operations:

Start: sbin/start-all.sh [run on the hadoop102 node, from the Spark installation directory]
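To verify the standalone cluster accepts jobs, a minimal sketch using the bundled SparkPi example (the master URL assumes the default standalone port 7077; the jar path depends on your Spark version and install location):

bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://hadoop102:7077 \
  examples/jars/spark-examples_*.jar 100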

 

Second, data operations:

1, Redis operations (with a password set):

Redis default installation location: /usr/local/bin

redis.conf configuration:

bind 192.168.1.102   # bind the host IP

requirepass 123456   # set the access password

Connect to a remote server: redis-cli -h <IP address> -p <port> -a <password>

Start: redis-server /myredis/redis.conf

Connect: redis-cli -p 6379 -a 123456

Shut down: redis-cli -p 6379 -a 123456 shutdown

 

Redis commands:

lpush userId:4867 231449:3.0   # push one rating (of the form itemId:score) onto user 4867's list

lrange userId:4867 0 -1   # read back the user's entire list
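If only the most recent ratings per user are needed, the list can be capped after each push; a minimal sketch (the value pushed and the cap of 20 are illustrative, not from the original):

lpush userId:4867 362411:4.5   # newest rating lands at the head of the list
ltrim userId:4867 0 19         # keep only the 20 most recent entries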

 

2, MongoDB operations:

Start: mongod -config /opt/module/mongodb-3.4.3/data/mongodb.conf

Shut down: mongod -shutdown -config /opt/module/mongodb-3.4.3/data/mongodb.conf

Open the shell: mongo

Check whether MongoDB is running:

netstat -nltp | grep 27017

ps -ef | grep mongodb

View collection contents: db.table.find().pretty()

Query by field: db.table.find({userId: 4867})
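Queries can be narrowed further in the same shell; a minimal sketch (the Rating collection name and timestamp field are illustrative, not from the original):

db.Rating.find({userId: 4867}).sort({timestamp: -1}).limit(5)   // latest 5 ratings for user 4867
db.Rating.find({userId: 4867}).count()                          // how many ratings the user has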

 

3, Sqoop operations:

Test whether Sqoop can connect to the database: bin/sqoop list-databases --connect jdbc:mysql://hadoop102:3306/ --username root --password 123456
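Once the connection test passes, a table can be pulled into HDFS; a minimal sketch (the recommender database, user_rating table, and target directory are hypothetical names for illustration):

bin/sqoop import \
  --connect jdbc:mysql://hadoop102:3306/recommender \
  --username root --password 123456 \
  --table user_rating \
  --target-dir /warehouse/user_rating \
  --num-mappers 1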

 

4, Flume agent startup:

bin/flume-ng agent -c conf -f job/file-flume-hdfs.conf -n a2 -Dflume.root.logger=INFO,console

bin/flume-ng agent -c conf -f ./conf/log-kafka.properties -n agent -Dflume.root.logger=INFO,console
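For reference, a minimal sketch of what a job/file-flume-hdfs.conf like the one used by agent a2 above might contain (the log file path and HDFS path are illustrative):

# agent a2: tail a local log file and write it to HDFS
a2.sources = r1
a2.channels = c1
a2.sinks = k1

a2.sources.r1.type = exec
a2.sources.r1.command = tail -F /opt/module/logs/app.log

a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000

a2.sinks.k1.type = hdfs
a2.sinks.k1.hdfs.path = hdfs://hadoop102:9000/flume/%Y%m%d
a2.sinks.k1.hdfs.useLocalTimeStamp = true

a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1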

 

Third, the network communication tool netcat:

Installation: sudo yum install -y nc

Server: nc -lk 44444

Client: nc hadoop102 44444
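A quick end-to-end check; a minimal sketch, with the listener started on hadoop102 first:

# on hadoop102 (server): listen and keep listening across connections
nc -lk 44444

# from another node (client): the line appears on the server's terminal
echo "hello from client" | nc hadoop102 44444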
