Commands for common components in Hadoop clusters (updated continuously)

1. Hadoop HDFS:

Start HDFS: start-dfs.sh
Shut down HDFS: stop-dfs.sh
Format NameNode: hdfs namenode -format
View file system status: hdfs dfsadmin -report
Create directory: hdfs dfs -mkdir /path/to/directory
Upload local files to HDFS: hdfs dfs -put /path/to/local/file /path/to/hdfs/directory
Download HDFS files to local: hdfs dfs -get /path/to/hdfs/file /path/to/local/directory
View HDFS File content: hdfs dfs -cat /path/to/hdfs/file
Delete HDFS file: hdfs dfs -rm /path/to/hdfs/file
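
A minimal end-to-end example, assuming a running cluster; the paths and file names below are placeholders:
$ hdfs dfs -mkdir -p /user/demo
$ hdfs dfs -put words.txt /user/demo/
$ hdfs dfs -ls /user/demo
$ hdfs dfs -cat /user/demo/words.txt
$ hdfs dfs -rm /user/demo/words.txt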

2. Hadoop YARN:

Start YARN: start-yarn.sh
Shut down YARN: stop-yarn.sh
View YARN node status: yarn node -list
View YARN application status: yarn application -list
Submit YARN application: yarn jar /path/to/app.jar com.example.Application arg1 arg2
Kill YARN application: yarn application -kill application_id
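
For example, to find and kill a running application (the application ID below is a made-up placeholder):
$ yarn application -list -appStates RUNNING
$ yarn application -kill application_1680000000000_0001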

3. Hadoop MapReduce:

Submit MapReduce job: hadoop jar /path/to/job.jar com.example.Job input_path output_path
View MapReduce job status: mapred job -list
Kill MapReduce job: mapred job -kill job_id
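
As a concrete example, the wordcount job from the examples jar that ships with Hadoop (the jar path varies by version and distribution; /input must already exist in HDFS and /output must not):
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /input /output
$ mapred job -list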

4. Hive:

Start the Hive service: hive --service hiveserver2
Stop the Hive service: HiveServer2 has no stop option; find its process ID and kill it (kill pid)
Connect to the Hive service: beeline -u jdbc:hive2://localhost:10000
View the Hive table list: show tables;
Create a Hive table: create table table_name (column1 type1, column2 type2, ...);
Insert data into Hive table: insert into table table_name values (value1, value2, ...);
Query Hive table data: select * from table_name;
Delete Hive table: drop table table_name;
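
A short beeline session tying these together; the table and column names are made up:
$ beeline -u jdbc:hive2://localhost:10000
-- inside beeline:
create table users (id int, name string);
insert into table users values (1, 'alice'), (2, 'bob');
select * from users;
drop table users;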

5. Spark:

Start Spark cluster: start-all.sh
Shut down Spark cluster: stop-all.sh
Start Spark Shell: spark-shell
Submit Spark application: spark-submit --class com.example.Application /path/to/app.jar arg1 arg2
View Spark application status (standalone cluster mode): spark-submit --status submission_id
Kill Spark application (standalone cluster mode): spark-submit --kill submission_id
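
For example, submitting the SparkPi example that ships with Spark (the jar path varies by version; --master yarn assumes the cluster runs on YARN):
$ spark-submit --class org.apache.spark.examples.SparkPi --master yarn $SPARK_HOME/examples/jars/spark-examples_*.jar 100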

6. ZooKeeper:

Start ZooKeeper: zkServer.sh start
Shut down ZooKeeper: zkServer.sh stop
Connect to ZooKeeper client: zkCli.sh -server localhost:2181
Create ZooKeeper node: create /path/to/node data
Get ZooKeeper node data: get /path/to/node
Update ZooKeeper node data: set /path/to/node new_data
Delete ZooKeeper node: delete /path/to/node
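
A sample zkCli.sh session walking a node through its lifecycle; the /demo path is hypothetical:
$ zkCli.sh -server localhost:2181
# inside the client:
create /demo "hello"
get /demo
set /demo "world"
delete /demo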

7. Redis:

Start Redis: redis-server /path/to/redis.conf
Shut down Redis: redis-cli shutdown
Connect to the Redis client: redis-cli -h hostname -p port -a password
Set the Redis key value pair: set key value
Get Redis Key-value pair: get key
Delete Redis key-value pair: del key
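
A quick round trip with redis-cli, assuming a local server on the default port 6379; the key name is arbitrary:
$ redis-cli -h localhost -p 6379
# inside the client:
set greeting "hello"
get greeting
del greeting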

8. Flink:

Start Flink cluster: ./bin/start-cluster.sh
Shut down Flink cluster: ./bin/stop-cluster.sh
Submit Flink job: ./bin/flink run /path/to/job.jar
Cancel Flink job: ./bin/flink cancel job_id
Stop a streaming job gracefully (with a savepoint): ./bin/flink stop job_id
View job list: ./bin/flink list
View running jobs: ./bin/flink list -r
View scheduled jobs: ./bin/flink list -s
Submit a job to a specific JobManager: ./bin/flink run -m jobmanager_host:port /path/to/job.jar
View job logs: there is no flink log subcommand; check the files under the log/ directory of the Flink installation or use the Web UI
View Flink Web UI: the default port is 8081; open http://jobmanager_host:8081 in a browser
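
For instance, running the streaming WordCount example bundled with the Flink distribution (paths are relative to the Flink install directory; job_id is printed by run and list):
$ ./bin/flink run ./examples/streaming/WordCount.jar
$ ./bin/flink list -r
$ ./bin/flink cancel job_id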

9. Flume:

  1. Start the Flume Agent
    To start a Flume Agent, use the flume-ng command and specify the path to the configuration file. For example:
$ flume-ng agent --conf-file /path/to/flume.conf --name agent-name
# Here /path/to/flume.conf is the path to the Flume Agent's configuration file, and agent-name is the name of the Flume Agent.
  2. Stop the Flume Agent
    To stop a Flume Agent, find its process ID with ps and pass it to the kill command. For example:
$ ps -ef | grep flume
$ kill pid
# Here pid is the Flume Agent's process ID, taken from the ps output above.
  3. Check the status of the Flume Agent
    flume-ng has no status option; to check whether a Flume Agent is running, look for its process. For example:
$ ps -ef | grep flume
  4. View the Flume Agent's logs
    To view the Flume Agent's logs, use the tail command and specify the path to the Flume Agent's log file. For example:
$ tail -f /path/to/flume.log
# Here /path/to/flume.log is the path to the Flume Agent's log file.
  5. Test whether the Flume Agent configuration is correct
    flume-ng has no dedicated configuration-test flag; Flume validates the configuration at startup, so start the agent with console logging and watch for errors. For example:
$ flume-ng agent --conf-file /path/to/flume.conf --name agent-name -Dflume.root.logger=INFO,console
  6. View the help information for flume-ng
    Use the help command. For example:
$ flume-ng help
# Here help is the command that prints flume-ng usage information.
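
To make the start command concrete, here is the minimal single-node configuration from the Flume user guide (a netcat source feeding a logger sink through a memory channel); the file name example.conf and the agent name a1 are just examples:
# example.conf: one agent (a1) with a netcat source, memory channel, and logger sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.channels.c1.type = memory
a1.sinks.k1.type = logger
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start the agent with console logging, then send it a test line from another terminal:
$ flume-ng agent --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console
$ telnet localhost 44444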
