Hadoop Basics 1: ERROR Cannot set priority of datanode process

Symptom:

While deploying hadoop-3.1.2, starting HDFS failed with the following error:

Starting datanodes
zglinux: ERROR: Cannot set priority of datanode process 2905

Solution:

This problem troubled me for a long time: searching Baidu turned up nothing, and although I had deployed Hadoop successfully in my company's environment, the same deployment failed when I tried it at home, which was very frustrating. Once I finally solved it, I resolved to write this post to spare others the same trouble. Possible causes:

  • core-site.xml configuration error

      core-site.xml configures HDFS (the Hadoop distributed file system), and the DataNode is an HDFS component, so a mistake in core-site.xml can cause this error. In my case it was not the cause.
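For reference, a minimal core-site.xml for a single-node setup might look like the sketch below; the hostname zglinux and port 9000 are assumptions, so substitute your own NameNode address:

```xml
<configuration>
  <!-- fs.defaultFS tells every HDFS daemon and client where the NameNode
       runs. A typo here (wrong hostname, scheme, or port) can keep the
       DataNode from registering or starting. -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://zglinux:9000</value>
  </property>
</configuration>
```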

  • hadoop-env.sh environment variable configuration problem

     The error can be triggered by setting HDFS_DATANODE_SECURE_USER=root in hadoop-env.sh. This was my cause. I had added the line only because an "installation manual" I found online included it, without understanding what it actually does, a classic case of knowing the what but not the why...

# -------- this setting caused the "Cannot set priority" error --------
export HDFS_DATANODE_SECURE_USER=root
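In my case the fix, a sketch assuming the daemons are run as root and no jsvc-based secure DataNode is wanted, was to drop the secure-mode variable and declare the plain run-as users that Hadoop 3.x expects instead:

```shell
# In $HADOOP_HOME/etc/hadoop/hadoop-env.sh:
# remove or comment out the secure-mode setting...
# export HDFS_DATANODE_SECURE_USER=root

# ...and declare which user runs each daemon instead (Hadoop 3.x
# refuses to start HDFS daemons as root without these):
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
```

As far as I can tell, HDFS_DATANODE_SECURE_USER is only meant for secure (privileged-port) DataNodes launched through jsvc; setting it without that machinery in place is what makes the startup script fail.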

Log output after the error:

         With the environment variable above set, the failed startup leaves a privileged-root-datanode-XXX.out file in Hadoop's log directory, and that file records the actual reason the DataNode died, as follows:
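The screenshot of that file is not reproduced here, but you can inspect it directly. A sketch, assuming a conventional install path (adjust HADOOP_HOME for your machine):

```shell
# Hypothetical install location -- adjust for your environment.
HADOOP_HOME="${HADOOP_HOME:-/opt/hadoop-3.1.2}"

# Secure-mode startup writes privileged-root-datanode-<host>.out into the
# log directory; its last lines usually name the real cause of the failure.
for f in "$HADOOP_HOME"/logs/privileged-root-datanode-*.out; do
  [ -f "$f" ] && tail -n 30 "$f"
done
```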

 

   That log made me question the configuration above, and from there the solution followed.

Working result:

         Finally, here is the output from a successful start, which I hope is helpful. I won't cover the Hadoop installation itself; there are plenty of guides online. Thanks for reading.

 


Origin blog.csdn.net/zhaogang1993/article/details/92727952