Problems encountered in Hadoop learning: changing the Linux hostname

1) Modify the value of "HOSTNAME" in the "/etc/sysconfig/network" file to the name we planned.
The settings in "/etc/sysconfig/network" are as follows:
NETWORKING    whether networking is enabled
GATEWAY       default gateway
IPGATEWAYDEV  default gateway interface name
HOSTNAME      host name
DOMAIN        domain name
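
For example, to rename the first machine to Master.Hadoop, a minimal sketch looks like the following (this assumes an older RHEL/CentOS-style system where this file is honored; the hostname command applies the name to the running session without a reboot, and on newer systemd distributions hostnamectl set-hostname would be used instead):

# /etc/sysconfig/network (example content after the change)
NETWORKING=yes
HOSTNAME=Master.Hadoop

# apply the new name to the running system without waiting for a reboot
hostname Master.Hadoop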

2) Configure the hosts file (required)
The "/etc/hosts" file records the [HostName and IP] correspondence of each host connected within the LAN and is used for local host name resolution. When the user makes a network connection, this file is searched first for the IP address corresponding to the given host name (or domain name).
To test whether two machines are connected, we generally use "ping <machine's IP>"; if instead we want to use "ping <machine's hostname>", the system must be able to resolve that host name to an IP address.

The solution is to modify the "/etc/hosts" file: writing the one-to-one correspondence between the IP address and the HostName of each host in the LAN into this file solves the problem.

For example, with one machine "Master.Hadoop: 192.168.1.2" and another machine "Slave1.Hadoop: 192.168.1.3", use the "ping" command to test the connection. The test results are as follows:
[Figure: ping connectivity test results]
As the figure above shows, pinging the IP address directly succeeds, but pinging the host name fails with the prompt "unknown host".
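
A minimal reproduction of that test from Master.Hadoop (the IP and host name are this article's example values):

ping -c 3 192.168.1.3        # succeeds: the IP is reachable
ping -c 3 Slave1.Hadoop      # fails with "unknown host": the name cannot be resolved yet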
This is because there is no "192.168.1.3 Slave1.Hadoop" entry in the "/etc/hosts" file of "Master.Hadoop", so this machine cannot resolve the host name of the machine "Slave1.Hadoop".
In a Hadoop cluster configuration, the "/etc/hosts" file is required to contain the IP and host name of every machine in the cluster.
This way, the Master and all the Slave machines can communicate not only by IP but also by host name. So at the end of the "/etc/hosts" file on all machines, add each other's IP address and host name:
192.168.1.2 Master.Hadoop
192.168.1.3 Slave1.Hadoop
192.168.1.4 Slave2.Hadoop
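
Once these lines are saved on Master.Hadoop, pinging by host name should succeed (a quick check, using the example names above):

ping -c 3 Slave1.Hadoop      # now resolves to 192.168.1.3 via /etc/hosts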

The /etc/hosts configuration must be the same on all machines in the Hadoop cluster.
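
One simple way to keep the file identical is to edit it once on the Master and copy it to every Slave with scp, for example (a sketch that assumes root SSH access between the machines; the host names are the example values above):

scp /etc/hosts root@Slave1.Hadoop:/etc/hosts
scp /etc/hosts root@Slave2.Hadoop:/etc/hosts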


Origin blog.csdn.net/sinat_35803474/article/details/103795487