Modify the IP address
cd /etc/sysconfig/network-scripts
vi ifcfg-<interface> # the first file in the directory, the one whose name contains letters and digits (e.g. ifcfg-ens33)
Modify the configuration
BOOTPROTO="static" # obtain the IP address statically
ONBOOT="yes" # apply this configuration at boot
IPADDR= # IP address
NETMASK=255.255.255.0 # subnet mask
GATEWAY= # gateway
DNS1=114.114.114.114 # DNS server
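As a concrete reference, a filled-in file for the master might look like the following; the interface name ens33 and the gateway 172.16.201.1 are assumptions, so substitute the values from your own network:

```
TYPE=Ethernet
DEVICE=ens33          # assumed interface name
BOOTPROTO="static"
ONBOOT="yes"
IPADDR=172.16.201.10  # the master's address from the hosts mapping below
NETMASK=255.255.255.0
GATEWAY=172.16.201.1  # assumed gateway
DNS1=114.114.114.114
```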
Network Service Restart
service network restart
Modify the mapping between hostnames and IP addresses
vi /etc/hosts
Each line maps the IP address of one of the three machines (all on the same local network) to the hostname it will be given.
Copy the same three lines to the other two virtual machines and restart (substitute the IP addresses actually used in your setup).
172.16.201.10 master
172.16.201.11 slave1
172.16.201.12 slave2
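Appending the mappings can be done with a here-document; in this sketch a temp file stands in for /etc/hosts so it is safe to run anywhere:

```shell
# Append the three IP-to-hostname mappings (temp copy instead of /etc/hosts).
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
172.16.201.10 master
172.16.201.11 slave1
172.16.201.12 slave2
EOF
cat "$HOSTS_FILE"
```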
On each machine, edit the hostname file and set it to that machine's own name (master, slave1, or slave2)
vi /etc/hostname
master
Restart the virtual machine
reboot
Passwordless SSH setup
On each of the three virtual machines, generate a key pair as shown below,
then collect all three public keys into one authorized_keys file and copy it to the other two machines, so that each machine holds the public keys of all three.
Generate a key pair
ssh-keygen
Append the public key to a node's authorized_keys
ssh-copy-id -i .ssh/id_rsa.pub root@master
# Test whether passwordless login succeeds
ssh localhost
Test the connection to the other virtual machines
ssh <hostname>
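The key distribution above can be sketched as a loop over the three hostnames from /etc/hosts; this is a dry run that only prints each command (remove the echo to actually execute them):

```shell
# Build the list of ssh-copy-id commands for all three machines (dry run:
# echo prints each command instead of executing it over the network).
CMDS=$(for host in master slave1 slave2; do
  echo "ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host"
done)
echo "$CMDS"
```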
Install jdk
Check whether a jdk is already installed on the machine
rpm -qa|grep jdk
Uninstall the existing jdk
yum remove <jdk-package-name>
Extract the downloaded jdk and hadoop archives
tar xf <jdk-archive> -C <install-path>
tar xf <hadoop-archive> -C <install-path>
Set the jdk and hadoop environment variables in /etc/profile
#jdk
export JAVA_HOME=/opt/jdk # jdk install path
#hadoop
export HADOOP_HOME=/opt/hadoop # hadoop install path
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Make the configuration file take effect
source /etc/profile
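A quick way to confirm the new PATH entries took effect (the paths are the install locations assumed above):

```shell
# Re-create the exports from /etc/profile and check that both hadoop
# directories (bin and sbin) landed on PATH; should print at least 2.
export JAVA_HOME=/opt/jdk
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
echo "$PATH" | tr ':' '\n' | grep -c hadoop
```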
Enter the hadoop configuration directory
cd /opt/hadoop/etc/hadoop
Modify the file hadoop-env.sh
vi hadoop-env.sh
Find the line that sets the JAVA_HOME environment variable and change it to the jdk install path
export JAVA_HOME=/opt/jdk
Modify yarn-env.sh file
vi yarn-env.sh
Find the line that sets the JAVA_HOME environment variable and change it to the jdk install path
export JAVA_HOME=/opt/jdk
Modify the core-site.xml file
vi core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop/tmp</value>
</property>
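Note that in each of these XML files the <property> blocks must sit inside the file's single <configuration> element; a complete core-site.xml therefore looks like:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>
  </property>
</configuration>
```

The same wrapper applies to hdfs-site.xml, mapred-site.xml, and yarn-site.xml below.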
Modify hdfs-site.xml file
vi hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:50090</value>
</property>
Modify mapred-site.xml file
Copy the mapred-site.xml.template file and rename the copy mapred-site.xml
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
Modify yarn-site.xml file
vi yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
Modify slaves file
vi slaves
# Add the two slave nodes
slave1 # hostname of the first slave node
slave2 # hostname of the second slave node
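The slaves file is simply one worker hostname per line, so writing it can be scripted; a temp file stands in for /opt/hadoop/etc/hadoop/slaves so the sketch runs anywhere:

```shell
# Write the two slave hostnames, one per line, to a temp stand-in for
# the slaves file.
SLAVES_FILE=$(mktemp)
printf '%s\n' slave1 slave2 > "$SLAVES_FILE"
cat "$SLAVES_FILE"
```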
Turn off the firewall
systemctl stop firewalld.service
Prevent the firewall from starting at boot
systemctl disable firewalld.service
Copy the entire hadoop directory to the other virtual machines
scp -r <hadoop-folder-path> root@<hostname>:<destination-path>
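Copying to both slaves can be looped; shown here as a dry run that prints the commands (drop the echo to actually copy), assuming /opt/hadoop is the install path used earlier:

```shell
# Print the scp command for each slave (dry run; remove echo to execute).
COPY_CMDS=$(for host in slave1 slave2; do
  echo "scp -r /opt/hadoop root@$host:/opt/"
done)
echo "$COPY_CMDS"
```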
Format the NameNode on the master virtual machine
hdfs namenode -format
Start hadoop
start-all.sh
View the process
jps
The master node should show four processes: NameNode, SecondaryNameNode, ResourceManager, and Jps
Each slave node should show three: DataNode, NodeManager, and Jps
If these processes are present, the fully distributed Hadoop configuration is complete!