CentOS7+Hadoop2.7 configuration process (non-installation tutorial)



(Optional) Time synchronization

Because the nodes are cloned from the same virtual machine image, their clocks already agree, so this step can be skipped here.

yum install ntp
systemctl start ntpd
systemctl enable ntpd
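
If you do enable NTP, a quick way to confirm that the daemon is running and has picked a time source:

systemctl status ntpd
ntpq -p            # the peer marked with '*' is the currently selected time source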

Configure hostname

# takes effect immediately and persists across reboots
hostnamectl set-hostname master
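
The worker nodes need hostnames as well; here they are assumed to be node1 and node2, matching the hosts file configured later:

hostnamectl set-hostname node1    # run on the first worker
hostnamectl set-hostname node2    # run on the second worker
hostnamectl status                # verify the change on each machine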

Fixed static IP, and network mapping

By default the network card uses DHCP, i.e. dynamic IP allocation. For a cluster we want a fixed address, so change the card to a static IP.


The VM network is in NAT mode (VMnet8).

Edit the NIC configuration file `/etc/sysconfig/network-scripts/ifcfg-ens33`:


NETMASK: subnet mask (default 255.255.255.0)

GATEWAY: the gateway. In the VMware menu, go to Edit -> Virtual Network Editor -> select "VMnet8 NAT mode" -> click the "NAT Settings" button; the NAT settings window shows the gateway IP.

DNS1: the DNS server to use for internet access, e.g. 114.114.114.114

Run ifconfig to find the network card (the ens** entry) that needs to be configured.

ifconfig

Step 1: cd /etc/sysconfig/network-scripts

Step 2: vim ifcfg-ens33

        TYPE="Ethernet"
        PROXY_METHOD="none"
        BROWSER_ONLY="no"
        BOOTPROTO="static"          -------> default is dhcp; change to static
        DEFROUTE="yes"
        IPV4_FAILURE_FATAL="no"
        IPV6INIT="yes"
        IPV6_AUTOCONF="yes"
        IPV6_DEFROUTE="yes"
        IPV6_FAILURE_FATAL="no"
        IPV6_ADDR_GEN_MODE="stable-privacy"
        NAME="ens33"
        UUID="db2a7b20-7b5a-40ad-9879-f7543bbc5ffe"
        DEVICE="ens33"
        ONBOOT="yes"
        IPV6_PRIVACY="no"
        ZONE=public
        IPADDR=192.168.175.134      -------> static IP address
        NETMASK=255.255.255.0       -------> subnet mask
        GATEWAY=192.168.175.2       -------> gateway
        DNS1=114.114.114.114        -------> DNS server

Step 3: systemctl restart network   -------> restart the network service
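
After the restart you can verify the static address and connectivity, using the gateway and DNS values from the example above:

ip addr show ens33           # should list 192.168.175.134
ping -c 3 192.168.175.2      # the gateway should reply
ping -c 3 114.114.114.114    # the DNS server should be reachable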


The static IP can also be set directly from the graphical interface.

If ens33 does not come up, disable NetworkManager and use the network service instead:

systemctl stop NetworkManager                 # stop NetworkManager for the current session
systemctl disable NetworkManager              # disable NetworkManager permanently
systemctl start network.service               # start the network service

Turn off the firewall

systemctl stop firewalld.service     # stop the firewall for the current session
systemctl disable firewalld          # keep it disabled after reboot
firewall-cmd --state                 # should now report "not running"
firewall-cmd --reload                # reload rules (only relevant while the firewall is still running)

If a desktop environment is installed, you can also do this directly with

setup

Configure the hosts file so that hostnames map to the cluster IPs

vi /etc/hosts

192.168.79.129 master
192.168.79.130 node1
192.168.79.131 node2


Test connectivity with the ping command.
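
For example, from master:

ping -c 3 node1    # each hostname should resolve and reply
ping -c 3 node2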

SSH password-free login

This sets up mutual password-free (key-based) SSH login between the nodes.


On node1:

[root@node1 ~]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa   # generate node1's key pair

[root@node1 ~]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys  # append its own public key to authorized_keys, enabling password-free login to itself

[root@node1 ~]# scp -r ~/.ssh/id_dsa.pub root@node2:/tmp/    # copy node1's public key to /tmp on node2

On node2:

[root@node2 ~]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa   # generate node2's key pair

[root@node2 ~]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys  # append its own public key to authorized_keys, enabling password-free login to itself

[root@node2 ~]# scp -r ~/.ssh/id_dsa.pub root@node1:/tmp/    # copy node2's public key to /tmp on node1

Append the received keys:

[root@node1 ~]# cat /tmp/id_dsa.pub >> ~/.ssh/authorized_keys  # append node2's public key to node1's authorized_keys

[root@node2 ~]# cat /tmp/id_dsa.pub >> ~/.ssh/authorized_keys  # append node1's public key to node2's authorized_keys

Test:

[root@node1 ~]#  ssh node2  # logs in to node2 without asking for a password

[root@node2 ~]#  ssh node1  # logs in to node1 without asking for a password
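
As an alternative to the manual scp/cat steps, ssh-copy-id does the same thing in one command (a sketch using an RSA key; it prompts for the remote password once):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id root@node2      # appends the local public key to node2's authorized_keys
ssh node2                   # should now log in without a password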

Java must be installed; make a note of the installation path

# the JDK is usually installed under /usr/local/java

# environment variables usually go in a script under /etc/profile.d/
java.sh

export JAVA_HOME=/usr/local/java
export PATH=$PATH:$JAVA_HOME/bin
export JRE_HOME=/usr/local/java/jre
export CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib

source /etc/profile
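
Verify that the variables are picked up:

java -version      # should print the installed JDK version
echo $JAVA_HOME    # should print /usr/local/java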

Copy the JDK directory and the environment script to the other nodes with scp

scp -r /usr/local/java/ root@node1:/usr/local/
scp -r /usr/local/java/ root@node2:/usr/local/

scp  java.sh root@node1:/etc/profile.d/java.sh
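
node2 presumably needs the same environment script; remember to source /etc/profile (or open a new shell) on each node afterwards:

scp java.sh root@node2:/etc/profile.d/java.sh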

Configure Hadoop environment variables

vim /etc/profile.d/hadoop.sh        # configure the environment variables

export HADOOP_HOME=/home/hfut/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Similarly, copy hadoop.sh and the Hadoop installation to the other nodes with scp, as shown below.
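
A sketch of the copy, assuming the paths used above:

scp -r /home/hfut/hadoop root@node1:/home/hfut/
scp -r /home/hfut/hadoop root@node2:/home/hfut/
scp /etc/profile.d/hadoop.sh root@node1:/etc/profile.d/
scp /etc/profile.d/hadoop.sh root@node2:/etc/profile.d/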

Hadoop related configuration files

All of the relevant configuration files live under $HADOOP_HOME/etc/hadoop/

After editing them, distribute the configured files to the other nodes with scp

hadoop-env.sh

core-site.xml

hdfs-site.xml

yarn-site.xml

Most of these settings can be left at their defaults and only a handful of entries need to be changed; a real production cluster would be deployed with automation tools anyway. A minimal sketch of each file follows.
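
The original post does not show the file contents. As a hedged sketch for a single-master layout (master runs the NameNode and ResourceManager, node1/node2 run the DataNodes and NodeManagers; the port 9000, the replication factor, and the slaves file are assumptions, not taken from the post), run the following in $HADOOP_HOME/etc/hadoop/ on master and then distribute the files:

# hadoop-env.sh needs an explicit JAVA_HOME
echo 'export JAVA_HOME=/usr/local/java' >> hadoop-env.sh

# core-site.xml: default filesystem points at the NameNode on master
cat > core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
EOF

# hdfs-site.xml: two replicas, one per worker
cat > hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
EOF

# yarn-site.xml: ResourceManager on master, shuffle service for MapReduce
cat > yarn-site.xml <<'EOF'
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
EOF

# the slaves file lists the DataNode / NodeManager hosts
cat > slaves <<'EOF'
node1
node2
EOF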

Start the cluster

On the first start only, format the NameNode:

hdfs namenode -format

Start the cluster

start-all.sh
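
Assuming the single-master layout sketched above, jps shows which daemons came up on each node:

jps    # on master: NameNode, SecondaryNameNode, ResourceManager
jps    # on node1/node2: DataNode, NodeManager

By default in Hadoop 2.7 the HDFS web UI is at http://master:50070 and the YARN UI at http://master:8088.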

Shut down the cluster

stop-all.sh



Origin blog.csdn.net/qq_45175218/article/details/108876098