1. Install the JDK on Linux
(1) Download the JDK to the /opt/install directory, create the directory soft under /opt, and extract the archive into it:
    tar xvf ./jdk-8u321-linux-x64.tar.gz -C /opt/soft/
(2) Rename the extracted directory to jdk180 so that it matches the JAVA_HOME configured below.
(3) Configure environment variables: vim /etc/profile
    #JAVA_HOME
    export JAVA_HOME=/opt/soft/jdk180
    export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    export PATH=$PATH:$JAVA_HOME/bin
(4) Reload the profile and test whether the installation succeeded:
    source /etc/profile
    java -version
2. Set up the Hadoop runtime environment
2.1 Install the JDK: see above
2.2 Download and install Hadoop
(1) Unzip the archive into the soft directory and rename the result to hadoop313.
(2) Change the owner of the files to root.
(3) Configure environment variables: vim /etc/profile; after the configuration is complete, run source /etc/profile.
(4) Create the data directory data.
(5) Switch to the Hadoop configuration directory and list its files to prepare for configuration:
    cd /opt/soft/hadoop313/etc/hadoop
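The text says to configure environment variables for Hadoop but does not show them. A sketch of the usual /etc/profile additions follows; the HADOOP_HOME path matches the hadoop313 directory above, and the *_USER variables are the standard ones Hadoop 3's start scripts require when the daemons run as root (as this guide does):

```shell
# Additions to /etc/profile for Hadoop (a sketch; only the hadoop313 path
# comes from the text, the rest are conventional Hadoop 3 settings).
export HADOOP_HOME=/opt/soft/hadoop313
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
# Required when starting the daemons as root via start-all.sh:
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```

After editing, run source /etc/profile and verify with hadoop version.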
2.3 Configure stand-alone Hadoop
(1) Configure core-site.xml
(2) Configure hdfs-site.xml
  1) Edit hadoop-env.sh
  2) Then configure hdfs-site.xml
(3) Configure yarn-site.xml
(4) Configure workers: change its content to kb129 (the host name)
(5) Configure mapred-site.xml
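The steps above name the files but not their contents. A minimal single-node sketch follows; the hostname kb129 and the data directory come from the text, while the port and the remaining property values are common defaults for a Hadoop 3 single-node setup, not confirmed by the source:

```xml
<!-- core-site.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://kb129:9000</value> <!-- hostname from the text; port assumed -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/soft/hadoop313/data</value> <!-- the data directory created above -->
  </property>
</configuration>

<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value> <!-- single node, so one replica -->
  </property>
</configuration>

<!-- yarn-site.xml -->
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

<!-- mapred-site.xml -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

In hadoop-env.sh, the usual edit is to set the JDK path explicitly, e.g. export JAVA_HOME=/opt/soft/jdk180, matching the JDK installed above.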
2.4 Start and test Hadoop
(1) Initialize the cluster: run hadoop namenode -format in the bin directory.
(2) Set up password-free login. Back in the home directory, generate a key pair for kb129:
    ssh-keygen -t rsa -P ""
Append the local host's public key (~/.ssh/id_rsa.pub) to the root user's .ssh/authorized_keys file, so that SSH connections to this host can authenticate with the public key:
    cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
Add the local host's public key to the authorized-keys list on the remote host kb128, so that SSH public-key authentication can be used to connect to it:
    ssh-copy-id -i ~/.ssh/id_rsa.pub -p22 root@kb128
(3) Start/stop the cluster and check its processes:
    [root@kb129 hadoop]# start-all.sh
    [root@kb129 hadoop]# stop-all.sh
    [root@kb129 hadoop]# jps
    15089 NodeManager
    16241 Jps
    14616 DataNode
    13801 ResourceManager
    14476 NameNode
    16110 SecondaryNameNode
(4) Web test: open http://192.168.142.129:9870/ in a browser (the NameNode web UI).