Big Data in Practice: Building a MapReduce Platform on Ubuntu Linux 20.04.1

1. Goal

1.1 Run in the graphical desktop environment
1.2 Finish the Eclipse and plug-in installation
1.3 Install the JDK

2. Preliminary preparation

2.1 Install the system

System installation is covered in a separate guide.

2.2 Create a user on each node

2.2.1 Create the group and user

(You can run these steps either as an ordinary user or as the root user. An ordinary user must prefix the commands with sudo; root does not.)
The group must exist before useradd -g can reference it, so create it first if necessary:
$ sudo groupadd -g 285 angel
$ sudo useradd -u 285 -g 285 -m -s /bin/bash angel

The user ID is 285, the group ID is 285, and the user name is angel.
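The useradd flags map directly onto fields of the account's /etc/passwd entry. As a quick illustration (using a hard-coded sample line rather than the live system, so it is safe to run anywhere), the third and fourth colon-separated fields are the UID and GID:

```shell
# Sample /etc/passwd entry of the form created by the useradd command above.
entry="angel:x:285:285::/home/angel:/bin/bash"
uid=$(echo "$entry" | cut -d: -f3)    # third field: user ID (-u 285)
gid=$(echo "$entry" | cut -d: -f4)    # fourth field: group ID (-g 285)
shell=$(echo "$entry" | cut -d: -f7)  # seventh field: login shell (-s /bin/bash)
echo "uid=$uid gid=$gid shell=$shell"
```

On the real system, `getent passwd angel` prints the actual entry.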

2.2.2 Add angel to the sudo group and set its password

sudo gpasswd -a angel sudo
This adds the angel user to the sudo group.
sudo passwd angel
Set the password (123 in this walkthrough).

2.2.3 Switch to the angel user

su - angel
Enter the password (123). At this point, the angel user has been created successfully.

2.3 Java installation

2.3.1 Create the /app directory on each node and change its ownership

2.3.1.1 Create the /app directory
As the angel user:
sudo mkdir /app

2.3.1.2 Change the ownership of /app
sudo chown -R angel:angel /app

2.3.2 Edit the JDK environment variables on all nodes

vi /home/angel/.profile
Add two lines at the end of the file.
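The screenshot showing the two lines is missing here. Assuming the JDK is unpacked to /app/jdk1.8.0_261 as in section 2.3.5, the two appended lines are typically:

```shell
# Hypothetical contents of the two lines appended to /home/angel/.profile;
# adjust JAVA_HOME if your JDK lives elsewhere.
export JAVA_HOME=/app/jdk1.8.0_261
export PATH=$JAVA_HOME/bin:$PATH
```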

2.3.3 Apply the JDK environment variables on all nodes

source /home/angel/.profile

2.3.4 Upload the JDK archive to the angel user's home directory

Use the WinSCP tool, logging in as the root user.

2.3.5 Unpack the JDK archive into the /app directory

cd /app
tar xzvf /home/angel/jdk-8u261-linux-x64.tar.gz -C /app
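The -C flag tells tar to change into the target directory before extracting, which is why the archive's contents end up under /app. A self-contained demonstration with a throwaway archive (the directory and file names here are made up for the example, and everything happens in a temp directory so it is safe to run anywhere):

```shell
# Build a tiny archive, then extract it into a separate directory with -C.
tmp=$(mktemp -d)
mkdir -p "$tmp/src/jdk-demo"
echo demo > "$tmp/src/jdk-demo/README"
tar czf "$tmp/demo.tar.gz" -C "$tmp/src" jdk-demo   # pack, relative to $tmp/src
mkdir "$tmp/app"
tar xzf "$tmp/demo.tar.gz" -C "$tmp/app"            # extract into the /app-like target
result=$(ls "$tmp/app")
echo "$result"                                      # jdk-demo
rm -rf "$tmp"
```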

2.3.6 Test

java -version
javac -version
At this point, Java has been installed successfully.
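To check the version from a script rather than by eye, the banner can be parsed with sed. The banner string below is a hard-coded sample of the expected output, so the snippet runs even on a machine without a JDK:

```shell
# On a real node you would capture the live banner instead:
#   banner=$(java -version 2>&1 | head -n1)
banner='java version "1.8.0_261"'
ver=$(echo "$banner" | sed -n 's/.*"\([0-9._]*\)".*/\1/p')  # text between the quotes
echo "$ver"   # 1.8.0_261
```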

3 Start the Hadoop cluster

3.1 Start HDFS

start-dfs.sh

3.2 Start YARN

start-yarn.sh

3.3 Start JobHistoryServer

mr-jobhistory-daemon.sh start historyserver

3.4 View the processes

3.4.1 master node

jps

3.4.2 slave1 and slave2 nodes

jps
At this point, the Hadoop cluster has been started successfully!
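A quick way to script the jps check is to confirm that every expected daemon name appears in the listing. The sample listing below is hard-coded (with made-up PIDs) so the snippet is runnable anywhere; on the real master node, replace it with the live output of jps:

```shell
# Daemons typically expected on the master after sections 3.1-3.3.
expected="NameNode SecondaryNameNode ResourceManager JobHistoryServer"
sample="2101 NameNode
2398 SecondaryNameNode
2655 ResourceManager
3012 JobHistoryServer
3333 Jps"
missing=""
for d in $expected; do
  # each jps line is "<pid> <name>", so match " <name>" at end of line
  echo "$sample" | grep -q " $d$" || missing="$missing $d"
done
if [ -z "$missing" ]; then msg="all daemons running"; else msg="missing:$missing"; fi
echo "$msg"
```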

4 Upload Eclipse

4.1 Use WinSCP to upload Eclipse to the /root directory

eclipse-java-oxygen-3-linux-gtk-x86_64.tar.gz

4.2 Unpack Eclipse into the /app directory

cd /app
tar xvzf /root/eclipse-java-oxygen-3-linux-gtk-x86_64.tar.gz

5 Upload the hadoop-eclipse-plugin

5.1 Use WinSCP to put hadoop-eclipse-plugin-2.6.0.jar in the /app/eclipse/plugins/ directory

5.2 Check whether the upload is successful

cd /app/eclipse/plugins/
ll hadoop*
The plug-in has been uploaded successfully!

6 Download hadoop-2.8.5

Copy hadoop-2.8.5 from the master node into the /app directory on the desktop node:
scp -r angel@master:/app/hadoop-2.8.5 /app
Because passwordless SSH is not configured between the master and the desktop node, you must enter the password in the terminal.
Check whether the copy succeeded (e.g. with ll /app).
At this point, hadoop-2.8.5 has been downloaded successfully.
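The password prompt goes away if passwordless SSH is set up from the desktop node to master. A sketch of the usual approach (the key is generated into a temporary directory here so the example is safe to run anywhere; on the real node you would use ~/.ssh and the angel@master address from this guide):

```shell
# Generate an RSA key pair with an empty passphrase.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/id_rsa"
created=$(ls -1 "$keydir")
echo "$created"   # lists id_rsa and id_rsa.pub
# On the real desktop node, install the public key on master with:
#   ssh-copy-id angel@master
rm -rf "$keydir"
```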

7 Modify file attributes

cd /home/angel/app
sudo chown -R angel:angel eclipse/ hadoop-2.8.5/ jdk1.8.0_261/
Check whether the modification succeeded:
ll
At this point, the file attributes have been modified successfully!

8 Start Eclipse

Prerequisites: Eclipse must be started on the desktop node; you must be logged in as the angel user; the system must have been restarted; and the Hadoop cluster must be running.
If you cannot log in as the angel user, see the solution linked in the original post.
Click Launch.
Open "Window" - "Preferences" in the Eclipse window; a "Hadoop Map/Reduce" option on the left side of the window means the plug-in was installed successfully!

Open "File" - "New" - "Project" in the Eclipse window; a "Map/Reduce Project" option means the installation succeeded. At this point, the MapReduce platform has been built successfully!


Origin blog.csdn.net/qq_45059457/article/details/109141372