Connecting Eclipse to a Hadoop cluster

1. Install the plug-in. Installation is very simple: copy the hadoop-eclipse-plugin-2.2.0 jar into Eclipse's eclipse\plugins directory, then start Eclipse to complete the installation. Two signs show that it worked:
1) A DFS Locations node appears in the Project Explorer on the left.
2) An extra Hadoop Map/Reduce option appears under Window -> Preferences; select it and, on the right, choose the root directory of the downloaded Hadoop distribution (on Windows, Hadoop is used only for the jar packages inside it).
If you can see both of the above points, the installation was successful.
2. Add the following entries to the Windows hosts file at C:\Windows\System32\drivers\etc\hosts:
192.168.80.101 hadoop1
192.168.80.102 hadoop2
192.168.80.103 hadoop3

3. After the plug-in is installed, start Hadoop; you can then create a Hadoop connection.
The first step: click Add and fill in the new location dialog.
Location name: anything will do; I used hadoop.
In the Map/Reduce(V2) Master box:
Host: the cluster machine where the JobTracker runs. Here we write hadoop1 (the hostname can be used because the host mapping was added earlier).
Port: the JobTracker's port. Here we write 8032.
These two parameters are the IP (or hostname) and port from yarn.resourcemanager.address in yarn-site.xml (the YARN successor of mapred.job.tracker from the old MRv1 configuration); see the snippet after this list.
In the DFS Master box:
Host: the cluster machine where the NameNode runs; here it is hadoop1.
Port: the NameNode's port; here it is 9000.
These two parameters are the IP (or hostname) and port from fs.defaultFS in core-site.xml; see the snippet after this list.
(Use M/R master host: if this checkbox is selected, the host defaults to the same value as in the Map/Reduce Master box; if it is not selected, you can enter the value yourself. Here the JobTracker and NameNode are on the same machine, so the values are the same; just leave it checked.)
User name: the user name used to connect to Hadoop. Because I installed Hadoop as the root user and did not create any other users, I use root.
Leave the remaining fields empty, then click the Finish button. At this point there is one more record in the Map/Reduce Locations view.
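For reference, the two boxes above simply echo the cluster's configuration. A sketch of the relevant entries (your actual files may contain more properties):

In yarn-site.xml:
<property>
  <name>yarn.resourcemanager.address</name>
  <value>hadoop1:8032</value>
</property>

In core-site.xml:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop1:9000</value>
</property>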
The next step: restart Eclipse, re-edit the connection record just created, and switch to the Advanced parameters tab:
dfs.replication: the default here is 3; change it if the cluster is configured with a different replication factor.
dfs.data.dir: change it to /nosql/hadoop1/data
hadoop.tmp.dir: change it to /tmp/hadoop1-root
These values should mirror the cluster's own settings (see the configuration sketch after this list).
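Assuming dfs.replication and dfs.data.dir live in hdfs-site.xml and hadoop.tmp.dir in core-site.xml on the cluster, the corresponding entries would look like this:

In hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/nosql/hadoop1/data</value>
</property>

In core-site.xml:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop1-root</value>
</property>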
Click Finish. An elephant icon now appears under DFS Locations, and under it a folder: the root directory of HDFS. What is shown here is the directory structure of the distributed file system.
Right-click hadoop -> user -> root, try to create a folder named test, and then right-click and refresh to view the folder just created.
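If the connection is configured correctly, the same operations can also be reproduced with the HDFS Java API. Below is a minimal sketch (the class name is my own; it assumes the fs.defaultFS value above and that the Hadoop client jars are on the classpath):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: connect to the NameNode configured above and create /user/root/test,
// mirroring the right-click "create folder" action in DFS Locations.
public class HdfsConnectionTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same value as fs.defaultFS in core-site.xml (the DFS Master box).
        conf.set("fs.defaultFS", "hdfs://hadoop1:9000");

        // Connect as the same user entered in the connection dialog (root here).
        FileSystem fs = FileSystem.get(new URI("hdfs://hadoop1:9000"), conf, "root");

        // Create the "test" folder under /user/root.
        fs.mkdirs(new Path("/user/root/test"));

        // List /user/root to confirm it exists (the "refresh" step in Eclipse).
        for (FileStatus status : fs.listStatus(new Path("/user/root"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}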
