[Hadoop] Development Plug-in Installation

Foreword

Eclipse is an open-source, Java-based extensible development platform. On its own, it is only a framework and a set of services for building development environments from plug-in components. Fortunately, Eclipse ships with a standard set of plug-ins, including the Java Development Tools (JDT).

The Eclipse plug-in mechanism is a lightweight software component architecture. On top of the client platform, all of Eclipse's additional features are delivered as plug-ins, such as support for languages other than Java. Separate plug-ins already exist to support C/C++ (CDT), Perl, Ruby, Python, telnet, and database development. The plug-in architecture can support any extension added to the existing environment, such as configuration management, and is by no means confined to supporting programming languages.

Eclipse's design philosophy is: everything is a plug-in. The Eclipse core is very small, and all other functionality is attached to that core in the form of plug-ins.

The basic Eclipse kernel includes the graphics APIs (SWT/JFace), the Java development environment plug-in (JDT), the Plug-in Development Environment (PDE), and so on.

Hadoop is a powerful parallel software development framework that allows tasks to be processed in parallel on a distributed cluster, improving efficiency. However, it also has some drawbacks: coding and debugging Hadoop programs is difficult, which leaves developers with a high barrier to entry and a large development effort.

To reduce this difficulty, the Hadoop developers created a Hadoop plug-in for Eclipse. It embeds the Hadoop development environment directly into Eclipse, providing a graphical development environment and lowering the difficulty of programming.

The Hadoop Eclipse plug-in is a Hadoop development environment plug-in; Hadoop-related information needs to be configured before the plug-in is installed. When a user creates a Hadoop program, the plug-in automatically imports the Hadoop programming interface jar files, so the user can code, debug, and run Hadoop programs in Eclipse's graphical interface, and can also view the program's real-time state, error messages, and results through the plug-in. In addition, users can manage and view HDFS through the plug-in. All in all, the Hadoop Eclipse plug-in is easy to install and convenient to use. It is powerful, especially at reducing the difficulty of Hadoop programming, and is a good helper for developers getting started with Hadoop!

There are generally four ways to install an Eclipse plug-in: first, direct copying; second, using a link file; third, using Eclipse's built-in graphical installer; fourth, installing via the dropins directory. In this experiment, the Hadoop development plug-in is installed by copying it directly into Eclipse's plugins directory, as the steps below show.
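For comparison, the dropins method (the fourth option) is also just a single copy; a minimal sketch, assuming Eclipse is installed under /apps/eclipse as in this experiment:

cp /data/hadoop3/hadoop-eclipse-plugin-2.6.0.jar /apps/eclipse/dropins/
# Eclipse scans the dropins/ directory at startup and installs any plug-ins it finds there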

With the Hadoop and Eclipse environments already installed, the task is to install the Hadoop plug-in into the Eclipse tool and configure it.

1. The Eclipse development tool and Hadoop are installed by default, under the /apps/ directory.

2. Create the local directory /data/hadoop3 in Linux to store the required files.

mkdir -p /data/hadoop3
Switch to the /data/hadoop3 directory and use the wget command to download the required plug-in hadoop-eclipse-plugin-2.6.0.jar.

cd /data/hadoop3
wget http://59.74.172.143:60000/allfiles/hadoop3/hadoop-eclipse-plugin-2.6.0.jar
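It's worth confirming the download completed before continuing; a quick check:

ls -lh /data/hadoop3/hadoop-eclipse-plugin-2.6.0.jar
# The jar should be listed; if it is missing or zero bytes, re-run the wget command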

3. Copy the plug-in hadoop-eclipse-plugin-2.6.0.jar from the /data/hadoop3 directory into Eclipse's plug-in directory /apps/eclipse/plugins.

cp /data/hadoop3/hadoop-eclipse-plugin-2.6.0.jar /apps/eclipse/plugins/
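To confirm the copy succeeded and make sure Eclipse picks the plug-in up, a small sketch (the launcher path /apps/eclipse/eclipse is an assumption based on the install directory above):

ls /apps/eclipse/plugins/ | grep hadoop-eclipse
# If Eclipse was already running, restart it; the -clean flag makes
# Eclipse rebuild its plug-in registry on startup
/apps/eclipse/eclipse -clean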

4. Switch to the Ubuntu graphical interface and double-click the Eclipse icon to start Eclipse.
5. In the Eclipse window, click Window => Open Perspective => Other.

A selection window pops up.

Select Map/Reduce and click OK. You will see three changes in the window: the Project Explorer on the left, the perspective switch in the upper right corner, and the layout of the panel windows.

If the Map/Reduce Locations panel does not appear in the window, you need to open it manually: click Window => Show View => Other.

In the window that pops up, select the Map/Reduce Locations option and click OK.

This brings up the Map/Reduce Locations view.

6. Add the Hadoop configuration and connect to the Hadoop cluster.

In the Map/Reduce Locations panel, add the Hadoop configuration; a configuration dialog opens.

Location name: the name you give to this configuration.

DFS Master: the host name and port number of the HDFS to connect to.

Click Finish to save the configuration.
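The DFS Master host and port should match the HDFS address Hadoop itself is configured with. One way to look it up, assuming Hadoop's configuration lives under /apps/hadoop/etc/hadoop as in this environment:

# fs.defaultFS in core-site.xml holds the HDFS URI, e.g. hdfs://localhost:9000
grep -A 1 "fs.defaultFS" /apps/hadoop/etc/hadoop/core-site.xml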

7. You also need to make sure that Hadoop's HDFS-related processes have started. On the terminal command line, enter jps to check the process status.

jps
If the HDFS-related processes, such as NameNode, DataNode, and SecondaryNameNode, are not present, you need to switch to the sbin directory under HADOOP_HOME and start Hadoop.

cd /apps/hadoop/sbin
./start-all.sh
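After start-all.sh finishes, running jps again should show the HDFS daemons; a sketch of the expected output (process IDs will differ):

jps
# 2401 NameNode
# 2532 DataNode
# 2725 SecondaryNameNode
# ...plus YARN processes such as ResourceManager and NodeManager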

8. Expand the Project Explorer view on the left, and you can see the HDFS directory structure.

9. Notice that nothing is stored on HDFS yet. Let's create a directory to check that the plug-in works.

Right-click the myhadoop folder and, in the pop-up menu, click Create new directory.

Enter the name of the directory, then click OK; the directory is created successfully.
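The same check can be run from the terminal, assuming the hadoop command is on your PATH:

hadoop fs -ls /
# The newly created directory should appear in the listing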

Right-click the folder and click Refresh to refresh the visible HDFS directory.
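Refresh is also how changes made outside Eclipse become visible. As a further check, you can put a file into HDFS from the terminal and then refresh in Eclipse to see it appear (the file and directory names here are hypothetical):

echo "hello hadoop" > /tmp/test.txt
hadoop fs -put /tmp/test.txt /mydir/
# After clicking Refresh in Eclipse, test.txt shows up under /mydir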
With that, the Hadoop development plug-in is installed!

Origin: blog.csdn.net/weixin_44039347/article/details/91464226