Building a Hadoop and Eclipse Development Environment

First, configure the environment variables: right-click My Computer → Properties → Advanced System Settings → Environment Variables, then add the new variables under System variables:
(screenshots: the newly added environment variables)
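The original screenshots are not reproduced here. As an assumption based on a standard Hadoop-on-Windows setup (the install path below is hypothetical), the variables typically look like:

    HADOOP_HOME = D:\hadoop-2.6.0
    Path = %Path%;%HADOOP_HOME%\bin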
That takes care of this part. Next, open Eclipse: before starting it, locate the Eclipse installation directory, copy hadoop-eclipse-plugin-2.6.0.jar into its plugins folder, and then start Eclipse.


The core-site.xml configuration file:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
        <description>The name of the default file system.</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <!-- make sure the corresponding directory structure exists -->
        <value>/usr/setup/hadoop/temp</value>
        <description>A base for other temporary directories.</description>
    </property>
</configuration>

1. Download the plug-in

hadoop-eclipse-plugin-2.5.1.jar

The source code downloaded from GitHub has to be compiled; this article uses an already-compiled plug-in.

2. Configure the plug-in

Copy the plug-in into the ...\eclipse\plugins directory and restart Eclipse, then configure the Hadoop installation directory.

If the plug-in was installed successfully, opening Window—Preferences will show a Hadoop Map/Reduce entry on the left side of the window. Click it and set the Hadoop installation path on the right. (On Windows, simply extract hadoop-2.5.1.tar.gz to a directory of your choice.)

(screenshot: the Hadoop Map/Reduce preference page with the Hadoop installation path)

3. Configure Map/Reduce Locations

Open Window—Open Perspective—Other, select Map/Reduce, and click OK. The following appears in the console area:

(screenshot: the Map/Reduce Locations view)

Right-click in that view and choose New Hadoop location to configure the connection:

Location Name: any name you like.

Configure the Map/Reduce Master and the DFS Master. Host and Port can be set to match the core-site.xml settings.
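For example, given the core-site.xml above, the DFS Master would be Host: master (or its IP, 192.168.11.134, as used in the code below) and Port: 9000; the Map/Reduce Master port depends on the cluster's JobTracker settings, which are not shown in this article.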
(screenshot: the Hadoop location configuration dialog)

Click "Finish" button to close the window.

On the left, expand DFS Locations → master (the location name configured above). If you can see the user directory, the setup was successful.

(screenshot: the DFS Locations tree showing the user directory)

4. WordCount example

Choose File—New—Project, select Map/Reduce Project, and enter WordCount as the project name. In the WordCount project, create a new class named WordCount with the following code:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("hdfs://192.168.11.134:9000/in/test*.txt"));  // input path (path 1)
        FileOutputFormat.setOutputPath(job, new Path("hdfs://192.168.11.134:9000/output"));      // output path (path 2)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
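To make the job's behavior concrete, here is an illustration with assumed data (the file name and contents are hypothetical): if /in/test1.txt contains

hello world
hello hadoop

the job produces an output file such as /output/part-r-00000 containing

hadoop	1
hello	2
world	1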

Since paths 1 and 2 above are hard-coded, they do not have to be supplied in the run configuration. If, instead, paths 1 and 2 are written as:

FileInputFormat.addInputPath(job, new Path(otherArgs[0]));

FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

then the paths have to be configured before running: right-click the class → Run As → Run Configurations.
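For context, otherArgs in this variant typically comes from GenericOptionsParser, which is already imported in the class above; a minimal sketch of how main would then obtain the paths (the usage message is illustrative):

Configuration conf = new Configuration();
// Strip generic Hadoop options; the remaining arguments are the job's own.
String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
if (otherArgs.length != 2) {
    System.err.println("Usage: wordcount <in> <out>");
    System.exit(2);
}
// ... job setup as before ...
FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

In the Run Configurations dialog, the two paths then go into the Program arguments field, for example hdfs://192.168.11.134:9000/in/test*.txt hdfs://192.168.11.134:9000/output (the values from the hard-coded version above).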

(screenshot: the Run Configurations dialog)

The part marked in red is where the HDFS file paths are configured.

Click Run, or Run As → Run on Hadoop; the results will appear under DFS Locations. If the results of a new run do not show up, right-click DFS Locations and choose Disconnect to refresh.

Run result:

(screenshot: the WordCount output in DFS Locations)
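Besides running MapReduce jobs, the HDFS Java API can also be used directly from Eclipse. The test class below demonstrates basic HDFS operations: creating, deleting, and renaming paths, plus uploading and downloading files.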

package com.hpe.test;

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class TestHdfs {

    // Configuration object
    Configuration conf = null;
    // File system handle -- Hadoop's own FileSystem abstraction
    FileSystem fs = null;

    @Before
    public void conn() throws Exception {
        conf = new Configuration(true);  // true: load the configuration files
        fs = FileSystem.get(conf);
    }

    @After
    public void close() throws Exception {
        fs.close();
    }

    // Create, delete, rename, and existence checks

    // Create a directory
    @Test
    public void mkdir() throws Exception {
        Path f = new Path("/aaa");
        // Check whether it already exists
        if (fs.exists(f)) {
            // Delete it (recursively)
            fs.delete(f, true);
        }
        // Create it
        fs.mkdirs(f);
    }

    // Complete this one yourself
    public void exist() {

    }

    // Rename
    @Test
    public void rn() throws Exception {
        Path p1 = new Path("/user/root/passwd");
        Path p2 = new Path("/user/root/haha.txt");
        boolean rename = fs.rename(p1, p2);
        System.out.println(rename);
    }

    // Upload a file
    @Test
    public void uploadFile() throws Exception {
        // Destination on HDFS
        Path inputFile = new Path("/tmpDir/haha.txt");
        // Output stream writing the file's contents to HDFS
        FSDataOutputStream output = fs.create(inputFile);
        // Input stream reading the local file
        InputStream input = new BufferedInputStream(new FileInputStream(new File("d:\\124.txt")));
        IOUtils.copyBytes(input, output, conf, true);
    }

    // Download a file
    @Test
    public void downloadFile() throws Exception {
        // Source file on HDFS
        Path src = new Path("/tmpDir/haha.txt");
        // Input: read the file from the cluster
        FSDataInputStream input = fs.open(src);
        // Output: write to a local file
        FileOutputStream output = new FileOutputStream("F://aa.txt");
        IOUtils.copyBytes(input, output, conf, true);
    }
}
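The empty exist() method is left as an exercise in the original; one possible completion (the path here is hypothetical) would be:

// Sketch: check whether a given path exists on HDFS.
@Test
public void exist() throws Exception {
    Path p = new Path("/tmpDir/haha.txt");  // hypothetical path for illustration
    System.out.println(fs.exists(p) ? "exists" : "does not exist");
}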

Source: blog.csdn.net/sincere_love/article/details/91895252