Hadoop MapReduce jobs support chained processing

Hadoop MapReduce jobs support chained processing. Think of a milk production line: each station performs one specific task, such as supplying the carton, filling it with milk, sealing it, and printing the production date, and this division of labor raises overall throughput. Hadoop MapReduce works the same way: it supports a chained processing style in which the Mappers behave like a Linux pipe, with the output of one Mapper redirected straight into the input of the next to form a pipeline. This is very similar to the Filter mechanism in Lucene and Solr; Hadoop grew out of the Lucene project, so it naturally borrowed some of Lucene's processing ideas.

A typical example is filtering stop words or sensitive words out of text. Hadoop's chaining supports a structure that can be written, regex-style, as Map+ Reduce Map*: the job has exactly one Reducer, but any number of Mappers may run before it for preprocessing and after it for post-processing.

Here is my test example for today; let's first look at the data and the requirements.

The input data:

手机 5000
电脑 2000
衣服 300
鞋子 1200
裙子 434
手套 12
图书 12510
小商品 5
小商品 3
订餐 2


The requirements:

/**
 * Requirements:
 * The first Mapper filters out records whose value is greater than 10000
 * The second Mapper filters out records whose value is between 100 and 10000
 * The Reducer sums the values for each product and emits the totals
 * The Mapper after the Reducer filters out product names that are 3 or more characters long
 */
The expected result after processing:
手套	12
订餐	2

(手套 and 订餐 are the only products whose values stay below 100 and whose names are shorter than 3 characters; 小商品 would sum to 8 but is dropped by the Mapper after the Reduce because its name is 3 characters long.)


My Hadoop version is 1.2. This release already supports the new MapReduce API, but its chaining classes ChainMapper and ChainReducer still only work with the old API; new-API versions are available only in Hadoop 2.x, and their usage differs very little. The example given here uses the old API, which is worth keeping in mind; a sketch of the equivalent new-API wiring appears right after the listing.
The full code is as follows:

package com.qin.test.hadoop.chain;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.lib.ChainMapper;
import org.apache.hadoop.mapred.lib.ChainReducer;
 

 

 

/**
 * 
 * Demonstrates the use of Hadoop's
 * ChainMapper and ChainReducer
 * 
 * @author qindongliang
 * @date 2014-05-07
 * 
 * Big-data discussion QQ group: 376932160
 * 
 * ***/
public class HaoopChain {
	
/**
 * Requirements:
 * The first Mapper filters out records whose value is greater than 10000
 * The second Mapper filters out records whose value is between 100 and 10000
 * The Reducer sums the values for each product and emits the totals
 * The Mapper after the Reducer filters out product names that are 3 or more characters long
 */

	/**
	 * 
	 * Filters out records whose value is greater than 10000
	 * 
	 * */
	private static class AMapper01 extends MapReduceBase implements  Mapper<LongWritable, Text, Text, Text>{
		
		
	 @Override
	public void map(LongWritable key, Text value, OutputCollector<Text, Text> output, Reporter reporter)
			throws IOException {
			String text=value.toString();
			String texts[]=text.split(" ");
			
		System.out.println("AMapper01里面的数据: "+text);
	    if(texts[1]!=null&&texts[1].length()>0){
		int count=Integer.parseInt(texts[1]);	
		if(count>10000){
			System.out.println("AMapper01过滤掉大于10000数据:  "+value.toString());
			return;
		}else{
			output.collect(new Text(texts[0]), new Text(texts[1]));
			
		}
			
	    }
	}
	}
	

	/**
	 * 
	 * Filters out records whose value is between 100 and 10000
	 * 
	 * */
	private static class AMapper02 extends MapReduceBase implements  Mapper<Text, Text, Text, Text>{
		
	 @Override
	public void map(Text key, Text value,
			OutputCollector<Text, Text> output, Reporter reporter)
			throws IOException {
		 
		 int count=Integer.parseInt(value.toString());	
			if(count>=100&&count<=10000){
				System.out.println("AMapper02过滤掉的小于10000大于100的数据: "+key+"    "+value);
				return;
			} else{
				
				output.collect(key, value);
			}
		
	}
	} 
	
	
	/**
	 * The Reducer simply sums the values of
	 * records that share the same product name
	 * 
	 * **/
	private static class AReducer03 extends MapReduceBase implements Reducer<Text, Text, Text, Text>{
	 
		@Override
		public void reduce(Text key, Iterator<Text> values,
				OutputCollector<Text, Text> output, Reporter reporter)
				throws IOException {
			int sum=0;
			 System.out.println("进到Reduce里了");
			
			while(values.hasNext()){
				
				Text t=values.next();
				sum+=Integer.parseInt(t.toString());
				
			}
			
			//The old API exposes the grouped values as an Iterator, so a foreach loop cannot be used
//			for(Text t:values){
//				sum+=Integer.parseInt(t.toString());
//			}
			
			output.collect(key, new Text(sum+""));
			
		}
		
	}
	
	
	/***
	 * 
	 * Filter Mapper that runs after the Reduce:
	 * drops product names that are 3 or more characters long
	 * 
	 * **/
	
	private static class AMapper04 extends MapReduceBase implements Mapper<Text, Text, Text, Text>{
	 
		@Override
		public void map(Text key, Text value,
				OutputCollector<Text, Text> output, Reporter reporter)
				throws IOException {
			 
			
			int len=key.toString().trim().length();
			
			if(len>=3){
				System.out.println("Reduce后的Mapper过滤掉长度大于3的商品名: "+ key.toString()+"   "+value.toString());
				return ;
			}else{
				output.collect(key, value);
			}
			
		}
		
		
	}
	
	

	 /***
	  * Driver class
	  * **/
	public static void main(String[] args) throws Exception{
		 //Job job=new Job(conf,"myjoin");
		 JobConf conf=new JobConf(HaoopChain.class); 
		   conf.set("mapred.job.tracker","192.168.75.130:9001");
		   conf.setJobName("t7");
		    conf.setJar("tt.jar");
		  conf.setJarByClass(HaoopChain.class);
		   
		//  Job job=new Job(conf, "2222222");
		// job.setJarByClass(HaoopChain.class);
		 System.out.println("模式:  "+conf.get("mapred.job.tracker"));
		 
		// job.setMapOutputKeyClass(Text.class);
		// job.setMapOutputValueClass(Text.class);
		 
		 
		  //Filtering in the first Mapper
		 JobConf mapA01=new JobConf(false);
		 ChainMapper.addMapper(conf, AMapper01.class, LongWritable.class, Text.class, Text.class, Text.class, false, mapA01);
		 
		 //Filtering in the second Mapper
		 JobConf mapA02=new JobConf(false);
		 ChainMapper.addMapper(conf, AMapper02.class, Text.class, Text.class, Text.class, Text.class, false, mapA02);
		 
		 
		 //Set the single Reducer
		 JobConf recduceFinallyConf=new JobConf(false);
		 ChainReducer.setReducer(conf, AReducer03.class, Text.class, Text.class, Text.class, Text.class, false, recduceFinallyConf);
		
		 
		//Filtering in the Mapper that runs after the Reduce
		 JobConf  reduceA01=new  JobConf(false);
		 ChainReducer.addMapper(conf, AMapper04.class, Text.class, Text.class, Text.class, Text.class, true, reduceA01);
		
		
		 conf.setOutputKeyClass(Text.class);
		 conf.setOutputValueClass(Text.class);
 
		 conf.setInputFormat(org.apache.hadoop.mapred.TextInputFormat.class);
		 conf.setOutputFormat(org.apache.hadoop.mapred.TextOutputFormat.class);
	 
		 
		 FileSystem fs=FileSystem.get(conf);
//		 
		 Path op=new Path("hdfs://192.168.75.130:9000/root/outputchain");		 
		 if(fs.exists(op)){
			 fs.delete(op, true);
			 System.out.println("存在此输出路径,已删除!!!");
		 }
//		 
//		 
		  
		 org.apache.hadoop.mapred.FileInputFormat.setInputPaths(conf, new Path("hdfs://192.168.75.130:9000/root/inputchain"));
		 org.apache.hadoop.mapred.FileOutputFormat.setOutputPath(conf, op);
//	   
	  //System.exit(conf.waitForCompletion(true)?0:1);
		JobClient.runJob(conf);
		
		
	}
	
	
	
	

}
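
For comparison, here is a minimal sketch of how the same chain could be wired up with the new org.apache.hadoop.mapreduce API that Hadoop 2.x provides. The NewApiMapper01/NewApiMapper02/NewApiMapper04 and NewApiReducer03 names below are hypothetical placeholders for new-API ports of the four classes above, and the input/output paths come from the command line; treat it as a sketch of the wiring, not a tested drop-in replacement.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;
import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class NewApiChainSketch {

	public static void main(String[] args) throws Exception {
		Job job = Job.getInstance(new Configuration(), "chain-demo");
		job.setJarByClass(NewApiChainSketch.class);

		// MAP+ : any number of Mappers may run before the single Reducer.
		// NewApiMapper01/NewApiMapper02 stand in for new-API ports of AMapper01/AMapper02.
		ChainMapper.addMapper(job, NewApiMapper01.class,
				LongWritable.class, Text.class, Text.class, Text.class, new Configuration(false));
		ChainMapper.addMapper(job, NewApiMapper02.class,
				Text.class, Text.class, Text.class, Text.class, new Configuration(false));

		// REDUCE : exactly one Reducer, registered via ChainReducer.setReducer.
		ChainReducer.setReducer(job, NewApiReducer03.class,
				Text.class, Text.class, Text.class, Text.class, new Configuration(false));

		// MAP* : optional Mappers that run after the Reducer, added via ChainReducer.addMapper.
		ChainReducer.addMapper(job, NewApiMapper04.class,
				Text.class, Text.class, Text.class, Text.class, new Configuration(false));

		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(Text.class);
		FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
		FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory, must not exist yet
		System.exit(job.waitForCompletion(true) ? 0 : 1);
	}
}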





The run log is as follows:

模式:  192.168.75.130:9001
存在此输出路径,已删除!!!
WARN - JobClient.copyAndConfigureFiles(746) | Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
WARN - NativeCodeLoader.<clinit>(52) | Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
WARN - LoadSnappy.<clinit>(46) | Snappy native library not loaded
INFO - FileInputFormat.listStatus(199) | Total input paths to process : 1
INFO - JobClient.monitorAndPrintJob(1380) | Running job: job_201405072054_0009
INFO - JobClient.monitorAndPrintJob(1393) |  map 0% reduce 0%
INFO - JobClient.monitorAndPrintJob(1393) |  map 50% reduce 0%
INFO - JobClient.monitorAndPrintJob(1393) |  map 100% reduce 0%
INFO - JobClient.monitorAndPrintJob(1393) |  map 100% reduce 33%
INFO - JobClient.monitorAndPrintJob(1393) |  map 100% reduce 100%
INFO - JobClient.monitorAndPrintJob(1448) | Job complete: job_201405072054_0009
INFO - Counters.log(585) | Counters: 30
INFO - Counters.log(587) |   Job Counters 
INFO - Counters.log(589) |     Launched reduce tasks=1
INFO - Counters.log(589) |     SLOTS_MILLIS_MAPS=11357
INFO - Counters.log(589) |     Total time spent by all reduces waiting after reserving slots (ms)=0
INFO - Counters.log(589) |     Total time spent by all maps waiting after reserving slots (ms)=0
INFO - Counters.log(589) |     Launched map tasks=2
INFO - Counters.log(589) |     Data-local map tasks=2
INFO - Counters.log(589) |     SLOTS_MILLIS_REDUCES=9972
INFO - Counters.log(587) |   File Input Format Counters 
INFO - Counters.log(589) |     Bytes Read=183
INFO - Counters.log(587) |   File Output Format Counters 
INFO - Counters.log(589) |     Bytes Written=19
INFO - Counters.log(587) |   FileSystemCounters
INFO - Counters.log(589) |     FILE_BYTES_READ=57
INFO - Counters.log(589) |     HDFS_BYTES_READ=391
INFO - Counters.log(589) |     FILE_BYTES_WRITTEN=174859
INFO - Counters.log(589) |     HDFS_BYTES_WRITTEN=19
INFO - Counters.log(587) |   Map-Reduce Framework
INFO - Counters.log(589) |     Map output materialized bytes=63
INFO - Counters.log(589) |     Map input records=10
INFO - Counters.log(589) |     Reduce shuffle bytes=63
INFO - Counters.log(589) |     Spilled Records=8
INFO - Counters.log(589) |     Map output bytes=43
INFO - Counters.log(589) |     Total committed heap usage (bytes)=336338944
INFO - Counters.log(589) |     CPU time spent (ms)=1940
INFO - Counters.log(589) |     Map input bytes=122
INFO - Counters.log(589) |     SPLIT_RAW_BYTES=208
INFO - Counters.log(589) |     Combine input records=0
INFO - Counters.log(589) |     Reduce input records=4
INFO - Counters.log(589) |     Reduce input groups=3
INFO - Counters.log(589) |     Combine output records=0
INFO - Counters.log(589) |     Physical memory (bytes) snapshot=460980224
INFO - Counters.log(589) |     Reduce output records=2
INFO - Counters.log(589) |     Virtual memory (bytes) snapshot=2184105984
INFO - Counters.log(589) |     Map output records=4




The data produced in the output path hdfs://192.168.75.130:9000/root/outputchain matches the expected result shown above (手套 12 and 订餐 2).

To sum up: during testing I found that if another Mapper must run after the Reduce, you have to call ChainReducer.setReducer first to register the job's single, globally unique Reducer, and only then call ChainReducer.addMapper to attach the trailing Mapper; doing it the other way round causes a NullPointerException at runtime, so this deserves special attention.
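
As a minimal reminder (reusing the JobConf conf and the classes from the listing above), the two calls must appear in this order; swapping them is what triggers the NullPointerException described above:

ChainReducer.setReducer(conf, AReducer03.class, Text.class, Text.class, Text.class, Text.class, false, new JobConf(false));
ChainReducer.addMapper(conf, AMapper04.class, Text.class, Text.class, Text.class, Text.class, true, new JobConf(false));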

Reposted from weitao1026.iteye.com/blog/2267049