Example of calculating the value of π with Hadoop MapReduce in Java

The idea: take a unit square and, centered on one of its corners, draw a circular arc of radius equal to the side length, so that a quarter circle lies inside the square. Generate many random points inside the square; some fall inside the quarter circle and some fall outside. The area of the square is 1, and the area of the quarter circle is 0.25π. If n points are generated in total and nc of them fall inside the quarter circle, then once the points are dense enough, the ratio nc/n approximates the ratio of the quarter circle's area to the square's area, i.e. nc/n ≈ 0.25π / 1, so π ≈ 4 · nc / n.
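To make the idea concrete, here is a minimal stand-alone sketch of this estimate using java.util.Random (this MonteCarloPi class is illustrative only and is not part of the original code; the MapReduce version below replaces Random with a Halton sequence):

import java.util.Random;

// Minimal local Monte Carlo estimate of pi, for illustration only.
public class MonteCarloPi {
    public static void main(String[] args) {
        Random random = new Random();
        int n = 1_000_000; // total number of points
        int nc = 0;        // points inside the quarter circle
        for (int i = 0; i < n; i++) {
            double x = random.nextDouble();
            double y = random.nextDouble();
            if (x * x + y * y <= 1) {
                nc++;
            }
        }
        System.out.println("pi is approximately " + 4.0 * nc / n);
    }
}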

The first problem is generating the random points. Here we use the Halton sequence, a low-discrepancy sequence whose sample points are spread very uniformly over the square, which gives higher accuracy than plain pseudo-random sampling. For example, the base-2 Halton sequence is 1/2, 1/4, 3/4, 1/8, 5/8, ..., obtained by mirroring the base-2 digits of 1, 2, 3, ... about the radix point.

Below is code found online that uses the Halton sequence to generate random sample points:

 

public class Pi {
    static int digit = 40; // number of digits to precompute per base
    private int[] bases = new int[2];           // one base per dimension, e.g. {2, 5}
    private double[] baseDigit = new double[2]; // 1/base for each dimension
    private double[][] background = new double[2][digit]; // background[i][j] = bases[i]^-(j+1)
    private long index; // index of the next point in the sequence
    
    Pi(int[] base) {
        bases = base.clone();
        index = 0;
 
        // Precompute the negative powers of each base used to weight the digits.
        for(int i=0; i<bases.length; i++) {
            double b = 1.0/bases[i];
            baseDigit[i] = b;
            for(int j=0; j<digit; j++) {
                background[i][j] = j == 0 ? b : background[i][j-1]*b;
            }
        }
    }
    
    // Returns the next 2-D Halton point: the radical inverse of index in each base.
    double[] getNext() {
        index++;
        
        double[] result = {0,0};
 
        for(int i=0; i<bases.length; i++) {
            long num = index;
            int j = 0;
            // Mirror the base-b digits of index about the radix point.
            while(num != 0) {
                result[i] += num % bases[i] * background[i][j++];
                num /= bases[i];
            }
        }
        
        return result;
    }
    
    public static void main(String[] args) {
        // Print the first 100 points of the Halton sequence with bases 2 and 5.
        int[] base = {2,5};
        Pi test = new Pi(base);
        for(int x = 0; x < 100; x++){
            double[] t = test.getNext();
            System.out.println(t[0] + "\t" + t[1]);
        }
    }
}
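Before wiring this into MapReduce, a quick local sanity check can confirm that the Halton points give a reasonable estimate. This HaltonPiCheck class is a hypothetical snippet, not from the original post, and assumes the Pi class above is in the same package:

// Hypothetical local sanity check: estimate pi directly from the
// Halton generator above, without Hadoop.
public class HaltonPiCheck {
    public static void main(String[] args) {
        Pi halton = new Pi(new int[]{2, 5}); // Halton sequence with bases 2 and 5
        int n = 100000; // total number of points
        int nc = 0;     // points inside the quarter circle
        for (int i = 0; i < n; i++) {
            double[] t = halton.getNext();
            if (t[0] * t[0] + t[1] * t[1] <= 1) {
                nc++;
            }
        }
        System.out.println("pi is approximately " + 4.0 * nc / n); // should be close to 3.14
    }
}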

 

Here is the MapReduce code that calculates the value of π:

package mapreduce;

import java.io.IOException;


import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import mapreduce.Pi; // the Halton sequence generator shown above, used to produce random points

/**
 * @author Sakura
 * 2019.9.3
 * Calculates the value of π using MapReduce
 */
public class CalPI {
    public static class PiMapper extends Mapper<Object, Text, Text, IntWritable> {

        int number = 0; // running total of the number of points generated
        
        // Each call to map() handles one line of the input file; in this example
        // the file has ten lines, each containing the value 100000.
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            int pointNum = Integer.parseInt(value.toString()); // points requested by this line
            number += pointNum; // add them to the running total
            int[] base = {2, 5}; // bases for the Halton sequence
            Pi test = new Pi(base); // the random-point generator
            for (int x = 0; x < number; x++) { // generate the random points
                double[] t = test.getNext(); // coordinates of the next point
                System.out.println(t[0] + "\t" + t[1]); // echo the point to the console
                IntWritable result = new IntWritable(0); // output value, 0 = outside the quarter circle
                if ((t[0] * t[0] + t[1] * t[1]) <= 1) // is the point inside the quarter circle?
                {
                    result = new IntWritable(1); // yes: output value 1
                }
                value.set(String.valueOf(number)); // output key: the current total number of points
                context.write(value, result); // emit (total points, 0 or 1)
            }
        }
    }

    public static class PiReducer extends Reducer<Text, IntWritable, Text, DoubleWritable> {
        private DoubleWritable result = new DoubleWritable(); // declare the output value

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {

            double pointNum = Double.parseDouble(key.toString()); // total number of points, taken from the key
            double sum = 0; // count of points inside the quarter circle
            for (IntWritable val : values) { // accumulate the 0/1 values
                sum += val.get();
            }
            result.set(sum / pointNum * 4); // π ≈ 4 * nc / n
            
            context.write(key, result); // write (total number of points, estimated π) to the context
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf,"calculate pi");
        job.setJarByClass(CalPI.class);
        job.setMapperClass(PiMapper.class);
        job.setReducerClass(PiReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);

        Path in = new Path("hdfs://192.168.68.130:9000/user/hadoop/nai.txt"); // input file path
        Path out = new Path("hdfs://192.168.68.130:9000/user/hadoop/output4"); // output path; output4 must not already exist
        FileInputFormat.addInputPath(job, in);
        FileOutputFormat.setOutputPath(job, out);
        System.exit(job.waitForCompletion(true) ? 0 : 1);

    }


}
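For reference, the input file nai.txt used above is just ten lines, each containing 100000, so each map() call reads one line. A throwaway helper like the following (a hypothetical class, not part of the original post) can produce it; the file can then be uploaded with hdfs dfs -put nai.txt /user/hadoop/ before running the job:

import java.io.IOException;
import java.io.PrintWriter;

// Hypothetical helper: writes the ten-line input file described above,
// each line requesting 100000 points.
public class MakeInput {
    public static void main(String[] args) throws IOException {
        try (PrintWriter writer = new PrintWriter("nai.txt")) {
            for (int i = 0; i < 10; i++) {
                writer.println(100000);
            }
        }
    }
}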

 
