Custom Maven skeleton (Archetype)

First, why customize a skeleton?

Programmers can define a Maven Archetype (skeleton) according to their own needs. In later projects they simply select the custom skeleton, and the pom, the other configuration files, and the code scaffolding they need are generated automatically, which simplifies development and testing.

Second, how to customize the skeleton

Let's take customizing a skeleton for a MapReduce project as an example.
1. Create a template module

2. Write the pom configuration and Java code
  • In the pom.xml file, add the Hadoop dependencies:
<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.5.2</version>
</dependency>
<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>2.5.2</version>
</dependency>
<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.5.2</version>
</dependency>
<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-mapreduce-client-core</artifactId>
      <version>2.5.2</version>
</dependency>
<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-yarn-common</artifactId>
      <version>2.5.2</version>
</dependency>
  • MR jobs must run on a YARN cluster.
  • The Maven build can be used to simplify packaging, uploading, and running the MR job on the YARN cluster:
<!-- package the jar (with its main class), upload it and run it remotely on YARN: bin/yarn jar xxx.jar -->
 <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-jar-plugin</artifactId>
         <version>2.3.2</version>
         <configuration>
           <outputDirectory>${basedir}</outputDirectory>
           <archive>
             <manifest>
               <mainClass>${mainClass}</mainClass>
             </manifest>
           </archive>
         </configuration>
  </plugin>

 <extensions>
      <extension>
        <groupId>org.apache.maven.wagon</groupId>
        <artifactId>wagon-ssh</artifactId>
        <version>2.8</version>
      </extension>
 </extensions>

 <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>wagon-maven-plugin</artifactId>
         <version>1.0</version>
         <configuration>
           <fromFile>${project.build.finalName}.jar</fromFile>
           <url>scp://root:123456@${target-host}${target-position}</url>
           <commands>
             <command>pkill -f ${project.build.finalName}.jar</command>
             <command>nohup /opt/install/hadoop-2.5.2/bin/yarn jar ${target-position}/${project.build.finalName}.jar > /root/nohup.out 2>&amp;1 &amp;</command> 
           </commands>
           <!-- display the output of the executed commands -->
           <displayCommandOutputs>true</displayCommandOutputs>
         </configuration>
 </plugin>
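
The plugin configuration above references the Maven properties ${mainClass}, ${target-host} and ${target-position}, which are not shown in the snippets. Presumably they are declared in the pom's <properties> section; a minimal sketch with placeholder values (the real values depend on your project and cluster) could look like this:

 <properties>
         <!-- fully qualified driver class written into the jar manifest (example value) -->
         <mainClass>com.baidu.MyMapReduce</mainClass>
         <!-- host and remote directory the jar is copied to (example values) -->
         <target-host>192.168.1.100</target-host>
         <target-position>/opt/jobs</target-position>
 </properties>

With these defined, a typical workflow is mvn clean package followed by the wagon goals (for example wagon:upload-single and wagon:sshexec) to copy the jar to the cluster and launch it.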
  • Code skeleton (Map, Reduce, and the Job driver):
/*1. Map*/
public class MyMaper extends Mapper<LongWritable, Text, Text, IntWritable> {

    @Override
    protected void map(LongWritable k1, Text v1, Context context) throws IOException, InterruptedException {
        //todo
    }
}

/*2. Reduce*/
public class MyReduce extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text k2, Iterable<IntWritable> v2s, Context context) throws IOException, InterruptedException {
        //todo
    }
}

/*3. Job*/
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "MyFirstJob");
        // the job is submitted and run as a jar package
        job.setJarByClass(MyMapReduce.class);

        // InputFormat
        Path path = new Path("/src/data");
        TextInputFormat.addInputPath(job,path);

        //Map
        job.setMapperClass(MyMaper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // shuffle: handled with the defaults, no settings needed

        //reduce
        job.setReducerClass(MyReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // the output directory must not already exist; MR creates it dynamically
        Path out = new Path("/dest2");
        FileSystem fileSystem = FileSystem.get(conf);
        fileSystem.delete(out,true);
        TextOutputFormat.setOutputPath(job,out);
        // run the job
        job.waitForCompletion(true);
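
The Job code above is only the body of the driver. Presumably it lives in the main method of the MyMapReduce class referenced by setJarByClass; a minimal sketch of the complete driver, including the imports it needs (assuming MyMaper and MyReduce sit in the same package), could look like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class MyMapReduce {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "MyFirstJob");
        // the job is submitted and run as a jar package
        job.setJarByClass(MyMapReduce.class);

        // InputFormat
        TextInputFormat.addInputPath(job, new Path("/src/data"));

        // Map
        job.setMapperClass(MyMaper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // Reduce (shuffle uses the defaults)
        job.setReducerClass(MyReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // the output directory must not already exist, so delete it first
        Path out = new Path("/dest2");
        FileSystem.get(conf).delete(out, true);
        TextOutputFormat.setOutputPath(job, out);

        // run the job
        job.waitForCompletion(true);
    }
}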

3. Run the following command at the root of this project:

mvn --settings D:\apache-maven-3.6.1\conf\settings.xml archetype:create-from-project

4. Copy the skeleton's coordinates (they are needed in the later steps)

  <groupId>com.baidu</groupId>
  <artifactId>hadoop-mr-archetype</artifactId>
  <version>1.0-SNAPSHOT</version>

5. Install the skeleton

cd target\generated-sources\archetype
mvn clean install

6. Create a project from the skeleton
At this point, the project created from the skeleton already contains the pom dependencies, the configuration, and the MapReduce template classes you built yourself.
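
If you prefer the command line to the IDEA wizard, a project can presumably also be generated from the installed skeleton with the standard archetype:generate goal, using the coordinates noted in step 4 (the groupId and artifactId of the new project below are only placeholders):

mvn archetype:generate -DarchetypeGroupId=com.baidu -DarchetypeArtifactId=hadoop-mr-archetype -DarchetypeVersion=1.0-SNAPSHOT -DgroupId=com.example -DartifactId=my-mr-job -Dversion=1.0-SNAPSHOT -DinteractiveMode=false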

Third, how to remove unwanted Maven skeletons from IDEA

  • Find the IntelliJ IDEA skeleton configuration file.
It is roughly at this location:
C:\Users\${user}\.IntelliJIdea${version}\system\Maven\Indices
In this directory there is a file named UserArchetypes.xml; open it to see the registered custom skeletons.

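Roughly speaking, UserArchetypes.xml lists one entry per registered custom skeleton, identified by its coordinates; a sketch of what such an entry may look like (this is an assumption, and the exact element names can differ between IDEA versions):

<archetypes>
  <archetype groupId="com.baidu" artifactId="hadoop-mr-archetype" version="1.0-SNAPSHOT" />
</archetypes>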
You can delete the unwanted custom skeleton entries from it; if none of them are needed, you can delete the entire file. Restart IDEA and the deleted skeletons will be gone.



Origin: blog.csdn.net/Mr_YXX/article/details/104940906