Hadoop Big Data Technologies: HDFS (4) - Building and Testing the HDFS Client

Chapter 4: Building and Testing the HDFS Client

4.1 Testing the Connection to the Virtual Machine

Step 1: Create a Maven Java project in IDEA

(screenshots of the IDEA project-creation dialogs)

Step 2: Add the Maven dependencies

Add the HDFS coordinates to pom.xml. Some of these coordinates are not needed yet, but they will be used later in this project, so we add them all at once:

<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-common</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-client</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-common</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>net.minidev</groupId>
        <artifactId>json-smart</artifactId>
        <version>2.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.12.1</version>
    </dependency>
    <dependency>
        <groupId>org.anarres.lzo</groupId>
        <artifactId>lzo-hadoop</artifactId>
        <version>1.0.6</version>
    </dependency>
</dependencies>
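
Before going further, it can be worth confirming that the Hadoop client jars actually resolved onto the classpath. Here is a minimal sketch of such a check; the class name HadoopVersionCheck is made up for illustration, while org.apache.hadoop.util.VersionInfo ships with hadoop-common:

import org.apache.hadoop.util.VersionInfo;

public class HadoopVersionCheck {
    public static void main(String[] args) {
        // Should print 3.1.2 if the dependencies above resolved correctly
        System.out.println("Hadoop client version: " + VersionInfo.getVersion());
    }
}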

Step 3: Add a log configuration file to the resources directory

log4j.properties

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n
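
Note that this file uses Log4j 1.x syntax, which Hadoop's own logging reads through the Log4j 1.x binding it brings in transitively; the log4j-core 2.12.1 coordinate in the pom is the separate Log4j 2 library and does not read this file. As a quick check that the configuration is picked up, you can log through SLF4J. This is a minimal sketch, assuming the SLF4J binding that hadoop-client declares transitively; the class name and message are illustrative:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogConfigCheck {
    // SLF4J routes to the Log4j 1.x binding pulled in by the Hadoop dependencies
    private static final Logger LOG = LoggerFactory.getLogger(LogConfigCheck.class);

    public static void main(String[] args) {
        // Appears on the console via the stdout appender configured above
        LOG.info("log4j.properties loaded from the resources directory");
    }
}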

Step 4: Write the Java test code

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.junit.Test;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

public class HDFSTest {
    /**
     * Test connecting to Hadoop's HDFS from Java
     * @throws URISyntaxException
     * @throws IOException
     */
    @Test
    public void test() throws URISyntaxException, IOException {
        // Hostname of the virtual machine; the name must be mapped in the local
        // hosts file, otherwise HDFS can only be reached by IP address
        String hdfs = "hdfs://hadoop101:9000";
        // 1. Get the file system
        Configuration cfg = new Configuration();
        FileSystem fs = FileSystem.get(new URI(hdfs), cfg);

        System.out.println(cfg);
        System.out.println(fs);
        System.out.println("HDFS is up!!!");

        // 2. Release the connection
        fs.close();
    }
}
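
Once the connection works, the same FileSystem handle supports real operations. Below is a minimal follow-up sketch, assuming the same hdfs://hadoop101:9000 address; the /test directory name is just an example:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.net.URI;

public class HDFSListTest {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new URI("hdfs://hadoop101:9000"), new Configuration());
        // Create a directory, then list the root to confirm the write went through
        fs.mkdirs(new Path("/test"));
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath() + (status.isDirectory() ? " (dir)" : " (file)"));
        }
        fs.close();
    }
}

If the NameNode rejects the write because of your local user name, FileSystem.get also has a three-argument overload that accepts the remote user name, e.g. FileSystem.get(uri, cfg, "root").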

Step 5: Run the test and check the result. If the connection succeeds, the console prints the Configuration and FileSystem objects followed by the success message.



Source: blog.csdn.net/zy13765287861/article/details/104642666