Hadoop Learning Notes (4): Operating HDFS via the Java API

Java Access Interfaces for HDFS

  1) org.apache.hadoop.fs.FileSystem
    A generic file system API that provides a unified way of accessing different file systems.
  2) org.apache.hadoop.fs.Path
    The unified description of a file or directory in a Hadoop file system, analogous to what java.io.File is for files and directories on the local file system.
  3) org.apache.hadoop.conf.Configuration
    A utility class for reading and parsing configuration files (such as core-site.xml, hdfs-default.xml, and hdfs-site.xml), and for adding configuration programmatically.
  4) org.apache.hadoop.fs.FSDataOutputStream
    The unified wrapper around Hadoop data output streams.
  5) org.apache.hadoop.fs.FSDataInputStream
    The unified wrapper around Hadoop data input streams. The snippet below shows how these five classes fit together.
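As a minimal sketch of how the five classes cooperate (the NameNode address "hdfs://IP:9000", the "hdfs" user, and the file path are placeholders for illustration): note that FSDataInputStream additionally supports seek(), which a plain java.io.InputStream does not.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.net.URI;

public class SeekDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();                                // Configuration
        FileSystem fs = FileSystem.get(new URI("hdfs://IP:9000"), conf, "hdfs"); // FileSystem
        Path p = new Path("/user/liuhl/someWords.txt");                          // Path
        try (FSDataInputStream in = fs.open(p)) {                                // FSDataInputStream
            System.out.println((char) in.read()); // read the first byte
            in.seek(0);                           // rewind; a plain InputStream cannot do this
            System.out.println((char) in.read()); // read the same byte again
        }
        fs.close();
    }
}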

Main Programming Steps for Accessing HDFS from Java

  1) Build a Configuration object, which reads and parses the relevant configuration files:
    Configuration conf = new Configuration();
  2) Set the relevant properties:
    conf.set("fs.defaultFS", "hdfs://IP:9000");
  3) Obtain a concrete file system instance fs (here, an HDFS instance):
    FileSystem fs = FileSystem.get(new URI("hdfs://IP:9000"), conf, "hdfs");
  4) Perform file operations through fs (here, deleting a file; the four steps are assembled into a complete sketch below):
    fs.delete(new Path("/user/liuhl/someWords.txt"), false);
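Assembled, the four steps form a minimal runnable sketch. The IP address, user name, and path are placeholders; note that delete(Path) alone is deprecated, so the two-argument form is used, with false meaning non-recursive (appropriate for a single file):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.net.URI;

public class DeleteFileDemo {
    public static void main(String[] args) throws Exception {
        // 1) Build the Configuration object
        Configuration conf = new Configuration();
        // 2) Set the default file system (replace IP with the NameNode address)
        conf.set("fs.defaultFS", "hdfs://IP:9000");
        // 3) Obtain the HDFS instance, acting as the "hdfs" user
        FileSystem fs = FileSystem.get(new URI("hdfs://IP:9000"), conf, "hdfs");
        // 4) Delete the file; false = do not delete recursively
        boolean deleted = fs.delete(new Path("/user/liuhl/someWords.txt"), false);
        System.out.println("deleted: " + deleted);
        fs.close();
    }
}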

1. Create a new Maven project: hadoop-hdfs-demo.

The pom.xml is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.hadoop.demo</groupId>
    <artifactId>hadoop-hdfs-demo</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.8.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.8.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.8.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.8.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-auth</artifactId>
            <version>2.8.1</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>
        <dependency>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
            <version>1.2</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.16.10</version>
        </dependency>
    </dependencies>

    
</project>

2. Create a class that connects to Hadoop

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

public class ConnectHadoop {
    public static FileSystem getHadoopFileSystem() {
        FileSystem fs = null;
        Configuration conf = null;
        // conf needs no explicit settings here; reading the remote configuration is enough
        conf = new Configuration();
        // The Hadoop user name, i.e. the login user on the master machine
        String hdfsUserName = "root";

        URI hdfsUri = null;
        try {
            // The HDFS access URI
            hdfsUri = new URI("hdfs://192.168.137.100:9000");
        } catch (URISyntaxException e) {
            e.printStackTrace();
        }

        try {
            // Create the HDFS FileSystem instance against the remote NameNode
            fs = FileSystem.get(hdfsUri,conf,hdfsUserName);
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        return fs;
    }

}
hdfs://192.168.137.100:9000 is the address configured in core-site.xml on the master node.
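With the connection class in place, a typical usage pattern looks like the following sketch. The /demo directory and hello.txt file name are made up for illustration; FSDataOutputStream and FSDataInputStream come from fs.create and fs.open respectively:

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUsageDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = ConnectHadoop.getHadoopFileSystem();

        // Create a directory (example path)
        fs.mkdirs(new Path("/demo"));

        // Write a file through FSDataOutputStream (true = overwrite if it exists)
        Path file = new Path("/demo/hello.txt");
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.writeUTF("hello hdfs");
        }

        // Read it back through FSDataInputStream
        try (FSDataInputStream in = fs.open(file)) {
            System.out.println(in.readUTF());
        }

        fs.close();
    }
}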
 


Reprinted from www.cnblogs.com/boshen-hzb/p/9097402.html