HDFS common API operations

I'm a complete beginner who has just started learning big data, and I'd like to share what I've learned with everyone.

1. put: file upload, corresponding to hadoop fs -put (equivalent to hadoop fs -copyFromLocal)

//Upload a file
    @Test
    public void put() throws IOException, InterruptedException {

        //Connect to the HDFS server as user "zmk"
        FileSystem fs = FileSystem.get(URI.create("hdfs://hadoop102:9000"), new Configuration(), "zmk");

        //Perform the upload through fs
        fs.copyFromLocalFile(new Path("d:\\hdfsput.txt"), new Path("/"));

        //Close the connection
        fs.close();
    }
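copyFromLocalFile also has overloads with extra flags controlling whether the local source is deleted and whether an existing target is overwritten. A minimal sketch (the paths are just examples):

    //delSrc=false keeps the local file; overwrite=true replaces the target if it already exists
    fs.copyFromLocalFile(false, true, new Path("d:\\hdfsput.txt"), new Path("/hdfsput.txt"));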

As you can see above, a connection must be established before every operation. To avoid repeating that setup, we move the connection code into a method annotated with @Before, which JUnit runs before every @Test method. For the same reason, we release the connection in a method annotated with @After, which runs after every test.

    //Imports needed by these test snippets (Hadoop client API and JUnit 4)
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.*;
    import org.junit.*;
    import java.io.IOException;
    import java.net.URI;

    private FileSystem fs;

    @Before
    public void before() throws IOException, InterruptedException {

        fs = FileSystem.get(URI.create("hdfs://hadoop102:9000"), new Configuration(), "zmk");
        System.out.println("Before ran!!!!");
    }

    @After
    public void after() throws IOException {

        fs.close(); //Release the connection
        System.out.println("After ran!!!!");
    }

2. copyToLocalFile: file download, corresponding to hadoop fs -get (equivalent to hadoop fs -copyToLocal)

//Download a file to the local file system
    @Test
    public void get() throws IOException {

        fs.copyToLocalFile(new Path("/wcoutput"), new Path("d:\\"));
    }
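The two-argument copyToLocalFile writes a local .crc checksum file alongside the download and relies on the checksummed local file system, which can cause trouble on Windows machines without the Hadoop native libraries. A sketch of the four-argument overload that works around this, under those assumptions:

    //delSrc=false keeps the HDFS copy; useRawLocalFileSystem=true skips writing the local .crc file
    fs.copyToLocalFile(false, new Path("/wcoutput"), new Path("d:\\"), true);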

3. mkdirs: create a folder, corresponding to hadoop fs -mkdir

//Create a directory
    @Test
    public void make() throws IOException {

        fs.mkdirs(new Path("/aaa"));
    }
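Note that mkdirs behaves like hadoop fs -mkdir -p: it creates any missing parent directories in one call. A quick sketch with an example path:

    //Also creates /bbb and /bbb/ccc if they do not exist yet
    fs.mkdirs(new Path("/bbb/ccc/ddd"));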

4. delete: the delete operation, corresponding to hadoop fs -rm

//Delete a file on HDFS
    @Test
    public void delete() throws IOException {

        boolean flag = fs.delete(new Path("/hdfsput.txt"), true);
        if (flag)
            System.out.println("Delete succeeded");
        else
            System.out.println("Delete failed");
    }
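The second argument of delete is the recursive flag: it must be true to delete a non-empty directory (the equivalent of hadoop fs -rm -r), and it has no effect when the path is a single file. A sketch with an example directory:

    //recursive=true removes the directory and everything under it
    fs.delete(new Path("/aaa"), true);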

5. rename: rename a file, corresponding to hadoop fs -mv

//Rename a file
    @Test
    public void rename() throws IOException {

        fs.rename(new Path("/hdfsput.txt"), new Path("/hdfsrename.txt"));
    }
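Like hadoop fs -mv, rename can also move a file into another directory rather than just change its name, and it returns a boolean you can check. A sketch with example paths:

    //Move the file into /aaa, keeping its name; rename returns false if the move did not happen
    boolean moved = fs.rename(new Path("/hdfsrename.txt"), new Path("/aaa/hdfsrename.txt"));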

6. listFiles: view file details

//View HDFS file details
    //View the file name, permission, length, and block information
    @Test
    public void listFile() throws IOException {

        //List all files under the HDFS root directory recursively; returns an iterator
        RemoteIterator<LocatedFileStatus> fileStatus = fs.listFiles(new Path("/"), true);
        while (fileStatus.hasNext()) {

            LocatedFileStatus file = fileStatus.next();
            System.out.println("File path: " + file.getPath());
            System.out.println("Owner: " + file.getOwner());
            BlockLocation[] blockLocations = file.getBlockLocations(); //Get the block locations
            for (BlockLocation blockLocation : blockLocations) {

                System.out.println("Block info: " + blockLocation);
                String[] hosts = blockLocation.getHosts();
                for (String host : hosts) {

                    System.out.println("Block is stored on host " + host);
                }
            }
        }
    }
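The header comment also mentions permission and length, which the loop above does not print; LocatedFileStatus exposes them through getters. A sketch of extra lines you could add inside the while loop:

    //More details available on each LocatedFileStatus
    System.out.println("Length: " + file.getLen());
    System.out.println("Permission: " + file.getPermission());
    System.out.println("Block size: " + file.getBlockSize());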

7. listStatus: determine whether a path is a file or a directory

//Determine whether each entry is a file or a directory
    @Test
    public void liststatus() throws IOException {

        FileStatus[] statuses = fs.listStatus(new Path("/"));
        for (FileStatus status : statuses) {

            if (status.isFile()) {

                System.out.println("It is a file, path: " + status.getPath());
            } else {

                System.out.println("It is a directory, path: " + status.getPath());
            }
        }
    }
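Unlike listFiles above, listStatus is not recursive: it only returns the immediate children of the given directory. If you just want to check one specific path, getFileStatus is more direct; a sketch with an example path:

    //getFileStatus returns a single FileStatus (throws FileNotFoundException if the path does not exist)
    FileStatus status = fs.getFileStatus(new Path("/aaa"));
    System.out.println(status.isDirectory() ? "It is a directory" : "It is a file");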

Thanks for reading! If you spot any mistakes, please point them out.
