Writing to HDFS with the Hadoop API (Java)

The code is as follows:

package com.hadoop.cluster;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsOps {

    public static void main(String[] args) throws Exception {
        writeHdfs();
    }

    public static void writeHdfs() throws Exception {
        // Loads core-default.xml (and core-site.xml, if present) from the classpath
        Configuration configuration = new Configuration();
        // Returns the filesystem configured as the default filesystem
        FileSystem fileSystem = FileSystem.get(configuration);
        Path path = new Path("hdfs://namenode:8020/data/test.txt");
        // Create (or overwrite) the file and write a test string
        FSDataOutputStream stream = fileSystem.create(path);
        stream.write("test hdfs api write".getBytes());
        stream.close();
    }
}

Running it throws the following error:

Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://namenode:8020/data, expected: file:///

The cause: by default, Configuration reads core-default.xml from the classpath, and core-default.xml sets the default filesystem to the local filesystem (file:///). FileSystem.get(configuration) therefore returns a LocalFileSystem, which rejects the hdfs:// path with the "Wrong FS" error above.
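You can see this directly: with no core-site.xml on the classpath, a minimal sketch like the following (assuming nothing beyond a plain Hadoop classpath) prints file:///, i.e. FileSystem.get returns a LocalFileSystem:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ShowDefaultFs {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // With no core-site.xml on the classpath, the default filesystem
        // comes from core-default.xml: file:/// (the local filesystem)
        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs.getUri()); // prints file:///
    }
}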

The fix is as follows:

Create a core-site.xml file under the src directory (so that it ends up on the classpath) with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://namenode:8020</value>
    </property>
</configuration>
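Note that fs.default.name is the legacy key; since Hadoop 2.x the preferred key is fs.defaultFS (the old name still works but triggers a deprecation warning). If you would rather not ship an XML file, the same fix can be done in code; the sketch below assumes the same namenode address as above:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HdfsFsInCode {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Equivalent of the core-site.xml entry, set programmatically
        conf.set("fs.defaultFS", "hdfs://namenode:8020");
        FileSystem fs = FileSystem.get(conf);
        // Alternatively, pick the filesystem from the URI itself,
        // regardless of what fs.defaultFS is set to
        FileSystem fsByUri = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        System.out.println(fs.getUri() + " " + fsByUri.getUri());
    }
}

With either approach, rerunning HdfsOps writes /data/test.txt successfully; you can verify the content by opening the path with fileSystem.open and reading it back.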


Reposted from blog.csdn.net/lonewolf1992/article/details/88954079