1. Configure the Hadoop package on Windows
- Download the Hadoop package (the tarball previously installed on Linux can be extracted and used directly).
  Download link: https://hadoop.apache.org/releases.html
- Extract it to a path that contains no Chinese characters.
  Note: to run the client on Windows you may also need winutils.exe (and hadoop.dll) placed in %HADOOP_HOME%\bin; without them some operations only emit a warning, but others can fail.
2. Configure environment variables
- Configure the HADOOP_HOME environment variable (pointing to the extracted directory);
- Configure the Path environment variable (typically by appending %HADOOP_HOME%\bin).
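With the variables set, you can sanity-check that the JVM actually sees them. A minimal sketch (the EnvCheck class name is just for illustration):

public class EnvCheck {
    public static void main(String[] args) {
        // Print the Hadoop-related environment variables visible to the JVM
        System.out.println("HADOOP_HOME = " + System.getenv("HADOOP_HOME"));
        String path = System.getenv("Path"); // env lookup is case-insensitive on Windows
        System.out.println("Path mentions hadoop: "
                + (path != null && path.toLowerCase().contains("hadoop")));
    }
}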
3. Create a Maven project in IDEA
- After the project is created, import the dependencies:
<dependencies>
    <!-- https://mvnrepository.com/artifact/junit/junit -->
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
        <scope>test</scope>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.mrunit/mrunit -->
    <dependency>
        <groupId>org.apache.mrunit</groupId>
        <artifactId>mrunit</artifactId>
        <version>1.1.0</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>2.5</version>
        <type>pom</type>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>3.0.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>3.0.0</version>
    </dependency>
</dependencies>
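After Maven resolves these dependencies, a quick way to confirm the Hadoop client jars are on the classpath is to print the client version. A minimal sketch (the DependencyCheck class name is illustrative):

import org.apache.hadoop.util.VersionInfo;

public class DependencyCheck {
    public static void main(String[] args) {
        // Prints the version of the Hadoop libraries Maven pulled in (expected: 3.0.0)
        System.out.println("Hadoop client version: " + VersionInfo.getVersion());
    }
}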
- In the project's src/main/resources directory, create a new file named "log4j.properties" and put the following in it:
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n
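These are log4j 1.x properties; Hadoop 3.0.0 logs through slf4j backed by log4j 1.2, which hadoop-common pulls in transitively, so this file controls the HDFS client's log output. A minimal sketch of writing to the same configuration yourself (the LogDemo class name is illustrative, and it assumes log4j 1.2 is on the classpath via hadoop-common):

import org.apache.log4j.Logger;

public class LogDemo {
    private static final Logger LOG = Logger.getLogger(LogDemo.class);

    public static void main(String[] args) {
        // Written to the console and to target/spring.log per log4j.properties
        LOG.info("HDFS client logging is configured");
    }
}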
- Create the HdfsClientTest test class. The code is as follows:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

import java.io.IOException;
import java.net.URISyntaxException;

public class HdfsClientTest {
    @Test
    public void testMkdirs() throws IOException, InterruptedException, URISyntaxException {
        // Set the Hadoop user name the client will act as
        System.setProperty("HADOOP_USER_NAME", "root");
        // 1. Get the file system
        Configuration conf = new Configuration();
        // Point the client at the cluster's NameNode
        conf.set("fs.defaultFS", "hdfs://192.168.137.150:9000");
        FileSystem fs = FileSystem.get(conf);
        // 2. Create the directory
        fs.mkdirs(new Path("/hadoop/test"));
        // 3. Release resources
        fs.close();
    }
}
Result: running the test creates the /hadoop/test directory on HDFS.
When a client operates on HDFS, it does so under a user identity. By default, the HDFS client API reads that identity from a JVM parameter: -DHADOOP_USER_NAME=hadoop, where hadoop is the user name. Here I used the root user directly and set it in code: System.setProperty("HADOOP_USER_NAME","root");
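An equivalent way to supply the user is the three-argument FileSystem.get overload, which takes the user name explicitly. A minimal sketch, reusing the cluster address from the test above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.net.URI;

public class HdfsClientAsUser {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Pass the user name ("root") directly instead of setting a system property
        FileSystem fs = FileSystem.get(
                new URI("hdfs://192.168.137.150:9000"), conf, "root");
        fs.mkdirs(new Path("/hadoop/test"));
        fs.close();
    }
}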