HDFS and MapReduce on YARN in Hadoop

Why use Hadoop

When the amount of data is large and the work is compute (CPU) intensive, a conventional approach (for example, threads executing concurrently on a single node, which at best saturates that one node's CPUs) cannot deliver results quickly. In that case the work has to be split across multiple processes distributed over multiple nodes, so that many CPUs execute it in parallel and the compute-intensive job finishes quickly.

Hadoop addresses this with three components:

HDFS (Hadoop Distributed File System, Hadoop's distributed data storage): stores the huge data set by spreading it across the cluster's nodes.

MapReduce (distributed data analysis model): you write your program against this model and submit it to YARN, which schedules it onto all the nodes.

YARN (resource management and scheduling): distributes the job's jar package to the nodes and allocates a slice of each node's resources (called a container) in which to run it.

Typical application scenario:

Analysis of massive log files.
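
To make the MapReduce programming model concrete, here is a minimal sketch of the classic word-count job: the map function emits (word, 1) pairs and the reduce function sums them. The input and output paths are taken from the command line and are only placeholders.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Split each input line into words and emit (word, 1)
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts emitted for each word
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Submitting the job hands it to YARN, which runs it in containers across the cluster
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged into a jar and submitted (for example with hadoop jar wordcount.jar WordCount /input /output, where the paths are placeholders), the job is handed to YARN and executed on the cluster's nodes.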

HDFS data write process (figure):

NameNode: the management node; it stores the metadata recording which DataNodes hold the blocks of each file.

DataNode: a worker node; it stores the blocks that each file is split into.
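
As a minimal illustration of this write path, the sketch below creates a file through the FileSystem API; behind the scenes the client asks the NameNode where to place each block and then streams the bytes to the chosen DataNodes. The cluster address hdfs://localhost:9000, the user name hadoop and the file path are placeholders.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect as a specific HDFS user; address, user and path are placeholders
        FileSystem fs = FileSystem.get(new URI("hdfs://localhost:9000"), conf, "hadoop");
        try (FSDataOutputStream out = fs.create(new Path("/demo/hello.txt"))) {
            // The client obtains block locations from the NameNode and streams the data to DataNodes
            out.writeUTF("hello hdfs");
        } finally {
            fs.close();
        }
    }
}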

 

A Spring Boot utility class for operating on HDFS (source: https://gitee.com/SnailPu/springBootDemo ):

 

import org.apache.commons.lang3.StringUtils;   // assumed dependency; the original snippet omits its imports
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

/**
 * When operating on HDFS from a Windows client, an exception (org.apache.hadoop.security.AccessControlException)
 * can be thrown because of the local Windows user; HDFS permissions need to be configured accordingly.
 * Reference: https://blog.csdn.net/wang7807564/article/details/74627138
 */
@Component
public class HdfsUtils {

    @Value("${hdfs.path}")
    private String hdfsPath;
    @Value("${hdfs.username}")
    private String hdfsUsername;
    private static final int bufferSize = 1024 * 1024 * 64;

    /**
     * Get the HDFS configuration object.
     */
    private Configuration getConfiguration() {
        Configuration configuration = new Configuration();
        // Set fs.defaultFS (the same parameter as in Hadoop's core-site.xml) to prevent the "...file:///..." error
        configuration.set("fs.defaultFS", hdfsPath);
        return configuration;
    }

    /**
     * Get the HDFS FileSystem object.
     */
    public FileSystem getFileSystem() throws Exception {
        // When the client operates on HDFS it does so under a user identity; by default the HDFS
        // client API takes that identity from a JVM parameter, e.g. -DHADOOP_USER_NAME=hadoop.
        // It can also be passed in explicitly when constructing the client FileSystem object:
//        FileSystem fileSystem = FileSystem.get(new URI(hdfsPath), getConfiguration(), hdfsUsername);
        FileSystem fileSystem = FileSystem.get(getConfiguration());
        return fileSystem;
    }

    /**
     * Prefix a relative path with the HDFS URI to build the full path in HDFS.
     *
     * @param path the relative path
     */
    public String pathInHdfs(String path) {
        return hdfsPath + path;
    }

    /**
     * Create a directory.
     *
     * @param path
     * @return
     * @throws Exception
     */
    public boolean mkdir(String path) throws Exception {
        FileSystem fs = getFileSystem();
        try {
            return fs.mkdirs(new Path(pathInHdfs(path)));
        } finally {
            // Close the FileSystem after use, as the other methods do
            fs.close();
        }
    }

    /**
     * Check whether an HDFS file or directory exists, using a newly created FileSystem that is closed here.
     *
     * @param path
     * @return
     * @throws Exception
     */
    public boolean exits(String path) throws Exception {
        if (StringUtils.isEmpty(path)) {
            return false;
        }
        FileSystem fs = getFileSystem();
        try {
            Path srcPath = new Path(pathInHdfs(path));
            boolean isExists = fs.exists(srcPath);
            return isExists;
        } finally {
            fs.close();
        }
    }

    /**
     * Check whether an HDFS file or directory exists, using a FileSystem passed in by the caller;
     * it is not closed here, the caller is responsible for closing it. Overload of exits.
     *
     * @param path
     * @return
     * @throws Exception
     */
    public boolean exits(String path, FileSystem fs) throws Exception {
        if (StringUtils.isEmpty(path)) {
            return false;
        }
        Path srcPath = new Path(pathInHdfs(path));
        boolean isExists = fs.exists(srcPath);
        return isExists;
    }

    /**
     * Delete an HDFS file or directory.
     *
     * @param path
     * @return
     * @throws Exception
     */
    public Boolean deleteFile(String path) throws Exception {
        if (StringUtils.isEmpty(path)) {
            return false;
        }
        FileSystem fs = getFileSystem();
        if (!exits(path, fs)) {
            return false;
        }
        try {
            Path srcPath = new Path(pathInHdfs(path));
            // deleteOnExit marks the path for deletion when this FileSystem is closed (in the finally block below)
            boolean isOk = fs.deleteOnExit(srcPath);
            return isOk;
        } finally {
            fs.close();
        }
    }
}
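
A minimal sketch of how the component above might be wired into a Spring Boot service, with hdfs.path and hdfs.username set in application.properties (for example hdfs.path=hdfs://localhost:9000 and hdfs.username=hadoop, both placeholders). The service class and the paths below are hypothetical; only the HdfsUtils methods are taken from the source above.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class HdfsDemoService {

    @Autowired
    private HdfsUtils hdfsUtils;

    public void demo() throws Exception {
        // Create a directory, check that it exists, then delete it again
        hdfsUtils.mkdir("/demo/logs");
        if (hdfsUtils.exits("/demo/logs")) {
            hdfsUtils.deleteFile("/demo/logs");
        }
    }
}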

The general flow of how the FileSystem object is obtained (figure):

MapReduce Job submission workflow:

  • ResourceManager: responsible for managing the cluster's resources, scheduling Jobs, registering NodeManagers, etc.

  • NodeManager: monitors the resource usage of the containers executing the Job on its node and reports back to the ResourceManager.

  • In a YARN cluster, the ResourceManager and NodeManager processes take care of scheduling and allocating resources (containers of hardware resources, plus file resources). YARN is designed this way so that it can host other computation frameworks besides MapReduce, such as Spark and Storm.
  • MapReduce is responsible for actually running the program; the MRAppMaster decides which machines run the map and reduce tasks.
  • During submission and execution, the RunJar, MRAppMaster and YarnChild processes appear in turn.

For an introduction to YARN scheduler queues and how to configure them, see: http://itxw.net/article/376.html
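
As a small sketch of how a job is directed to a particular scheduler queue, the mapreduce.job.queuename property can be set before submission; the queue name "default" below is a placeholder and must exist in your scheduler configuration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class QueueExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Pick the YARN scheduler queue at submission time; "default" is a placeholder queue name
        conf.set("mapreduce.job.queuename", "default");
        Job job = Job.getInstance(conf, "queued job");
        // ... configure mapper/reducer/input/output as in the word-count sketch above,
        //     then submit with job.waitForCompletion(true)
    }
}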

Relationship and workflow between the MRAppMaster and the map/reduce tasks

 

(Continuously updated, stay tuned! 8.9)

