Hadoop beginner-level script exercises: getting familiar with Hadoop basics

Opening note: this tutorial is an upgraded version of my earlier one. It is based on the original creator's beginner-level Hadoop scripts (first shared on Weibo), but that version is too old: in the nearly three years since, many Hadoop details and commands have changed. This article corrects them, and every step has been run successfully on my machine. If an experiment fails on your computer, please let me know; you are welcome to get in touch. The link to the original tutorial is at the end.

1. Default configuration:

  • Ubuntu 18.04
  • Java 1.8.0_162
  • Hadoop 3.2.1, downloaded to /usr/local/hadoop
    Note: follow the site below to configure Hadoop, and set up the environment paths at the end while you are at it:

http://dblab.xmu.edu.cn/blog/install-hadoop/?tdsourcetag=s_pctim_aiomsg

2. Experiment 1: a Java program that prints a file already on HDFS

  1. Start Hadoop.
    Enter the Hadoop folder and start HDFS:
    (I have seen that start-all.sh is no longer recommended; presumably it wastes resources?)
cd /usr/local/hadoop/
./sbin/start-dfs.sh
  2. Create a txt file and upload it to Hadoop.
    Create the folders, enter the input folder, then create and edit the file.
    (If you don't have vim: sudo apt-get install vim)
    vim basics: after opening, press i or a to enter insert mode, press Esc to return to command mode, type ":w" to save and ":wq" to save and quit.
mkdir myclass
mkdir input
cd input/
vim quangle.txt

Write in the short poem:

On the top of the Crumpetty Tree
The Quangle Wangle sat,
But his face you could not see,
On account of his Beaver Hat.

  3. Create a folder on the distributed file system and check that it was created:
hadoop fs -mkdir /class4
hadoop fs -ls /
    (note the space before the /)
  4. Use the hadoop command to copy the local file to the specified location on the Hadoop platform, then check in HDFS that it was uploaded:
hadoop fs -copyFromLocal quangle.txt /class4/quangle.txt
hadoop fs -ls /class4
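
As an aside, here is what steps 3 and 4 look like when done through the FileSystem Java API instead of the shell commands (the same API the programs below are built on). This is a minimal hypothetical sketch, not part of the lab: HdfsSetupSketch is my own name, and it assumes it is compiled and launched the same way as the programs in the following steps.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSetupSketch {
    public static void main(String[] args) throws Exception {
        // connect to the default file system named in core-site.xml
        FileSystem fs = FileSystem.get(new Configuration());
        fs.mkdirs(new Path("/class4"));                  // hadoop fs -mkdir /class4
        fs.copyFromLocalFile(new Path("quangle.txt"),    // hadoop fs -copyFromLocal ...
                new Path("/class4/quangle.txt"));
        for (FileStatus status : fs.listStatus(new Path("/class4"))) {
            System.out.println(status.getPath());        // hadoop fs -ls /class4
        }
    }
}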
  5. Write, compile, and run the program that prints the document stored in the cloud.
    Press Ctrl+Alt+T to open a new terminal (it starts in ~/), and open the file that records the environment paths:
vim ~/.bashrc

Set the system paths as follows. If Java already works on your machine, leave the Java lines alone. None of the CLASSPATH lines can be omitted, especially the last one!!! Substitute your own Hadoop and Java versions: mine is 3.2.1, so if yours is, say, 2.7.7, write 2.7.7 instead. Very old Hadoop releases may have a different directory layout, so go to the paths listed and verify that the jars (or the usual directories) actually exist there. After saving, run source ~/.bashrc (or open a fresh terminal) so the changes take effect.

export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_162
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
export PATH=$PATH:/usr/local/hadoop/sbin:/usr/local/hadoop/bin
export HADOOP_HOME=/usr/local/hadoop
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$CLASSPATH
export CLASSPATH=.:$HADOOP_HOME/share/hadoop/common/hadoop-common-3.2.1.jar:$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.2.1.jar:$HADOOP_HOME/share/hadoop/common/lib/commons-cli-1.2.jar:$CLASSPATH
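
Before moving on, it is worth checking that the classpath really reaches the Hadoop jars. Here is a tiny hypothetical checker (ClasspathCheck is my own name, not part of the lab) that needs only plain javac and java:

public class ClasspathCheck {
    public static void main(String[] args) {
        // the classpath the JVM actually resolved from $CLASSPATH
        System.out.println(System.getProperty("java.class.path"));
        try {
            // if the Hadoop jars are visible, this core class loads fine
            Class.forName("org.apache.hadoop.conf.Configuration");
            System.out.println("Hadoop classes found");
        } catch (ClassNotFoundException e) {
            System.out.println("Hadoop classes NOT found - check CLASSPATH");
        }
    }
}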

  6. In the earlier terminal (in /usr/local/hadoop/myclass), write the Java code that does the work:
vim FileSystemCat.java

Edit the Java file:

import java.io.InputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.IOUtils;

public class FileSystemCat {
    public static void main(String[] args) throws Exception {
        String uri = args[0];    // HDFS path given on the command line
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        InputStream in = null;
        try {
            in = fs.open(new Path(uri));
            // copy the stream to stdout in 4096-byte chunks;
            // "false" means copyBytes leaves closing the streams to us
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}

Press Esc, then type :wq to save and quit.

Compile the Java program with javac. As long as the paths above were set up properly, no extra flags are needed here: javac finds the jars on its own. The original tutorial passed a pile of extra arguments, but many of those paths have changed; everything needed is now covered by the exports added above.

javac FileSystemCat.java

Check the compilation result; there should now be a FileSystemCat.class:

ll
  7. Run it. On recent Hadoop versions the command for this program is:
hadoop jar ../share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.2.1.jar FileSystemCat /class4/quangle.txt

The format is: hadoop + jar + path of the jar to use + name of the class to run + arguments (here, the path of the text file on the Hadoop platform).

Also, if it fails with an error saying there is no FileSystemCat class, try going into /usr/local/hadoop/etc/hadoop, editing hadoop-env.sh, and appending this line at the end:

export HADOOP_CLASSPATH=/usr/local/hadoop/myclass
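
My understanding of why this helps: hadoop jar puts the given jar on the classpath, but the compiled FileSystemCat.class lives in /usr/local/hadoop/myclass, and HADOOP_CLASSPATH is the hook the hadoop launcher provides for adding such extra directories to its classpath.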

  8. The little poem from the file is printed. Done.

3. Experiment 2: a script that uploads a local text file to HDFS

As before, we start in the myclass folder and create/edit the Java file LocalFile2Hdfs.java:

vim LocalFile2Hdfs.java
import java.io.File;
import java.io.FileInputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public class LocalFile2Hdfs {
    public static void main(String[] args) throws Exception {
        String local = args[0];    // local source file
        String uri = args[1];      // destination path on HDFS
        FileInputStream in = null;
        OutputStream out = null;
        Configuration conf = new Configuration();
        try {
            in = new FileInputStream(new File(local));
            FileSystem fs = FileSystem.get(URI.create(uri), conf);
            // create the HDFS file; the callback prints "*" as write progress is reported
            out = fs.create(new Path(uri), new Progressable() {
                @Override
                public void progress() {
                    System.out.println("*");
                }
            });
            // skip the first 100 bytes of the local file,
            // then copy the following 20 bytes up to HDFS
            in.skip(100);
            byte[] buffer = new byte[20];
            int bytesRead = in.read(buffer);
            if (bytesRead >= 0) {
                out.write(buffer, 0, bytesRead);
            }
        } finally {
            IOUtils.closeStream(in);
            IOUtils.closeStream(out);
        }
    }
}

Compile the code:

javac LocalFile2Hdfs.java

Create the test file to be uploaded:

cd /usr/local/hadoop/input
vim hdfs2local.txt

The San Francisco-based firm was unsatisfied with the Justice Department’s move in January to allow technological firms to disclose the number of national security-related requests they receive in broad ranges.
“It’s our belief that we are entitled under the First Amendment to respond to our users’ concerns and to the statements of U.S. government officials by providing information about the scope of U.S. government surveillance – including what types of legal process have not been received,” Lee wrote. “We should be free to do this in a meaningful way, rather than in broad, inexact ranges.”

First upload the whole local file to the Hadoop platform with the shell command (Experiment 3 will read this copy back later):

hadoop fs -copyFromLocal hdfs2local.txt /class4/hdfs2local.txt

Check that the upload succeeded:

hadoop fs -ls /class4/

Run the program (one line). Because of the skip(100) call and the 20-byte buffer, it uploads only bytes 101-120 of the local file; it writes them to a separate HDFS file, /class4/local2hdfs.txt, leaving the full copy made above intact:

hadoop jar ../share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.2.1.jar LocalFile2Hdfs ../input/hdfs2local.txt /class4/local2hdfs.txt

Done. Of course, we can use the first program (FileSystemCat) to look at the content, or call the cat command directly:

hadoop fs -cat /class4/local2hdfs.txt
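
For reference: assuming the file was saved in UTF-8 exactly as above (the curly apostrophe in "Department’s" occupies three bytes), bytes 101-120 work out to the snippet "w technological firm", which is what the cat should print.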

4. Experiment 3: a script that downloads an HDFS file to the local machine

Again we start in the myclass folder.
Edit the Java file:

vim Hdfs2LocalFile.java
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class Hdfs2LocalFile {
    public static void main(String[] args) throws Exception {
        String uri = args[0];      // source path on HDFS
        String local = args[1];    // local destination file

        FSDataInputStream in = null;
        OutputStream out = null;
        Configuration conf = new Configuration();
        try {
            FileSystem fs = FileSystem.get(URI.create(uri), conf);
            in = fs.open(new Path(uri));
            out = new FileOutputStream(local);

            // skip the first 100 bytes of the HDFS file,
            // then copy the following 20 bytes down to the local file
            byte[] buffer = new byte[20];
            in.skip(100);
            int bytesRead = in.read(buffer);
            if (bytesRead >= 0) {
                out.write(buffer, 0, bytesRead);
            }
        } finally {
            IOUtils.closeStream(in);
            IOUtils.closeStream(out);
        }
    }
}

Compile the program:

javac Hdfs2LocalFile.java

Run the script. It works in the opposite direction: bytes 101-120 of the full copy uploaded in Experiment 2 are pulled down into a local file (one line):

hadoop jar ../share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.2.1.jar Hdfs2LocalFile /class4/hdfs2local.txt ./hdfs2local_part.txt

Check the result:

ls

Bytes 101-120 of the hdfs2local.txt we uploaded (originally created in the input folder) now sit in the new hdfs2local_part.txt file under the myclass folder.
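
One closing aside: the manual skip/read logic is there to demonstrate streams; if you simply want the whole file back, the FileSystem API does it in one call. A minimal hypothetical sketch (CopyToLocalSketch and full_copy.txt are my own names, not part of the lab):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyToLocalSketch {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // equivalent of: hadoop fs -copyToLocal /class4/hdfs2local.txt full_copy.txt
        fs.copyToLocalFile(new Path("/class4/hdfs2local.txt"), new Path("full_copy.txt"));
    }
}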

Mission done! (love you)
References:
http://dblab.xmu.edu.cn/blog/install-hadoop/?tdsourcetag=s_pctim_aiomsg
http://blog.sina.com.cn/s/blog_14865e03b0102w3is.html

Reprinted from blog.csdn.net/qq_42511414/article/details/104889302