Installing Maven, Kafka, HBase, Flume, and Hive on Linux

I. Installing Maven

1. Download and extract: wget http://archive.apache.org/dist/maven/maven-3/3.5.2/binaries/apache-maven-3.5.2-bin.tar.gz

tar -xvzf apache-maven-3.5.2-bin.tar.gz

2. Add environment variables

vim ~/.bashrc

export MAVEN_HOME=/usr/local/src/apache-maven-3.5.2
export PATH=$PATH:$MAVEN_HOME/bin

Save and exit, then run source ~/.bashrc

3. Verify: run mvn -v
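
If the environment variables are correct, the output should look roughly like the following (exact build details will differ):

Apache Maven 3.5.2
Maven home: /usr/local/src/apache-maven-3.5.2
Java version: 1.7.0_80, vendor: Oracle Corporation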

II. Installing Kafka

1. Download and extract: wget http://mirrors.hust.edu.cn/apache/kafka/1.0.0/kafka_2.11-1.0.0.tgz

 tar -xvzf kafka_2.11-1.0.0.tgz

2. Create a logs directory: mkdir logs

3. Modify the configuration

cd config

vim server.properties

Change the following settings:

broker.id=0  (use 1 on the second machine and 2 on the third)

advertised.listeners=PLAINTEXT://master:9092

log.dirs=/usr/local/src/kafka_2.11-1.0.0/logs

zookeeper.connect=master:2181,slave1:2181,slave2:2181

Distribute the installation to the other two machines:

scp -rp kafka_2.11-1.0.0 slave1:/usr/local/src

Then change broker.id on each of them (a sketch follows below).
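
A possible way to do this remotely, using the hostnames and paths from above (GNU sed assumed):

ssh slave1 "sed -i 's/^broker.id=0/broker.id=1/' /usr/local/src/kafka_2.11-1.0.0/config/server.properties"
ssh slave2 "sed -i 's/^broker.id=0/broker.id=2/' /usr/local/src/kafka_2.11-1.0.0/config/server.properties"

Note that advertised.listeners should also point at each broker's own hostname (slave1:9092 and slave2:9092 respectively), not at master.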

4. Add environment variables

vim ~/.bashrc

export KAFKA_HOME=/usr/local/src/kafka_2.11-1.0.0
export PATH=$PATH:$KAFKA_HOME/bin

source ~/.bashrc

Distribute the environment variables to the other two machines: scp -rp ~/.bashrc slave1:~

5. Verify

In the bin directory, run kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
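
The broker needs to be started on all three machines, not just master; for example on slave1:

ssh slave1
cd /usr/local/src/kafka_2.11-1.0.0
bin/kafka-server-start.sh -daemon config/server.properties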

Run jps to check that the Kafka process is up.

To stop Kafka, use bin/kafka-server-stop.sh, or kill -9 with the process ID.

6. A simple end-to-end check of the Kafka setup

First start ZooKeeper on each node: ./zkServer.sh start

Test whether Kafka can send and receive messages. Open two terminals: one as a producer to send messages and one as a consumer to receive them. First, create a topic:

./kafka-topics.sh --create --zookeeper master:2181 --replication-factor 1 --partitions 1 --topic testTopic

./kafka-topics.sh --describe --zookeeper master:2181 --topic testTopic

Then run in the first terminal:

./kafka-console-producer.sh --broker-list master:9092 --topic testTopic

And in the second terminal:

./kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic testTopic

If everything starts normally, both terminals block and listen; any message typed into the first terminal will show up in the second.
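
When you are done testing, the topic can be deleted (this takes effect as long as delete.topic.enable has not been turned off; it defaults to true in Kafka 1.0):

./kafka-topics.sh --delete --zookeeper master:2181 --topic testTopic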

III. Installing HBase

1. Download and extract

wget http://archive.apache.org/dist/hbase/1.2.6/hbase-1.2.6-bin.tar.gz

tar -xvzf hbase-1.2.6-bin.tar.gz

2. Configure conf

Edit the Java settings in hbase-env.sh under the conf directory:

vim hbase-env.sh

Change as follows:

export JAVA_HOME=/usr/local/src/jdk1.7.0_80/
export HBASE_CLASSPATH=/usr/local/src/hbase-1.2.6/conf

export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=256m -XX:MaxPermSize=1024m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=256m -XX:MaxPermSize=1024m"
export HBASE_PID_DIR=/opt/data/hbase
export HBASE_MANAGES_ZK=false 

A note on HBASE_MANAGES_ZK: because we use our own ZooKeeper, it is set to false here; otherwise HBase would start its own ZooKeeper.

3. Edit hbase-site.xml

vim hbase-site.xml

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
        <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description>
    </property>
    <property>
        <name>zookeeper.session.timeout</name>
        <value>120000</value>
    </property>
    <property>
        <name>hbase.master</name>
        <value>master:60000</value>
    </property>
    <property>
        <name>hbase.master.port</name>
        <value>60000</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>master,slave1,slave2</value>
    </property>
    <property>
        <name>zookeeper.znode.parent</name>
        <value>/hbase</value>
    </property>
    <property>
        <name>hbase.tmp.dir</name>
        <value>/root/hbase/tmp</value>
    </property>
    <property>
        <name>hbase.master.info.bindAddress</name>
        <value>master</value>
    </property>
</configuration>

Edit the regionservers file: vim regionservers

Enter:

master
slave1
slave2

4. Update environment variables

vim ~/.bashrc

export HBASE_HOME=/usr/local/src/hbase-1.2.6
export PATH=$PATH:$HBASE_HOME/bin

5. Test

 ./start-hbase.sh

Then run hbase shell
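
Inside the shell, a quick smoke test could look like this (the table name test and column family cf are arbitrary examples):

status
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'
disable 'test'
drop 'test'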

IV. Installing Flume

1. Download and extract

wget http://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz

tar -xvzf apache-flume-1.6.0-bin.tar.gz

2. Configure environment variables

Add:

export FLUME_HOME=/usr/local/src/apache-flume-1.6.0-bin

export PATH=$PATH:$FLUME_HOME/bin

Then distribute it to the other two machines and run source ~/.bashrc.

3. Configure parameters

In the conf directory:

cp flume-env.sh.template flume-env.sh

vim flume-env.sh

Add:

export JAVA_HOME=/usr/local/src/jdk1.7.0_80
export HADOOP_HOME=/usr/local/src/hadoop-2.6.1

4. Verify

Run flume-ng version

If you hit the error Could not find or load main class org.apache.flume.tools.GetJavaProperty, it is usually because HBase (or a similar tool) is also installed.

Comment out the following line in HBase's hbase-env.sh:
# Extra Java CLASSPATH elements. Optional.
#export HBASE_CLASSPATH=/home/hadoop/hbase/conf
Or change HBASE_CLASSPATH to JAVA_CLASSPATH, like this:
# Extra Java CLASSPATH elements. Optional.
export JAVA_CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

flume-ng version should now work normally.
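
Beyond the version check, a minimal end-to-end test can be run with a netcat-to-logger agent, following the standard Flume quick-start configuration; the file name example.conf and agent name a1 below are arbitrary choices:

# example.conf: netcat source -> memory channel -> logger sink
a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

a1.sinks.k1.type = logger

a1.channels.c1.type = memory

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start it with flume-ng agent --conf $FLUME_HOME/conf --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console, then from another terminal run telnet localhost 44444 and type a line; the agent should print the event to the console.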

V. Installing Hive

1. Download and extract

wget http://archive.apache.org/dist/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz

tar -xvzf apache-hive-1.2.1-bin.tar.gz

2. Configure environment variables

vim ~/.bashrc

export HIVE_HOME=/usr/local/src/apache-hive-1.2.1-bin
export PATH=$PATH:$HIVE_HOME/bin

source ~/.bashrc

3. Install MySQL

Run the following in order:

yum install -y mysql-server

service mysqld start

chkconfig mysqld on

yum install -y mysql-connector-java

Copy the MySQL connector into Hive's lib directory:

 cp /usr/share/java/mysql-connector-java-5.1.17.jar /usr/local/src/apache-hive-1.2.1-bin/lib

Update the MySQL root password. Log in with mysql -uroot, then run:

use mysql;

update user set password=password("123456") where user='root';

flush privileges;

exit;

Test the new password with mysql -uroot -p

4. Start Hadoop

In Hadoop's sbin directory, run ./start-all.sh

Then create the HDFS directories and grant write permission:

hdfs dfs -mkdir -p /usr/hive/warehouse
hdfs dfs -mkdir -p /usr/hive/tmp
hdfs dfs -mkdir -p /usr/hive/log
hdfs dfs -chmod g+w /usr/hive/warehouse
hdfs dfs -chmod g+w /usr/hive/tmp
hdfs dfs -chmod g+w /usr/hive/log
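
An optional sanity check that the directories exist:

hdfs dfs -ls /usr/hive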
 

 

5. Configure parameters

mkdir -p /opt/hive/tmp/root

Configure hive-site.xml in the conf directory:

cp hive-default.xml.template hive-site.xml

Replace every ${system:java.io.tmpdir} with Hive's local temporary directory (/opt/hive/tmp here).

Replace every ${system:user.name} with root (a sed sketch follows below).
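
If you would rather not edit these by hand, a possible one-liner for both replacements (GNU sed assumed; /opt/hive/tmp is the local temporary directory created above):

sed -i 's#${system:java.io.tmpdir}#/opt/hive/tmp#g; s#${system:user.name}#root#g' hive-site.xml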

Only part of the resulting file is listed below:


  <property>
    <name>hive.exec.scratchdir</name>
    <value>/usr/hive/tmp</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/opt/hive/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/opt/hive/tmp/root/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/usr/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/usr/hive/log</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.116.10:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
</configuration>

Configure hive-env.sh in the conf directory:

cp -r hive-env.sh.template hive-env.sh

Add:

export JAVA_HOME=/usr/local/src/jdk1.7.0_80
export HADOOP_HOME=/usr/local/src/hadoop-2.6.1
export HIVE_HOME=/usr/local/src/apache-hive-1.2.1-bin
export HIVE_CONF_DIR=$HIVE_HOME/conf
export HIVE_AUX_JARS_PATH=/usr/local/src/apache-hive-1.2.1-bin/lib/*

Then log back in to MySQL.

Create the Hive metastore database in MySQL and grant privileges to the hive user:
create database if not exists hive_metadata;
grant all privileges on hive_metadata.* to 'hive'@'%' identified by 'hive';
grant all privileges on hive_metadata.* to 'hive'@'localhost' identified by 'hive';
grant all privileges on hive_metadata.* to 'hive'@'spark1' identified by 'hive';
flush privileges;
use hive_metadata;
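
After exiting MySQL, an optional check that the grants took effect (assumes the MySQL client is on this machine):

mysql -uhive -phive -e "show databases;"

The output should include hive_metadata.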

6. Verify

Run hive

If you hit this error:

[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
        at jline.TerminalFactory.create(TerminalFactory.java:101)
        at jline.TerminalFactory.get(TerminalFactory.java:158)
        at jline.console.ConsoleReader.<init>(ConsoleReader.java:229)
        at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
        at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)

Copy jline-2.12.jar from the current Hive's $HIVE_HOME/lib directory into $HADOOP_HOME/share/hadoop/yarn/lib, and delete the older jline jar that ships there:

cp jline-2.12.jar /usr/local/src/hadoop-2.6.1/share/hadoop/yarn/lib

Then, in that same directory, remove the old version: rm -f jline-0.9.94.jar

7. Implement word count with Hive

Create the table: create table wordcount(context string);

Load the data: load data local inpath '/usr/local/src/hadoop-2.6.1/data/wc.input' into table wordcount;

Check that the load succeeded, for example with select * from wordcount;

Run the count: select word, count(1) from wordcount lateral view explode(split(context,' ')) wc as word group by word;

Here lateral view explode(split(...)) splits each row on the given separator and turns the resulting tokens into individual rows, which the group by then counts.
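
For example, if wc.input contained the single (made-up) line hadoop hive hadoop, then split(context,' ') would produce the array ["hadoop","hive","hadoop"], explode would turn it into three rows, and the group by would return:

hadoop    2
hive      1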

Reposted from blog.csdn.net/lbship/article/details/82896891