Installing Apache impala-kudu. Without further ado, straight to the point:
Environment:
Linux, CentOS 6.5
Master node: mrj001 192.168.137.6
Worker node: mrj002 192.168.137.7
Worker node: mrj003 192.168.137.8
Step 1: Download the installation packages
http://archive.cloudera.com/beta/impala-kudu/redhat/6/x86_64/impala-kudu/0/RPMS/x86_64/
http://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/5/RPMS/noarch/
Step 2: Install the RPM packages
rpm -ivh bigtop-utils-0.7.0+cdh5.12.0+0-1.cdh5.12.0.p0.37.el6.noarch.rpm
rpm -ivh impala-kudu-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm --nodeps
rpm -ivh impala-kudu-catalog-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
rpm -ivh impala-kudu-state-store-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
rpm -ivh impala-kudu-server-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
yum install -y python-setuptools
rpm -ivh impala-kudu-shell-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
rpm -ivh impala-kudu-udf-devel-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
Install the catalog and state-store packages on the master node only; do not install them on the worker nodes.
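The package split above can be sketched as a small helper. The master/worker roles come from the cluster layout at the top (mrj001 is the master); how you distribute and run the RPM installs across nodes (scp/ssh, yum repo, etc.) is left to your own tooling:

```shell
#!/bin/sh
# Which impala-kudu RPMs belong on a given node role.
# The catalog and state-store daemons run only on the master (mrj001);
# every node gets the base packages, the server, the shell and the UDF devel kit.
packages_for() {
  common="bigtop-utils impala-kudu impala-kudu-server impala-kudu-shell impala-kudu-udf-devel"
  master_only="impala-kudu-catalog impala-kudu-state-store"
  if [ "$1" = "master" ]; then
    echo "$common $master_only"
  else
    echo "$common"
  fi
}

packages_for master   # mrj001
packages_for worker   # mrj002, mrj003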
Step 3: Edit the configuration files
Edit bigtop-utils:
vim /etc/default/bigtop-utils
export JAVA_HOME=/home/local/jdk1.8.0_131/
Edit impala:
vim /etc/default/impala
IMPALA_CATALOG_SERVICE_HOST=192.168.137.6
IMPALA_STATE_STORE_HOST=192.168.137.6
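Both variables should point at the node that runs the catalog and state-store daemons (mrj001 in this layout). For the Kudu integration itself, the impalad servers also need the Kudu master address, which is typically passed via `IMPALA_SERVER_ARGS` in the same file; the host and port below are assumptions (7051 is Kudu's default master RPC port):

```shell
# /etc/default/impala (fragment) -- assumption: the Kudu master runs on
# mrj001; 7051 is Kudu's default master RPC port. Append the flag to the
# existing IMPALA_SERVER_ARGS definition rather than replacing it.
IMPALA_SERVER_ARGS="${IMPALA_SERVER_ARGS} -kudu_master_hosts=192.168.137.6:7051"
```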
Copy the HDFS and Hive configuration files here (and the HBase configuration files too, if you have them):
cd /etc/impala/conf
cp $HADOOP_HOME/etc/hadoop/core-site.xml ./
Edit core-site.xml and add the following properties:
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.read.shortcircuit.skip.checksum</name>
  <value>false</value>
</property>
<property>
  <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
  <value>true</value>
</property>
cp $HADOOP_HOME/etc/hadoop/hdfs-site.xml ./
Edit hdfs-site.xml and add the following properties:
<property>
  <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.block.local-path-access.user</name>
  <value>impala</value>
</property>
<property>
  <name>dfs.client.file-block-storage-locations.timeout.millis</name>
  <value>60000</value>
</property>
cp $HIVE_HOME/conf/hive-site.xml ./
Edit hive-site.xml:
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
<property>
  <name>hive.metastore.local</name>
  <value>false</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://10.61.0.106:3306/hive?createDatabaseIfNotExist=true&amp;useUnicode=true&amp;characterEncoding=UTF-8</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://192.168.137.6:9083</value>
</property>
Step 4: Copy mysql-connector-*.jar into /usr/share/java and rename it
cp mysql-connector-*.jar /usr/share/java/mysql-connector-java.jar
Step 5: Start the services
Start Hadoop:
start-all.sh
Start the Hive services:
hive --service metastore &
hive --service hiveserver2 &
Start the Impala services:
service impala-state-store start
service impala-catalog start
service impala-server start
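The startup order above matters: the state store must be up before the catalog and the servers, since both register with it. A sketch of that sequence (the actual `service` calls are commented out so the sketch runs anywhere):

```shell
#!/bin/sh
# Start the Impala daemons in dependency order: state store first,
# then catalog, then the server (the server runs on every node,
# the first two on the master only).
START_ORDER="impala-state-store impala-catalog impala-server"
for svc in $START_ORDER; do
  echo "starting $svc"
  # service "$svc" start   # uncomment on a real node
done
```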
Step 6: Start impala-shell
impala-shell
show databases;  -- run a statement to verify the installation
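Since the point of this build is Kudu support, a Kudu table is a better smoke test than `show databases;`. A hedged sketch (table and column names are illustrative; the storage-handler `TBLPROPERTIES` form matches the impala-kudu 2.7 beta, while later Impala releases use `CREATE TABLE ... PARTITION BY ... STORED AS KUDU` instead):

```shell
#!/bin/sh
# Write the DDL to a file, then feed it to impala-shell with -f.
# Table/column names are illustrative; addresses assume the Kudu master
# runs on mrj001 (192.168.137.6) with the default RPC port 7051.
cat > /tmp/kudu_smoke_test.sql <<'EOF'
CREATE TABLE kudu_smoke_test (
  id BIGINT,
  name STRING
)
DISTRIBUTE BY HASH (id) INTO 3 BUCKETS
TBLPROPERTIES (
  'storage_handler' = 'com.cloudera.kudu.hive.KuduStorageHandler',
  'kudu.table_name' = 'kudu_smoke_test',
  'kudu.master_addresses' = '192.168.137.6:7051',
  'kudu.key_columns' = 'id'
);
EOF
# impala-shell -i 192.168.137.6 -f /tmp/kudu_smoke_test.sql   # run on a cluster node
```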
Reposted from blog.csdn.net/m0_38003171/article/details/79789886