Single-Node Data Warehouse Environment Setup and Docker Basics

Data Warehouse Environment Deployment

Preface: an illustrated overview of Docker's role

1. Docker Installation

1.1 Installing Docker on CentOS

# The image is large, so make sure your network connection is stable
# --mirror Aliyun uses the Aliyun (Alibaba Cloud) mirror
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

1.2 Installing Docker on Ubuntu [Recommended]

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

1.3 Installing Docker on macOS

# Download the installer and drag it into Applications
https://hub.docker.com/editions/community/docker-ce-desktop-mac/

1.4 Installing Docker on Windows [Not Recommended]

# Windows 10 Home [reference]
https://docs.docker.com/docker-for-windows/install-windows-home/

# Windows 10 Pro, Enterprise, or Education [reference]
https://docs.docker.com/docker-for-windows/install/

2. Container Preparation

2.1 Set Permissions

# Check whether docker still requires sudo
sudo docker ps
# Add the current user to the docker group
sudo groupadd docker # the group may already exist; errors here can be ignored
sudo gpasswd -a $USER docker
# Refresh group membership in the current shell
newgrp docker

2.2 Pull the Image

# Pull the image
docker pull centos:7
# Browse hub.docker.com to see which tags are available
# Remove an image: docker rmi centos:7
# List images: docker images

2.2.1 Enable SSH on the Ubuntu Host (for Xshell)

sudo apt-get update
sudo apt-get install -y openssh-server
sudo systemctl start ssh

2.3 Create and Start the Container

# Create the container
docker run -itd --privileged --name singlemaster -h singlemaster \
-p 2222:22 \
-p 3306:3306 \
-p 50070:50070 \
-p 8088:8088 \
-p 8080:8080 \
-p 10000:10000 \
-p 60010:60010 \
-p 9092:9092 \
centos:7 /usr/sbin/init

# Flag reference
run: start a new container from an image
-i: keep STDIN open (interactive); usually combined with -t or -d
-t: allocate a pseudo-terminal (foreground); -d: run detached in the background
--privileged: grant extended privileges; without this, starting system services inside the container fails
--name: name the container; one image can back multiple containers
-h: set the container's hostname
-p: port mapping, in the form host port:container port
centos:7: the image name
/usr/sbin/init: used together with --privileged so systemd runs as PID 1 inside the container
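
The flag notes above can be folded into a small dry-run helper. This is only a sketch: it assembles the same docker run command from a list of port mappings and echoes it without executing anything, so adding a new service port becomes a one-line change.

```shell
# Sketch: build the docker run command from a port-mapping list (dry run).
# Names, ports, and image mirror the command above; nothing is executed.
PORTS="2222:22 3306:3306 50070:50070 8088:8088 8080:8080 10000:10000 60010:60010 9092:9092"
CMD="docker run -itd --privileged --name singlemaster -h singlemaster"
for p in $PORTS; do
  CMD="$CMD -p $p"
done
CMD="$CMD centos:7 /usr/sbin/init"
echo "$CMD"
```

Once the echoed command looks right, copy it into the shell (or pipe the output into sh).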

2.4 Common Container Operations

# Stop the container
docker stop singlemaster

# Start the container
docker start singlemaster

# List containers
docker ps -a # -a: list all containers, including stopped ones

# Remove the container
docker rm singlemaster

# Enter the container
docker exec -it singlemaster /bin/bash

# Copy a file from the Ubuntu host into the container
docker cp <source path> <container name>:<destination path>

3. Environment Preparation

3.1 Install Required Packages

yum clean all
yum -y install unzip bzip2-devel vim

3.2 Configure Passwordless SSH

# Install the required SSH packages
yum install -y openssh openssh-server openssh-clients openssl openssl-devel
# Generate a key pair
ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
# Authorize the key for passwordless login
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Start the SSH service
systemctl start sshd

3.3 Connect to the Container with Xshell

# Set a password (entered twice to confirm); username is up to you, root works
passwd username
# In Xshell, set the port to 2222 (mapped to container port 22), use the Ubuntu host's IP, and log in with the username and password above

3.4 Set the Time Zone

cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

3.5 Disable the Firewall

systemctl stop firewalld
systemctl disable firewalld

4. MySQL Installation

# For drag-and-drop uploads (rz/sz in Xshell)
yum -y install lrzsz
# Or copy files from the Ubuntu host into a directory in the container
docker cp filename container:/path

4.1 Upload and Extract the Install Bundle

mkdir /opt/software /opt/download
cd /opt/software/
tar xvf MySQL-5.5.40-1.linux2.6.x86_64.rpm-bundle.tar

4.2 Install Required Dependencies

yum -y install libaio perl

4.3 Install the Server and Client

rpm -ivh MySQL-server-5.5.40-1.linux2.6.x86_64.rpm
rpm -ivh MySQL-client-5.5.40-1.linux2.6.x86_64.rpm

4.4 Start and Configure MySQL

# Start the service
systemctl start mysql
# Set the MySQL root password
/usr/bin/mysqladmin -u root password 'root'
# Log in to MySQL and open up remote access
mysql -uroot -proot
> update mysql.user set host='%' where host='localhost';
> delete from mysql.user where host<>'%' or user='';
> flush privileges;
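
A variant of the same step: write the statements to a file first so they can be reviewed before being applied. The filename open_remote_access.sql is made up for illustration.

```shell
# Sketch: save the access-control statements for review before applying them.
# Apply later with: mysql -uroot -proot < open_remote_access.sql
cat > open_remote_access.sql <<'EOF'
update mysql.user set host='%' where host='localhost';
delete from mysql.user where host<>'%' or user='';
flush privileges;
EOF
```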

4.5 Alternative: Interactive Setup

# Start the service
systemctl start mysql
# Run MySQL's interactive initialization
/usr/bin/mysql_secure_installation
# Press Enter once, then type the new password twice to change it

5. JDK Installation

5.1 Upload and Extract

tar -zxvf /opt/software/jdk-8u171-linux-x64.tar.gz -C /opt/install/
ln -s /opt/install/jdk1.8.0_171 /opt/install/java

5.2 Configure Environment Variables

# If set in /etc/profile inside a container, the variables are lost after every restart (you would have to source the file again), so put them in ~/.bashrc
vi ~/.bashrc 
-------------------------------------------
export JAVA_HOME=/opt/install/java
export PATH=$JAVA_HOME/bin:$PATH
-------------------------------------------
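
Each component below appends more export lines, so a small idempotent helper avoids duplicates when setup steps are re-run. This is a sketch: BASHRC points at a demo file here; inside the container it would be ~/.bashrc.

```shell
# Sketch: append a line to a shell rc file only if it is not already present.
# BASHRC is a demo file for illustration; use "$HOME/.bashrc" in the container.
BASHRC="./bashrc.demo"
add_env_line() {
  grep -qxF "$1" "$BASHRC" 2>/dev/null || echo "$1" >> "$BASHRC"
}
add_env_line 'export JAVA_HOME=/opt/install/java'
add_env_line 'export PATH=$JAVA_HOME/bin:$PATH'
add_env_line 'export JAVA_HOME=/opt/install/java' # re-run: no duplicate added
```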

5.3 Check the Version

java -version

6. Hadoop Installation

6.1 Upload and Extract

tar -zxvf /opt/software/hadoop-2.6.0-cdh5.14.2.tar_2.gz -C /opt/install/
ln -s /opt/install/hadoop-2.6.0-cdh5.14.2 /opt/install/hadoop

6.2 Edit the Configuration

# Change into the configuration directory
cd /opt/install/hadoop/etc/hadoop/

6.2.1 Configure core-site.xml

vi core-site.xml
-------------------------------------------
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://singlemaster:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/install/hadoop/data/tmp</value>
  </property>
</configuration>
-------------------------------------------

6.2.2 Configure hdfs-site.xml

vi hdfs-site.xml
-------------------------------------------
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
-------------------------------------------

6.2.3 Configure mapred-site.xml

vi mapred-site.xml
-------------------------------------------
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>singlemaster:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>singlemaster:19888</value>
  </property>
</configuration>
-------------------------------------------

6.2.4 Configure yarn-site.xml

vi yarn-site.xml
-------------------------------------------
<configuration>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
	<property>
		<name>yarn.resourcemanager.hostname</name>
		<value>singlemaster</value>
	</property>
	<property>
		<name>yarn.log-aggregation-enable</name>
		<value>true</value>
	</property>
	<property>
		<name>yarn.log-aggregation.retain-seconds</name>
		<value>604800</value>
	</property>
</configuration>
-------------------------------------------

6.2.5 Configure hadoop-env.sh

vi hadoop-env.sh
-------------------------------------------
export JAVA_HOME=/opt/install/java
-------------------------------------------

6.2.6 Configure mapred-env.sh

vi mapred-env.sh
-------------------------------------------
export JAVA_HOME=/opt/install/java
-------------------------------------------

6.2.7 Configure yarn-env.sh

vi yarn-env.sh
-------------------------------------------
export JAVA_HOME=/opt/install/java
-------------------------------------------

6.2.8 Configure slaves

vi slaves
-------------------------------------------
singlemaster
-------------------------------------------

6.3 Add Environment Variables

# Append to ~/.bashrc, as in section 5.2

export HADOOP_HOME=/opt/install/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$HADOOP_HOME/bin:$PATH

6.4 Format HDFS

hdfs namenode -format

6.5 Start the Hadoop Services

$HADOOP_HOME/sbin/start-all.sh # start everything
$HADOOP_HOME/sbin/stop-all.sh # stop everything
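
After start-all.sh, jps should list NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager. A sketch of scripting that check (the daemon list assumes this single-node layout):

```shell
# Sketch: verify the expected Hadoop daemons appear in jps output.
# Pass in the output of jps; prints the first missing daemon, if any.
check_daemons() {
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    case "$1" in *"$d"*) ;; *) echo "missing: $d"; return 1 ;; esac
  done
  echo "all daemons running"
}
# On the container: check_daemons "$(jps)"
```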

6.6 Check the Web UIs

Open port 50070 (HDFS NameNode web UI)


Open port 8088 (YARN ResourceManager web UI)


7. Hive Installation

7.1 Upload and Extract

tar -zxvf /opt/software/hive-1.1.0-cdh5.14.2.tar.gz -C /opt/install/
ln -s /opt/install/hive-1.1.0-cdh5.14.2 /opt/install/hive

7.2 Edit the Configuration

# Change into the configuration directory
cd /opt/install/hive/conf/

7.2.1 Edit hive-site.xml

vi hive-site.xml
-------------------------------------------
<configuration>
	<property>
		<name>hive.metastore.warehouse.dir</name>
		<value>/home/hadoop/hive/warehouse</value>
	</property>
	<property>
		<name>javax.jdo.option.ConnectionURL</name>
		<value>jdbc:mysql://singlemaster:3306/hive?createDatabaseIfNotExist=true</value>
	</property>
	<property>
		<name>javax.jdo.option.ConnectionDriverName</name>
		<value>com.mysql.jdbc.Driver</value>
	</property>
	<property>
		<name>javax.jdo.option.ConnectionUserName</name>
		<value>root</value>
	</property>
	<property>
		<name>javax.jdo.option.ConnectionPassword</name>
		<value>root</value>
	</property>
	<property>
		<name>hive.exec.scratchdir</name>
		<value>/home/hadoop/hive/data/hive-${user.name}</value>
		<description>Scratch space for Hive jobs</description>
	</property>

	<property>
		<name>hive.exec.local.scratchdir</name>
		<value>/home/hadoop/hive/data/${user.name}</value>
		<description>Local scratch space for Hive jobs</description>
	</property>
</configuration>
-------------------------------------------

7.2.2 Edit hive-env.sh

vi hive-env.sh
-------------------------------------------
HADOOP_HOME=/opt/install/hadoop
-------------------------------------------

7.3 Add the MySQL JDBC Driver

cp /opt/software/mysql-connector-java-5.1.31.jar /opt/install/hive/lib/

7.4 Add Environment Variables

export HIVE_HOME=/opt/install/hive
export PATH=$HIVE_HOME/bin:$PATH

7.5 Start the Services

nohup hive --service metastore > /dev/null 2>&1 &
nohup hive --service hiveserver2 > /dev/null 2>&1 &
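
The metastore and HiveServer2 take a moment to come up, so it helps to wait for their ports (9083 and 10000 by default) before connecting. A sketch using bash's /dev/tcp pseudo-device:

```shell
# Sketch: poll a TCP port until it accepts connections or attempts run out.
# Prints "up" on success, "timeout" otherwise; relies on bash's /dev/tcp.
wait_for_port() {
  host="$1"; port="$2"; tries="${3:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if (echo > "/dev/tcp/$host/$port") 2>/dev/null; then
      echo "up"; return 0
    fi
    i=$((i + 1)); sleep 1
  done
  echo "timeout"; return 1
}
# Usage: wait_for_port singlemaster 10000 && beeline -u jdbc:hive2://singlemaster:10000
```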

7.6 Verify with jps

jps should now show two RunJar processes, one for the metastore and one for HiveServer2.

8. Sqoop Installation

8.1 Upload and Extract

tar -zxvf /opt/software/sqoop-1.4.6-cdh5.14.2.tar.gz -C /opt/install/
ln -s /opt/install/sqoop-1.4.6-cdh5.14.2 /opt/install/sqoop

8.2 Edit sqoop-env.sh

cd /opt/install/sqoop/conf/
vi sqoop-env.sh
-------------------------------------------
#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/opt/install/hadoop

#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/opt/install/hadoop

#Set the path to where bin/hive is available
export HIVE_HOME=/opt/install/hive
-------------------------------------------

8.3 Add Dependency Jars

cp /opt/software/mysql-connector-java-5.1.31.jar /opt/install/sqoop/lib/
cp /opt/software/java-json.jar /opt/install/sqoop/lib/

8.4 Add Environment Variables

export SQOOP_HOME=/opt/install/sqoop
export PATH=$SQOOP_HOME/bin:$PATH

8.5 Check the Version

sqoop version

Q&A

1. How would you build a multi-node cluster in Docker? By creating multiple containers?

The main difficulty is networking (IP address assignment).

# Option 1:
# Create a user-defined network
docker network create hadoopcluster
# List networks
docker network ls
# Specify the network when creating each container
docker run -it --network hadoopcluster --name hadoop1 -h hadoop1 # plus the other options from section 2.3
docker run -it --network hadoopcluster --name hadoop2 -h hadoop2
docker run -it --network hadoopcluster --name hadoop3 -h hadoop3

# Option 2:
Use a dedicated tool: docker-compose
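
A minimal docker-compose sketch of Option 2. The service names and compose version are illustrative assumptions; each service would still need the port and volume settings from the sections above.

```yaml
# docker-compose.yml (sketch): three containers on one user-defined network,
# reachable from one another by service name.
version: "3.5"
services:
  hadoop1:
    image: centos:7
    hostname: hadoop1
    privileged: true
    command: /usr/sbin/init
  hadoop2:
    image: centos:7
    hostname: hadoop2
    privileged: true
    command: /usr/sbin/init
  hadoop3:
    image: centos:7
    hostname: hadoop3
    privileged: true
    command: /usr/sbin/init
networks:
  default:
    name: hadoopcluster
```

Bring everything up with docker-compose up -d.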


Reposted from blog.csdn.net/xiaoxaoyu/article/details/114242493