Installing a 3-node Hadoop cluster and a Spark cluster

Part 1: Change the hostnames

1. Change the hostname on all three machines. Note: the names must not contain underscores.

Command to change the hostname:
hostnamectl set-hostname xxxx
Then log out of the shell and log back in.
Edit the hosts file on all three machines:
vim /etc/hosts
Add the following entries:
192.107.53.157  hadoop-master
192.107.53.158  hadoop-slave1
192.107.53.159  hadoop-slave2
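A quick check that the names resolve on each machine:
ping -c 1 hadoop-slave1
ping -c 1 hadoop-slave2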
 

Part 2: Passwordless login between master and slave nodes

1. Passwordless login to the local machine

1. Disable the firewall
Check the firewall status:
service iptables status
Disable the firewall:
service iptables stop
chkconfig iptables off
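Note: on CentOS 7 and later (where hostnamectl is available), the firewall is managed by firewalld rather than the iptables service; in that case use:
systemctl status firewalld
systemctl stop firewalld
systemctl disable firewalld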
2. Set up passwordless login to the local machine
1) Generate a key pair
ssh-keygen -t rsa
2) Append the public key to the "authorized_keys" file
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
3) Set the permissions
chmod 600 ~/.ssh/authorized_keys
4) Verify that the local machine can be accessed without a password
ssh hadoop-master

Repeat the same steps on hadoop-slave1 and hadoop-slave2 so that each can log in to itself without a password.

2. Passwordless login from hadoop-master to hadoop-slave1 and hadoop-slave2, using hadoop-master to hadoop-slave1 as the example (the master logs in to the slaves without a password):

 
1) Log in to hadoop-slave1 and copy the public key "id_rsa.pub" from the hadoop-master server to the "/root" directory of hadoop-slave1:
scp root@192.107.53.157:/root/.ssh/id_rsa.pub /root/
or
scp root@hadoop-master:/root/.ssh/id_rsa.pub /root/
2) Append the hadoop-master public key (id_rsa.pub) to hadoop-slave1's authorized_keys:
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
rm -f ~/id_rsa.pub
3) Test from hadoop-master:
ssh hadoop-slave1
Log in to hadoop-slave2 and perform the same steps.
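Alternatively, ssh-copy-id combines the copy, append, and chmod steps into one command; run it on the machine whose key is being distributed:
ssh-copy-id root@hadoop-slave1
ssh-copy-id root@hadoop-slave2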

3. Passwordless login from hadoop-slave1 and hadoop-slave2 to hadoop-master (the slaves log in to the master without a password)

1) Log in to hadoop-master and copy the public key "id_rsa.pub" from the hadoop-slave1 server to the "/root/" directory of hadoop-master:

scp root@hadoop-slave1:/root/.ssh/id_rsa.pub /root/
2) Append the hadoop-slave1 public key (id_rsa.pub) to hadoop-master's authorized_keys:

cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
rm -f ~/id_rsa.pub  # delete id_rsa.pub
3) Test from hadoop-slave1:

ssh hadoop-master

Repeat the above steps so that hadoop-slave2 can also log in to the master node without a password.

At this point, passwordless login between the master and slave nodes is complete.

Part 3: Hadoop installation

1. Installation and configuration on hadoop-master

1) Install the JDK

# download
jdk-8u171-linux-x64.tar.gz
# extract
tar -xzvf jdk-8u171-linux-x64.tar.gz -C /usr/local
# rename (the tarball extracts to jdk1.8.0_171)
cd /usr/local
mv jdk1.8.0_171 java

2) Install Hadoop

# download
hadoop-3.1.0.tar.gz
# extract
tar -xzvf hadoop-3.1.0.tar.gz -C /usr/local
# rename (the tarball extracts to hadoop-3.1.0)
cd /usr/local
mv hadoop-3.1.0 hadoop

3) Configure environment variables

vim /etc/profile
export JAVA_HOME=/usr/local/java
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
source /etc/profile
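A quick check that the variables took effect:
java -version
hadoop version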

4) Hadoop configuration

cd  /usr/local/hadoop/etc/hadoop

a) Configure core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.107.53.157:9000</value>
    </property>
</configuration>

b) Configure hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address</name>
        <value>192.107.53.157:9000</value>
    </property>
</configuration>

c) Configure mapred-site.xml

<configuration>
  <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
  </property>
  <!-- mapred.job.tracker is a Hadoop 1.x JobTracker setting; it is ignored under YARN, so it is omitted here -->

  <property>
      <name>mapreduce.application.classpath</name>
      <value>
       /usr/local/hadoop/etc/hadoop,
       /usr/local/hadoop/share/hadoop/common/*,
       /usr/local/hadoop/share/hadoop/common/lib/*,
       /usr/local/hadoop/share/hadoop/hdfs/*,
       /usr/local/hadoop/share/hadoop/hdfs/lib/*,
       /usr/local/hadoop/share/hadoop/mapreduce/*,
       /usr/local/hadoop/share/hadoop/mapreduce/lib/*,
       /usr/local/hadoop/share/hadoop/yarn/*,
       /usr/local/hadoop/share/hadoop/yarn/lib/*
     </value>
  </property>
</configuration>

d) Configure yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop-master</value>
    </property>
</configuration>

e) Configure the workers file

hadoop-slave1
hadoop-slave2

f) Configure hadoop-env.sh

export JAVA_HOME=/usr/local/java

2. Installation and configuration on hadoop-slave1 (the other slave node is configured the same way)

1) Copy hadoop and java to the hadoop-slave1 node

scp -r /usr/local/hadoop hadoop-slave1:/usr/local/

scp -r /usr/local/java hadoop-slave1:/usr/local/

2) Log in to the hadoop-slave1 server and delete the workers file

rm -f /usr/local/hadoop/etc/hadoop/workers

3) Configure environment variables

vim /etc/profile
export JAVA_HOME=/usr/local/java
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
source /etc/profile

Part 4: Start the Hadoop cluster

Set the startup users in the control scripts (under $HADOOP_HOME/sbin), otherwise startup will fail with an error.

Edit start-dfs.sh and stop-dfs.sh and add the following four lines to each:

HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

Edit start-yarn.sh and stop-yarn.sh and add the following three lines to each:

YARN_RESOURCEMANAGER_USER=root
YARN_NODEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
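A common alternative is to define these user variables once in etc/hadoop/hadoop-env.sh instead of editing each script, since the launcher scripts source that file:
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root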

1) On first startup, format the NameNode

hdfs namenode -format

2) Start Hadoop:
sbin/start-all.sh

3) Use the jps command to check the running processes

# run jps on the master
25928 SecondaryNameNode
25742 NameNode
26078 ResourceManager
26387 Jps

# run jps on a slave
24002 NodeManager
23899 DataNode
24179 Jps

4) Run the pi-estimation example to confirm that Hadoop works (the examples jar lives under $HADOOP_HOME/share):

hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar pi 5 10

5) Open http://192.107.53.157:8088/cluster/apps in a browser.

Part 5: Troubleshooting

1. The web UI cannot be reached in a browser after the cluster is configured

https://blog.csdn.net/csdn_chuxuezhe/article/details/73322068

Change the hostname:
vi /etc/sysconfig/network
and set:
NETWORKING=yes
HOSTNAME=hadoop-master

Also edit the hosts file: vi /etc/hosts

 
192.107.53.157  hadoop-master
192.107.53.158  hadoop-slave1
192.107.53.159  hadoop-slave2
Then reboot!

References

http://www.ityouknow.com/hadoop/2017/07/24/hadoop-cluster-setup.html

Part 6: Spark cluster installation

1. Using the hadoop-master node as an example

1. Install Scala (see the sketch after step 3)
2. Install Spark (see the sketch after step 3)
3. Configure environment variables

#scala
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$SCALA_HOME/bin

#spark
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin
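For steps 1 and 2, a minimal download-and-extract sketch mirroring the JDK and Hadoop installs above; the Scala 2.11.8 and Spark 2.1.0 versions are assumptions inferred from the example jar used later (spark-examples_2.11-2.1.0.jar), so substitute the tarballs you actually downloaded:

# extract Scala to /usr/local and rename (version assumed)
tar -xzvf scala-2.11.8.tgz -C /usr/local
mv /usr/local/scala-2.11.8 /usr/local/scala
# extract Spark pre-built for Hadoop and rename (version assumed)
tar -xzvf spark-2.1.0-bin-hadoop2.7.tgz -C /usr/local
mv /usr/local/spark-2.1.0-bin-hadoop2.7 /usr/local/spark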

4. Spark configuration
cd /usr/local/spark/conf
cp spark-env.sh.template spark-env.sh
vim spark-env.sh

export JAVA_HOME=/usr/local/java
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_HOME=/usr/local/spark
export SCALA_HOME=/usr/local/scala
export SPARK_MASTER_IP=hadoop-master 
export SPARK_EXECUTOR_MEMORY=1G
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1
JAVA_HOME: Java installation directory
SCALA_HOME: Scala installation directory
HADOOP_HOME: Hadoop installation directory
HADOOP_CONF_DIR: directory of the Hadoop cluster configuration files
SPARK_MASTER_IP: IP address (or hostname) of the Spark master node
SPARK_EXECUTOR_MEMORY: memory allocated to each executor
SPARK_WORKER_CORES: number of CPU cores each worker may use
SPARK_WORKER_INSTANCES: number of worker instances started on each machine
5. Edit the slaves file
cp slaves.template slaves
vi slaves and add the worker nodes:
hadoop-slave1
hadoop-slave2

2. For hadoop-slave1 and hadoop-slave2, simply copy the scala and spark directories over

 
1. Log in to hadoop-slave1
scp -r root@hadoop-master:/usr/local/scala /usr/local

scp -r root@hadoop-master:/usr/local/spark /usr/local
2. Configure environment variables

#scala
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$SCALA_HOME/bin

#spark
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin
Log in to hadoop-slave2 and perform the same steps.

3. Start the Spark cluster

cd /usr/local/spark/sbin
./start-all.sh
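After startup, jps should additionally show a Master process on hadoop-master and a Worker process on each slave; the standalone master web UI normally listens on http://hadoop-master:8080.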

4. Run an example

The link below covers the various submit modes and can be used as a reference:

http://zhenggm.iteye.com/blog/2358324

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode client examples/jars/spark-examples_2.11-2.1.0.jar
(the yarn-client master string is deprecated since Spark 2.0; --master yarn --deploy-mode client is the current form)

Part 7: Problem resolution

问题1:Container xxx is running beyond physical memory limits

  • Log:

    Container [pid=134663,containerID=container_1430287094897_0049_02_067966] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 1.5 GB of 10 GB virtual memory used. Killing container. Dump of the process-tree for
    Analysis:
    The log shows the container was killed for exceeding its memory limit; the virtual-memory limit defaults to 2.1 times the physical allocation.
    This is a NodeManager-side setting, similar to an OS-level overcommit knob. Adjust yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml:
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>10</value>
</property>
Or simply disable the virtual-memory check by setting yarn.nodemanager.vmem-check-enabled to false:
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
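Since the log above actually reports the physical limit being hit (1.0 GB of 1 GB), the containers may also simply need more physical memory. A sketch for mapred-site.xml; the 2048/4096 values are illustrative, not from the original setup:
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
</property>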

Problem 2: The DataNode process dies

Restart the process:

sbin/hadoop-daemon.sh start datanode
(hadoop-daemon.sh is deprecated in Hadoop 3; the equivalent is bin/hdfs --daemon start datanode)

Problem 3: Configure the Spark History Server and start the process

Log files alone are often not enough; sometimes we need to browse past application runs, which requires starting the History Server on the driver node.

Add the EventLog and History Server settings to the spark-defaults.conf file under $SPARK_CONF_DIR:

# EventLog
spark.eventLog.enabled true
spark.eventLog.dir file:///opt/spark/current/spark-events
# History Server
spark.history.provider org.apache.spark.deploy.history.FsHistoryProvider
spark.history.fs.logDirectory file:///opt/spark/current/spark-events
Note that the /opt/spark/current/spark-events directory must be created; application history is saved to that path.

Run the start command:

./sbin/start-history-server.sh
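Once started, the History Server UI is served on port 18080 by default (configurable via spark.history.ui.port).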

Reference:

http://www.leonlu.cc/profession/14-spark-log-and-history/

Problem 4: Log configuration

https://blog.csdn.net/stark_summer/article/details/46929481

Spark logs both to the console and to a file (with the configuration below, /usr/local/spark/logs/spark.log); this comes from log4j appender inheritance and can be refined later. For now it is enough to change log4j.rootCategory=INFO, console,FILE to log4j.rootCategory=INFO, FILE so that logs go only to the file.

cd /usr/local/spark/conf

vim log4j.properties

log4j.rootCategory=INFO, console,FILE
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO

log4j.appender.FILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.FILE.Threshold=DEBUG
log4j.appender.FILE.file=/usr/local/spark/logs/spark.log
log4j.appender.FILE.DatePattern='.'yyyy-MM-dd
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=[%-5p] [%d{yyyy-MM-dd HH:mm:ss}] [%C{1}:%M:%L] %m%n
# spark
log4j.logger.org.apache.spark=INFO
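If the logs directory does not exist yet, create it first so the FILE appender has somewhere to write:
mkdir -p /usr/local/spark/logs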

Problem 5: Spark connecting to Kafka over SSL

Each of the three machines in the Spark cluster must generate its own key/certificate.

Put the 4 files in one directory and run the 2_ServerGenKey.sh script.

Part 8: Configure a scheduled script with cron

vim /etc/crontab
*/1 * * * * root /bin/sh /usr/local/jars/run.sh
To discard the run output, write it as: */1 * * * * root /bin/sh /usr/local/jars/run.sh > /dev/null 2>&1
To check the cron log: vim /var/log/cron
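To keep the output in a log file instead (the path here is illustrative):
*/1 * * * * root /bin/sh /usr/local/jars/run.sh >> /var/log/run-job.log 2>&1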


Reposted from blog.csdn.net/u013385018/article/details/80881552