Hadoop full setup, common pitfalls included

Before you start, get the hostname mapping configured properly; for details see my other posts.

 

After downloading hdfs.tar.gz:

In /home/ldy:

mkdir apps/   

tar -xzvf hdfs.tar.gz -C /home/ldy/apps/   # apps is used just for installing hdfs and the jdk

 

Modify the environment variables: vim /etc/profile

Add the following above the last fi:

export HDP_HOME=/home/ldy/apps/hadoop-2.8.5   # the path varies from person to person

export PATH=$PATH:$HDP_HOME/sbin:$HDP_HOME/bin

Commands such as hadoop-daemon.sh live in the sbin directory (older versions kept them in bin), so it is best to put both on the PATH. The jdk is configured the same way.
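The jdk entries in /etc/profile look similar; the jdk directory below is only a hypothetical example, use the path where you actually unpacked your jdk:

export JAVA_HOME=/home/ldy/apps/jdk1.8.0_181   # hypothetical jdk directory
export PATH=$PATH:$JAVA_HOME/bin

Run source /etc/profile afterwards so the new variables take effect in the current shell.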

 

Configuration files:

In /home/ldy/apps/hadoop-2.8.5/etc/hadoop

vim hadoop-env.sh   # point it at JAVA_HOME
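Inside hadoop-env.sh the only change needed here is the JAVA_HOME line; the path below is the same hypothetical jdk directory as above. It is safer to write the path out explicitly, because daemons started over ssh do not always see the shell's JAVA_HOME:

export JAVA_HOME=/home/ldy/apps/jdk1.8.0_181   # hypothetical path, write it out explicitly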

 

vim core-site.xml
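A rough sketch of a core-site.xml for this kind of setup; the hostname ubuntu-01, the port 9000 and the hadoop.tmp.dir path are illustrative assumptions, not fixed values:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ubuntu-01:9000</value>             <!-- namenode address; identical on every server -->
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/ldy/apps/hadoop-2.8.5/tmp</value>   <!-- hypothetical working directory -->
    </property>
</configuration>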

 

 

vim hdfs-site.xml
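Likewise a rough hdfs-site.xml sketch; the two directories and the replication factor are illustrative assumptions:

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/ldy/apps/hadoop-2.8.5/data/name</value>   <!-- hypothetical namenode metadata dir -->
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/ldy/apps/hadoop-2.8.5/data/data</value>   <!-- hypothetical datanode block dir -->
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>                                       <!-- replicas per block; adjust to cluster size -->
    </property>
</configuration>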

 

 

Note: a mistake in core-site.xml can lead to an incorrect namenode address.

The property names inside the <name> tags are fixed and must not be changed.

Once the hostname mapping is in place you can use hostnames instead of IP addresses, and the address in core-site.xml must be identical on every server, so that they all use the same file system.

Also configure the secondary namenode (this is an optimization, but it is better to have it).
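The secondary namenode also goes into hdfs-site.xml; a sketch, assuming a hypothetical second host called ubuntu-02 runs it (50090 is the default http port in Hadoop 2.x):

    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>ubuntu-02:50090</value>   <!-- hypothetical host for the secondary namenode -->
    </property>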

I suggest you fully configure one server first and then copy the files over to the other servers; it saves trouble.

Copying to a remote machine requires ssh and scp.

Enable ssh:

Run ps -e | grep ssh and check whether the sshd process is there.

If it is not, the ssh server has not been started; start it with /etc/init.d/ssh start. If you are told that ssh does not exist, the server is not installed.
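Put together, the check-and-start sequence is simply:

ps -e | grep ssh             # look for an sshd entry in the output
sudo /etc/init.d/ssh start   # start the server if sshd is not running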

Install the server:

1. sudo apt-get update

2. sudo apt-get install openssh-server

 

During apt-get you may run into:

E: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable)

E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?

 

When this error appears, just run:

sudo rm /var/lib/dpkg/lock-frontend

sudo rm /var/lib/dpkg/lock

 

After that, the following error may also appear:

E: Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable)

Unable to lock directory /var/lib/apt/lists

 

sudo rm /var/lib/apt/lists/lock

Then apt-get update will work.

 

scp:

scp -r  /home/ldy/apps/hadoop-2.8.5   ubuntu-01:/home/ldy/apps/

Error: when connecting over ssh, "The authenticity of host ... can't be established"

Edit the configuration in /etc/ssh/ssh_config.

Change the following line (add it at the end if it is not there):

StrictHostKeyChecking no

Note: root login is usually disabled; switch to an ordinary user and it works fine.

When this error appears:

Permission denied, please try again

 

the connection is being refused because of an ssh permission setting. You need to change it: go to /etc/ssh and, as root, edit sshd_config.

Change PermitRootLogin no to PermitRootLogin yes.

Remember to restart ssh: sudo service ssh restart

 

hadoop namenode -format   (once is enough)

start-dfs.sh   (starts the namenode and datanode services)

This command asks for a password every time, so it is worth setting up passwordless login, preferably on the namenode server.

Passwordless login (a concrete example follows after the list):

     ssh-keygen   (just keep pressing Enter)

     ssh-copy-id <hostname>   (run this once for every host)

     vim etc/hadoop/slaves   (add the hostnames of all hosts that need passwordless login)
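A concrete run-through on the namenode server, with ubuntu-01 taken from the scp example above and ubuntu-02 as a hypothetical second host:

     ssh-keygen                # accept the defaults, keep pressing Enter
     ssh-copy-id ubuntu-01     # repeat for every host in the cluster
     ssh-copy-id ubuntu-02
     ssh ubuntu-02             # should now log in without asking for a password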

 

Note: after the virtual machine reboots you have to run start-dfs.sh again before the namenode and datanode start (they are just software, after all).
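After start-dfs.sh you can confirm the daemons really came up with jps, which lists the running Java processes:

jps    # expect a NameNode (and SecondaryNameNode, if configured) on the namenode server,
       # and a DataNode on each datanode server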

 

If anything here is wrong, corrections are welcome.

 


Origin www.cnblogs.com/ldy233/p/11206622.html