Hadoop 1.1.2 and HBase 0.94.11 Configuration

Hadoop: 1.1.2

HBase: 0.94.11

Test machine IP: 172.19.32.128

1. Update apt

sudo apt-get update

sudo apt-get install vim

2. Install SSH and set up passwordless login

sudo apt-get install openssh-server

ssh localhost

exit

cd ~/.ssh/

ssh-keygen -t rsa

cat ./id_rsa.pub >> ./authorized_keys

After this, ssh localhost logs in without prompting for a password.

3. Install the Java environment

sudo apt-get install default-jre default-jdk

vim ~/.bashrc

Add at the top of the file: export JAVA_HOME=/usr/lib/jvm/default-java

source ~/.bashrc

echo $JAVA_HOME

java -version

$JAVA_HOME/bin/java -version

4. Install Hadoop

sudo tar -zxf ~/Downloads/hadoop-1.1.2.tar.gz -C /usr/local

cd /usr/local

sudo mv ./hadoop-1.1.2/ ./hadoop

sudo chown -R hadoop ./hadoop

cd /usr/local/hadoop

./bin/hadoop version

5. Pseudo-distributed configuration

cd /usr/local/hadoop/conf

vim hadoop-env.sh

Add:
export JAVA_HOME=/usr/lib/jvm/default-java
export PATH=$PATH:/usr/local/hadoop/bin

source hadoop-env.sh

hadoop version

vim core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>

vim hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/hdfs/data</value>
  </property>
</configuration>

./bin/hadoop namenode -format

./bin/start-dfs.sh

jps (NameNode, DataNode, and SecondaryNameNode should all be listed)

6. Create the HDFS user directory

cd /usr/local/hadoop

./bin/hadoop fs -mkdir /user/hadoop

(Note: Hadoop 1.x creates missing parent directories automatically; the -mkdir -p flag and the bin/hdfs script only exist in Hadoop 2.x, so all commands below use bin/hadoop fs.)

./bin/hadoop fs -ls . (the "." is the user's current HDFS directory, equivalent to ./bin/hadoop fs -ls /user/hadoop)

./bin/hadoop fs -mkdir input

(Because this input directory is created with a relative path, its full path in HDFS after creation is "/user/hadoop/input".)

./bin/hadoop fs -mkdir /input

This instead creates a directory named input under the HDFS root, as the listing below shows.
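To see the difference between the two, list both locations once they exist:

./bin/hadoop fs -ls input     (relative: resolves to /user/hadoop/input)
./bin/hadoop fs -ls /input    (absolute: the input directory at the HDFS root)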

7. HDFS file operations

Upload a file from the local filesystem to HDFS:

./bin/hadoop fs -put /home/hadoop/myLocalFile.txt input

./bin/hadoop fs -ls input (check that the upload succeeded)

./bin/hadoop fs -cat input/myLocalFile.txt

Download a file from HDFS to the local filesystem:

./bin/hadoop fs -get input/myLocalFile.txt /home/hadoop/Downloads

Copy a file from one HDFS directory to another:

./bin/hadoop fs -cp input/myLocalFile.txt /input

8. Install HBase

sudo tar -zxf ~/Downloads/hbase-0.94.11-security.tar.gz -C /usr/local

sudo mv /usr/local/hbase-0.94.11-security /usr/local/hbase

vim ~/.bashrc

export PATH=$PATH:/usr/local/hbase/bin

source ~/.bashrc

cd /usr/local

sudo chown -R hadoop ./hbase

/usr/local/hbase/bin/hbase version

9. HBase pseudo-distributed configuration

vim /usr/local/hbase/conf/hbase-env.sh

Add:
export JAVA_HOME=/usr/lib/jvm/default-java
export HBASE_MANAGES_ZK=true

(HBASE_MANAGES_ZK=true tells HBase to start and manage its own ZooKeeper instance.)

vim /usr/local/hbase/conf/hbase-site.xml

Set:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

ssh localhost

cd /usr/local/hadoop

./bin/start-dfs.sh

jps

cd /usr/local/hbase

bin/start-hbase.sh

jps (HMaster, HRegionServer, and HQuorumPeer should now appear as well)
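If everything is up, HBase will also have created its root directory in HDFS at the hbase.rootdir path configured above; a quick way to confirm:

/usr/local/hadoop/bin/hadoop fs -ls /hbase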

bin/hbase shell
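Inside the shell, a minimal smoke test (the table name test and column family cf are just examples, not part of the original setup):

create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'
disable 'test'
drop 'test'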

exit

bin/stop-hbase.sh

10. Stop Hadoop: ./bin/stop-dfs.sh (Hadoop 1.x keeps its start/stop scripts in bin/, not sbin/)

11. Access the HBase web UI at http://172.19.32.128:60010/

Access the HDFS web UI at http://172.19.32.128:50070/
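To confirm both pages are being served without leaving the server, a quick check against the same default ports, queried locally:

curl -I http://localhost:60010/
curl -I http://localhost:50070/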

12. Fixing the HBase "Table already exists" error

Enter the ZooKeeper client with the ./hbase zkcli command (in /usr/local/hbase/bin)

Run the ls /hbase/table command to see the zombie table

Delete the zombie table with the rmr /hbase/table/TABLE_NAME command

Restart HBase (the full sequence is sketched below)
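Putting the steps together (the table name mytable is hypothetical; substitute the name reported in the error):

cd /usr/local/hbase
./bin/hbase zkcli
    ls /hbase/table            (inside zkcli: list tables tracked in ZooKeeper)
    rmr /hbase/table/mytable   (delete the zombie table's entry)
    quit
./bin/stop-hbase.sh            (back in the system shell, restart HBase)
./bin/start-hbase.sh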


Reposted from www.cnblogs.com/dhName/p/10469628.html