Hadoop installation + HBase pseudo-distributed installation

Part I covers the installation of Hadoop. The general pattern of the configuration is to add a few lines at the end of each file; there are also file paths that must be created manually (see the mkdir step after the core-site.xml listing below).

1 Modify the hostname
[root@xinyanfei conf]# hostname
xinyanfei

2 Configure the hosts file
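A minimal /etc/hosts entry (a sketch; here the hostname maps to the loopback address, which matches the ping output in the next step):

[root@xinyanfei conf]# cat /etc/hosts
127.0.0.1   localhost xinyanfei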

3 Ping the hostname
[root@xinyanfei conf]# ping xinyanfei
PING xinyanfei (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.036 ms

4 Download Hadoop and extract it
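For example (a sketch, assuming the Apache archive mirror and the /export/servers install path used throughout this article):

[root@xinyanfei ~]# wget http://archive.apache.org/dist/hadoop/core/hadoop-1.0.4/hadoop-1.0.4.tar.gz
[root@xinyanfei ~]# tar -zxf hadoop-1.0.4.tar.gz -C /export/servers/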
[root@xinyanfei ~]# cd /export/servers/hadoop-1.0.4/conf/

5 View the modified core-site.xml
[root@xinyanfei conf]# cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl" ?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://xinyanfei:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/export/Data/hadoop/tmp</value>
        </property>
</configuration>
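As noted above, some paths must be created by hand; the hadoop.tmp.dir configured here is the one to create before formatting the NameNode:

[root@xinyanfei conf]# mkdir -p /export/Data/hadoop/tmp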

6 View the modified hadoop-env.sh
[root@xinyanfei conf]# cat hadoop-env.sh
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS

# Extra ssh options.  Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"

# Where log files are stored.  $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

# File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

# host:path where hadoop code should be rsync'd from.  Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1

# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids

# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HADOOP_NICENESS=10

export JAVA_HOME=/export/servers/jdk1.7.0_80/
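A quick sanity check that the JAVA_HOME set above points at a real JDK:

[root@xinyanfei conf]# /export/servers/jdk1.7.0_80/bin/java -version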

7 View the modified hdfs-site.xml

[root@xinyanfei conf]# cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

8 View the modified mapred-site.xml

[root@xinyanfei conf]# cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property> 
        <name>mapred.job.tracker</name> 
        <value>xinyanfei:9001</value> 
    </property>
</configuration>

9 View the modified masters and slaves files
[root@xinyanfei conf]# cat masters
localhost
[root@xinyanfei conf]# cat slaves
localhost



10 Format the NameNode with hadoop namenode -format
[root@localhost bin]# hadoop namenode -format


Warning: $HADOOP_HOME is deprecated.

16/11/17 13:44:59 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
************************************************************/
16/11/17 13:44:59 INFO util.GSet: VM type       = 64-bit
16/11/17 13:44:59 INFO util.GSet: 2% max memory = 17.78 MB
16/11/17 13:44:59 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/11/17 13:44:59 INFO util.GSet: recommended=2097152, actual=2097152
16/11/17 13:45:00 INFO namenode.FSNamesystem: fsOwner=root
16/11/17 13:45:00 INFO namenode.FSNamesystem: supergroup=supergroup
16/11/17 13:45:00 INFO namenode.FSNamesystem: isPermissionEnabled=false
16/11/17 13:45:00 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
16/11/17 13:45:00 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
16/11/17 13:45:00 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/11/17 13:45:00 INFO common.Storage: Image file of size 110 saved in 0 seconds.
16/11/17 13:45:00 INFO common.Storage: Storage directory /export/Data/hadoop/tmp/dfs/name has been successfully formatted.
16/11/17 13:45:00 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
[root@localhost bin]#
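After a successful format, start the daemons and confirm they are up (a sketch; in Hadoop 1.x the start script sits next to the hadoop command in bin):

[root@localhost bin]# ./start-all.sh
[root@localhost bin]# jps
(expect NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker in the output)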

==============================


Part II: Install and configure HBase



1. Modify the configuration files hbase-env.sh and hbase-site.xml in the conf directory under hbase-0.94.18. Modify hbase-env.sh as follows:

export JAVA_HOME=/usr/Java/jdk1.6

export HBASE_CLASSPATH=/usr/hadoop/conf

export HBASE_MANAGES_ZK=true

# HBase log directory
export HBASE_LOG_DIR=/root/hadoop/hbase-0.94.6.1/logs

hbase-site.xml is modified as follows:

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
</configuration>

After completing the above operations, HBase can be started normally. Startup sequence: start Hadoop first, then HBase; shutdown sequence: stop HBase first, then Hadoop. Note that hbase.rootdir above must point at the same host:port as fs.default.name in Hadoop's core-site.xml.
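Concretely, the ordering looks like this (a sketch, assuming the Hadoop and HBase bin directories are on the PATH):

start-all.sh      # start Hadoop first
start-hbase.sh    # then HBase
stop-hbase.sh     # stop HBase first
stop-all.sh       # then Hadoop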

Start HBase:

zcf@zcf-K42JZ:/usr/local/hbase$ bin/start-hbase.sh

View processes with jps:

4798 SecondaryNameNode
16790 Jps
4275 NameNode
5154 TaskTracker
16269 HQuorumPeer
4908 JobTracker
16610 HRegionServer
5305
4549 DataNode
16348 HMaster

Enter shell mode: bin/hbase shell

HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.18, r1577788, Sat Mar 15 04:46:47 UTC 2014


hbase(main):001:0>
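A quick smoke test from the shell (illustrative table and column-family names):

hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):003:0> scan 'test'
hbase(main):004:0> exit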

Stop HBase first, then stop Hadoop.

We can also manage and view the HBase database through its web UI.

HMaster: http://192.168.0.10:60010/master.jsp



Note: the default hbase.master port of HBase is 60000:

<property>
    <name>hbase.master</name>
    <value>192.168.0.10:60000</value>
</property>
If the master port is modified in the configuration file, then when using the Java API you must load that XML file explicitly, e.g. configuration.addResource(new FileInputStream(new File("hbase-site.xml"))); otherwise the client will report: org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.io.IOException: Call to master1/172.22.2.170:60000 error
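A minimal client sketch of this (assuming the HBase 0.94 client API; the file path and class name are illustrative):

import java.io.File;
import java.io.FileInputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class HBaseConnect {
    public static void main(String[] args) throws Exception {
        // Start from the default HBase configuration...
        Configuration configuration = HBaseConfiguration.create();
        // ...then load the site file carrying the non-default hbase.master
        // port, so the client dials the right address.
        configuration.addResource(new FileInputStream(new File("hbase-site.xml")));

        // Throws MasterNotRunningException if the master cannot be reached.
        HBaseAdmin admin = new HBaseAdmin(configuration);
        System.out.println("Master running: " + admin.isMasterRunning());
        admin.close();
    }
}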
