Linux JDK + SSH

Download the JDK from oracle.com

 

By default it lands in the Downloads directory. Copy it to /usr/lib/jvm/jdk with cp; create the directory with mkdir, remove one with rm -r.

 

1) sudo chmod u+x jdk-***-linux.i586.bin

2) sudo -s ./jdk-***-linux.i586.bin

3) sudo gedit /etc/profile
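Before moving on, step 1's `chmod u+x` can be rehearsed on a throwaway file to see exactly what it changes (a sketch; GNU `stat` assumed, the filename is temporary, not the real installer):

```shell
# u+x adds only the owner's execute bit, leaving group/other untouched
f=$(mktemp)
chmod 644 "$f"              # start from rw-r--r--
chmod u+x "$f"              # same flag as used on the .bin installer
stat -c '%a' "$f"           # prints 744 (rwxr--r--)
rm -f "$f"
```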

 

 

# set Java environment
export JAVA_HOME=/usr/lib/jvm/jdk/jdk1.6.0_43
export JRE_HOME=/usr/lib/jvm/jdk/jdk1.6.0_43/jre
export CLASSPATH=".:$JAVA_HOME/lib:$CLASSPATH"
export PATH="$JAVA_HOME/bin:$JRE_HOME/bin:$PATH"

 

 # set hadoop path
export HADOOP_HOME=/usr/local/hadoop-1.1.2
export PATH="$HADOOP_HOME/bin:$PATH"
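These edits take effect only in new login shells, or after `source /etc/profile` in the current one. Note how the two PATH prepends stack: the later one ends up first in lookup order. A minimal sketch using the same values as above:

```shell
# Reproduce the two prepend lines from /etc/profile in order
JAVA_HOME=/usr/lib/jvm/jdk/jdk1.6.0_43
JRE_HOME=$JAVA_HOME/jre
PATH="$JAVA_HOME/bin:$JRE_HOME/bin:$PATH"
HADOOP_HOME=/usr/local/hadoop-1.1.2
PATH="$HADOOP_HOME/bin:$PATH"
# The most recently prepended directory is searched first
echo "$PATH" | cut -d: -f1   # prints /usr/local/hadoop-1.1.2/bin
```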

 

SSH:

 

Using Ubuntu as the example:

 

sudo apt-get install ssh

 

ls -a ~

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa     # note: that is two single quotes (an empty passphrase)

cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

 

Watch the permissions: .ssh must be 700 and authorized_keys must be 600, or sshd will ignore the key:

chmod 700 ~/.ssh/
chmod 600 ~/.ssh/authorized_keys
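The permission fix can be rehearsed safely in a throwaway directory without touching your real ~/.ssh (a sketch; GNU `stat` assumed):

```shell
# Recreate the .ssh layout in a temp dir and apply the required modes
demo=$(mktemp -d)
mkdir "$demo/.ssh"
touch "$demo/.ssh/authorized_keys"
chmod 700 "$demo/.ssh"                       # only the owner may enter
chmod 600 "$demo/.ssh/authorized_keys"       # only the owner may read/write
stat -c '%a' "$demo/.ssh"                    # prints 700
stat -c '%a' "$demo/.ssh/authorized_keys"    # prints 600
rm -rf "$demo"
```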

 

Copy authorized_keys from the master to slave1. Only the master needs passwordless access to slave1; the reverse direction is not required.

scp authorized_keys slave1:~/.ssh/

 

 

In the hadoop directory (I put mine under /usr/local/):

 

conf/core-site.xml:

 

<configuration>

	<property>
	 <name>fs.default.name</name>
	 <value>hdfs://localhost:9000</value>
	</property>

</configuration>

 

hdfs-site.xml

<configuration>

	<property>
	 <name>dfs.replication</name>
	 <value>1</value>
	</property>

</configuration>

 

mapred-site.xml

<configuration>

	<property>
	 <name>mapred.job.tracker</name>
	 <value>hdfs://localhost:9001</value>
	</property>

</configuration>

 

 

Format HDFS

If you see Exception in thread "main" java.lang.NoClassDefFoundError: NameNode, it is because "namenode" in the command must be lowercase:

hadoop namenode -format

 

Start

bin/start-all.sh

Stop

bin/stop-all.sh

 

For a multi-node cluster, note the following:

See Hadoop in Action (《Hadoop实战》), p. 33

1. You may need to manually take Hadoop out of safe mode; see http://blog.chinaunix.net/uid-233938-id-3124458.html

bin/hadoop dfsadmin -safemode leave

2. Open ports 9000-9001 on the master, or simply turn the firewall off; see http://yeelor.iteye.com/blog/1928286
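Before (or after) touching the firewall, you can check from a slave whether the master's ports are actually reachable, using bash's built-in /dev/tcp so no extra tools are needed ("master" below is the hypothetical master hostname from this setup; substitute your own):

```shell
# Probe a TCP port; returns success if a connection can be opened
port_open() {  # usage: port_open host port
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}
port_open master 9000 && echo "9000 reachable" || echo "9000 blocked"
port_open master 9001 && echo "9001 reachable" || echo "9001 blocked"
```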

 


Reposted from yeelor.iteye.com/blog/1839917