Installing the SBT Build Environment

I originally followed 林子雨's article at http://dblab.xmu.edu.cn/blog/1307-2/ , but found that the sbt 0.13 release it relies on is no longer supported.
So I turned to this article instead, https://www.cnblogs.com/hank-yan/p/8686281.html , and installed version 1.1.1.

The installation itself is straightforward; what needs attention is getting Spark to work against HBase, as follows.
1. Every project needs its own build definition file, i.e. xxx.sbt. Mine is below (a couple of notes on the dependencies follow the file):

[root@k8s-1 user01]# cat /usr/local/sbt/test/simple.sbt 
name := "Simple Project"
version := "1.0" 
scalaVersion := "2.11.12"
libraryDependencies ++= Seq(
	"org.apache.spark" %% "spark-core" % "2.4.3",
	"org.apache.spark" %% "spark-sql" % "2.4.3",
	"org.apache.spark" %% "spark-hive" % "2.4.3",
	"org.apache.spark" %% "spark-streaming" % "2.4.3",
	"org.apache.hbase" % "hbase-client" % "2.1.0",
	"org.apache.hbase" % "hbase-common" % "2.1.0",
	"org.apache.hbase" % "hbase-server" % "2.1.0",
	"org.apache.hbase" % "hbase-protocol" % "2.1.0",
	"org.apache.hbase" % "hbase-mapreduce" % "2.1.0"
)
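
Two notes on this file. hbase-mapreduce is what later supplies the TableOutputFormat classes, since HBase 2.x moved the MapReduce integration out of hbase-server into that module. And if the project were ever built into a fat jar with the sbt-assembly plugin (which this setup does not use), the usual convention would be to mark the Spark artifacts as "provided" so they are compiled against but not bundled. A sketch of that variant, not the file I actually used:

name := "Simple Project"
version := "1.0"
scalaVersion := "2.11.12"
libraryDependencies ++= Seq(
	// Compiled against, but not bundled into an assembled jar: spark-submit provides these at run time.
	"org.apache.spark" %% "spark-core" % "2.4.3" % "provided",
	"org.apache.spark" %% "spark-sql" % "2.4.3" % "provided",
	"org.apache.spark" %% "spark-hive" % "2.4.3" % "provided",
	"org.apache.spark" %% "spark-streaming" % "2.4.3" % "provided",
	// The HBase artifacts stay exactly as in the original file.
	"org.apache.hbase" % "hbase-client" % "2.1.0",
	"org.apache.hbase" % "hbase-common" % "2.1.0",
	"org.apache.hbase" % "hbase-server" % "2.1.0",
	"org.apache.hbase" % "hbase-protocol" % "2.1.0",
	"org.apache.hbase" % "hbase-mapreduce" % "2.1.0"
)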

2. Configure the spark-env.sh file (a quick check that the HBase jars are visible to Spark follows the file):

[root@k8s-1 user01]# cat /opt/spark-2.4.3-bin-hadoop2.7/conf/spark-env.sh|grep -v "#"
export SPARK_MASTER_IP=k8s-1
export SPARK_WORKER_MEMORY=8g
export JAVA_HOME=/usr/java/jdk1.8.0_171
export SCALA_HOME=/opt/scala-2.11.8
export HADOOP_HOME=/opt/hadoop-3.1.1
export HADOOP_CONF_DIR=/opt/hadoop-3.1.1/etc/hadoop
export SPARK_HOME=/opt/spark-2.4.3-bin-hadoop2.7
export HIVE_HOME=/opt/apache-hive-2.1.1-bin
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/opt/spark-2.4.3-bin-hadoop2.7/jars/mysql-connector-java-5.1.33-bin.jar:/opt/hbase-2.1.0/lib/*:/opt/spark-2.4.3-bin-hadoop2.7/jars/*:/opt/hbase-2.1.0/conf
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=k8s-1:2181,k8s-2:2181,k8s-3:2181 -Dspark.deploy.zookeeper.dir=/spark"
export SPARK_DIST_CLASSPATH=$(/opt/hadoop-3.1.1/bin/hadoop classpath)
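
A side note: Spark has treated SPARK_CLASSPATH as deprecated since 1.0, with spark.driver.extraClassPath / spark.executor.extraClassPath (or --jars on spark-submit) as the documented replacements, though it still takes effect here with a warning. A quick way to confirm that the HBase jars really are visible to Spark is to try loading a couple of HBase classes from spark-shell, roughly like this:

// Paste into a spark-shell started from this installation. If the HBase jars
// on the classpath are visible, these lookups print the classes instead of
// throwing ClassNotFoundException.
Seq(
  "org.apache.hadoop.hbase.client.Put",
  "org.apache.hadoop.hbase.mapred.TableOutputFormat"
).foreach(name => println(name + " -> " + Class.forName(name)))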

3. Configure the hadoop-env.sh file:

[root@k8s-1 user01]# cat /opt/hadoop-3.1.1/etc/hadoop/hadoop-env.sh | grep -v "#" | grep -v "^$"
export JAVA_HOME=/usr/java/jdk1.8.0_171
export HADOOP_HOME=/opt/hadoop-3.1.1
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
case ${HADOOP_OS_TYPE} in
  Darwin*)
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= "
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.kdc= "
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf= "
  ;;
esac
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/hbase-2.1.0/lib/*
export HADOOP_PID_DIR=/home/hadoop/tmp

4. Configure /etc/bashrc:

[root@k8s-1 user01]# cat /etc/bashrc
export JAVA_HOME=/usr/java/jdk1.8.0_171
export JRE_HOME=/usr/java/jdk1.8.0_171/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export HADOOP_HOME=/opt/hadoop-3.1.1
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export CLASSPATH=.:$CLASSPATH:$HADOOP_HOME/lib
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export HBASE_HOME=/opt/hbase-2.1.0
export HBASE_CONF_DIR=/opt/hbase-2.1.0/conf
export PATH=$PATH:/opt/hbase-2.1.0/bin

export SCALA_HOME=/opt/scala-2.11.8
export PATH=/opt/scala-2.11.8/bin:$PATH
export SPARK_HOME=/opt/spark-2.4.3-bin-hadoop2.7
export PATH="$SPARK_HOME/bin:$PATH"

export HIVE_HOME=/opt/apache-hive-3.1.1-bin
export HIVE_LIB=$HIVE_HOME/lib

export SBT_HOME=/usr/local/sbt
export PATH=$SBT_HOME:$HIVE_HOME:$PATH
export CLASSPATH=.:$CLASSPATH:$HBASE_HOME/lib

When using Spark to load data into HBase, the way the code is written has also changed for HBase 2.1 with Scala 2.11, which caused me quite a bit of trouble.
New-style reference: https://www.cnblogs.com/swordfall/p/10517177.html
Old-style reference: https://www.2cto.com/net/201801/712752.html

Here is the Scala code that I tested and confirmed working against Hadoop 3.1.1, HBase 2.1.0, Spark 2.4.3 and Scala 2.11.12 (a sketch of the newer-API variant follows the program):

import org.apache.hadoop.hbase.{HConstants, HBaseConfiguration}
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapred.TableOutputFormat  // the old "mapred" API, paired with JobConf below
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapred.JobConf
import org.apache.spark.{SparkConf, SparkContext}
 
object SparkToHBase {
  def main(args: Array[String]) {
    if (args.length < 1) {
      System.err.println("Usage: SparkToHBase <input file>")
      System.exit(1)
    }
 
    val conf = new SparkConf().setAppName("SparkToHBase")
    val sc = new SparkContext(conf)
 
    // Each input line is expected to look like "key,value".
    val input = sc.textFile(args(0))
 
    val hConf = HBaseConfiguration.create()
    hConf.set(HConstants.ZOOKEEPER_QUORUM, "k8s-1:2181")
 
    // Old mapred-style output: a JobConf pointing at the HBase table "test".
    val jobConf = new JobConf(hConf, this.getClass)
    jobConf.setOutputFormat(classOf[TableOutputFormat])
    jobConf.set(TableOutputFormat.OUTPUT_TABLE, "test")
 
    val data = input.map { item =>
      val Array(key, value) = item.split(",")
      // The reversed key becomes the HBase row key; the value goes into prop:score.
      val rowKey = key.reverse
      val put = new Put(Bytes.toBytes(rowKey))
      put.addColumn(Bytes.toBytes("prop"), Bytes.toBytes("score"), Bytes.toBytes(value))
      (new ImmutableBytesWritable, put)
    }
    data.saveAsHadoopDataset(jobConf)
    sc.stop()
  }
}
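
For comparison with the "new style" mentioned above: the same write can also go through the newer org.apache.hadoop.hbase.mapreduce.TableOutputFormat together with saveAsNewAPIHadoopDataset. The following is only a sketch of that variant against the same "test" table and "prop:score" column; it is not the program I ran above and I have not verified it on this cluster.

import org.apache.hadoop.hbase.{HConstants, HBaseConfiguration}
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.{SparkConf, SparkContext}

object SparkToHBaseNewAPI {
  def main(args: Array[String]) {
    val sc = new SparkContext(new SparkConf().setAppName("SparkToHBaseNewAPI"))

    // Same connection settings and target table as SparkToHBase above.
    val hConf = HBaseConfiguration.create()
    hConf.set(HConstants.ZOOKEEPER_QUORUM, "k8s-1:2181")
    hConf.set(TableOutputFormat.OUTPUT_TABLE, "test")

    // With the new API the output format is described by a mapreduce Job
    // instead of a JobConf.
    val job = Job.getInstance(hConf)
    job.setOutputKeyClass(classOf[ImmutableBytesWritable])
    job.setOutputValueClass(classOf[Put])
    job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])

    val data = sc.textFile(args(0)).map { line =>
      val Array(key, value) = line.split(",")
      val put = new Put(Bytes.toBytes(key.reverse))
      put.addColumn(Bytes.toBytes("prop"), Bytes.toBytes("score"), Bytes.toBytes(value))
      (new ImmutableBytesWritable, put)
    }
    data.saveAsNewAPIHadoopDataset(job.getConfiguration)
    sc.stop()
  }
}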

Next, compile and test the code.

[user01@k8s-1 test]$ cd /usr/local/sbt/test
[user01@k8s-1 test]$ cat /usr/local/sbt/sbt
SBT_OPTS="-Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled"
java $SBT_OPTS -jar /usr/local/sbt/sbtlaunch/sbt-launch.jar "$@"
[user01@k8s-1 test]$ find
.
./src
./src/main
./src/main/scala
./src/main/scala/SparkWriteHBase.scala
./src/test
./src/test/scala
[user01@k8s-1 test]$ /usr/local/sbt/sbt package
[info] Loading project definition from /usr/local/sbt/test/project
[info] Loading settings from simple.sbt ...
[info] Set current project to Simple Project (in build file:/usr/local/sbt/test/)
[success] Total time: 3 s, completed Jun 20, 2019 4:54:28 PM
[user01@k8s-1 test]$ /opt/spark-2.4.3-bin-hadoop2.7/bin/spark-submit --class "SparkToHBase" --driver-memory 2g ./target/scala-2.11/simple-project_2.11-1.0.jar   /input/test.txt
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/spark-2.4.3-bin-hadoop2.7/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2019-06-20 16:54:42,011 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-06-20 16:54:42,279 INFO spark.SparkContext: Running Spark version 2.4.3
2019-06-20 16:54:42,294 WARN spark.SparkConf: Note that spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone/kubernetes and LOCAL_DIRS in YARN).
The rest of the output is long, so it is omitted here.

Reposted from blog.csdn.net/nickyu888/article/details/91984697