A First Introduction to Spark Core (1)
Contents: The parts of Spark
- One: Introduction to Spark
- Two: Spark installation and configuration
- Three: Spark wordcount
- Four: Spark processing data
- Five: Spark Application
- Six: Spark log analysis
- Seven: Review
One: Introduction to Spark
1.1 The origins of Spark
Spark is a general-purpose parallel framework in the style of Hadoop MapReduce, open-sourced by the UC Berkeley AMP Lab (the AMP laboratory at the University of California, Berkeley). Spark has the strengths of Hadoop MapReduce; but unlike MapReduce, the intermediate output of a job can be kept in memory, so reading and writing HDFS between steps is no longer necessary. Spark is therefore better suited to MapReduce-style algorithms that require iteration, such as data mining and machine learning.
Spark is an open-source cluster computing environment similar to Hadoop, but there are some differences between the two, and these useful differences make Spark superior for certain workloads. In other words, Spark enables in-memory distributed datasets, which, besides providing interactive queries, can also optimize iterative workloads.
Spark is implemented in the Scala language and uses Scala as its application framework. Unlike Hadoop, Spark and Scala are tightly integrated, so Scala can manipulate distributed datasets as easily as local collection objects.
Although Spark was created to support iterative jobs on distributed datasets, it is actually complementary to Hadoop and can run in parallel on the Hadoop file system; this behavior is supported through a third-party cluster framework named Mesos. Developed by the AMP Lab (Algorithms, Machines, and People Lab) at UC Berkeley, Spark can be used to build large-scale, low-latency data analysis applications.
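As a minimal sketch of why in-memory datasets help iterative workloads, the snippet below caches an RDD once and then runs several passes over it; without the cache, each pass would re-read the data from HDFS. The path and the per-iteration computation here are illustrative assumptions, not from the original text:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object IterativeSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("IterativeSketch").setMaster("local[2]"))

    // Load once and keep the lines in memory.
    val data = sc.textFile("/input/README.md").cache()

    // Each iteration reuses the cached partitions instead of re-reading HDFS.
    for (i <- 1 to 10) {
      val linesWithSpark = data.filter(_.contains("Spark")).count()
      println(s"iteration $i: $linesWithSpark lines mention Spark")
    }

    sc.stop()
  }
}
```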
1.2 The Spark ecosystem
![image_1b40erb9d10q31t0qqdmsqs1lksm.png-75.8kB][1]
1.3 Comparing Spark with Hadoop MapReduce
MapReduce: Hive, Storm, Mahout, Giraph
Spark Core: Spark SQL, Spark Streaming, Spark ML, Spark GraphX, SparkR
1.4 Where Spark can run
A Spark application runs everywhere:
local, YARN, Mesos, standalone, EC2, ...
![image_1b40f3h4j1c0m2au1qmlrk61a4s13.png-145.4kB][2]
Two: Spark installation and configuration
2.1 Configure the Hadoop environment and install scala-2.10.4.tgz
tar -zxvf scala-2.10.4.tgz -C /opt/modules
vim /etc/profile
export JAVA_HOME=/opt/modules/jdk1.7.0_67
export HADOOP_HOME=/opt/modules/hadoop-2.5.0-cdh5.3.6
export SCALA_HOME=/opt/modules/scala-2.10.4
export SPARK_HOME=/opt/modules/spark-1.6.1-bin-2.5.0-cdh5.3.6
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$SCALA_HOME/bin:$SPARK_HOME/bin
2.2 Install spark-1.6.1-bin-2.5.0-cdh5.3.6.tgz
tar -zxvf spark-1.6.1-bin-2.5.0-cdh5.3.6.tgz
mv spark-1.6.1-bin-2.5.0-cdh5.3.6 /opt/modules
cd /opt/modules/spark-1.6.1-bin-2.5.0-cdh5.3.6/conf
cp -p spark-env.sh.template spark-env.sh
cp -p log4j.properties.template log4j.properties
vim spark-env.sh
Add:
JAVA_HOME=/opt/modules/jdk1.7.0_67
SCALA_HOME=/opt/modules/scala-2.10.4
HADOOP_CONF_DIR=/opt/modules/hadoop-2.5.0-cdh5.3.6/etc/hadoop
![image_1b40o15ugt8v1stklft1ahfm289.png-115.2kB][3]
2.3 Spark command execution and invocation
Run the Spark shell:
bin/spark-shell
![image_1b40oa3e217t01nuoqlp1tc01o69m.png-406.3kB][4]
2.4 Run a test file:
hdfs dfs -mkdir /input
hdfs dfs -put README.md /input
Run some statistics:
scala> val rdd = sc.textFile("/input/README.md")
![image_1b40qa6ll9uojo45leq41ctb2a.png-232.9kB][5]
rdd.count (count the number of lines)
rdd.first (show the first line)
rdd.filter(line => line.contains("Spark")).count (count how many lines contain the string "Spark")
![image_1b40qb9vd2151l8o8kd189l4ll2n.png-458.2kB][6]
![image_1b40qbsttjgi1c4ng2st31b34.png-118kB][7]
scala> rdd.map(line => line.split(" ").size).reduce(_ + _) (count the total number of words: map gives the word count per line, reduce sums them)
![image_1b40qqkcf88v1rpvlks86q1kbv3h.png-240.3kB][8]
Three: Spark wordcount statistics
3.1 Wordcount statistics with Spark
val rdd=sc.textFile("/input") #### rdd: read the files
rdd.collect ### rdd: display the file contents
rdd.count #### rdd: show how many lines of data there are
![image_1b40roaqd1llq196lj4p1r8mfnk9.png-223.2kB][9]
![image_1b40rp3pi6ck12pi10k516bh1u3lm.png-908.7kB][10]
3.2 The three steps of Spark data processing
input
scala> val rdd =sc.textFile("/input") #### (input the data)
process
val WordCountRDD = rdd.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(( a , b ) => ( a + b )) ###### (process the data)
Shorthand:
val WordCountRDD = rdd.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
output
scala> WordCountRDD.saveAsTextFile("/output3")
scala> WordCountRDD.collect
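To see what each step produces, here is a small trace (the two sample lines are made up for illustration):

```scala
val sample = sc.parallelize(Seq("a b a", "b c"))
sample.flatMap(_.split(" ")).collect          // Array(a, b, a, b, c)
sample.flatMap(_.split(" "))
      .map((_, 1)).collect                    // Array((a,1), (b,1), (a,1), (b,1), (c,1))
sample.flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _).collect             // Array((a,2), (b,2), (c,1)) -- order may vary
```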
![image_1b40sv5d9e1615l01jkv1chf1qbn13.png-223.5kB][11]
![image_1b40t01h51m2sb0l1qkb7285rn1g.png-77.2kB][12]
![image_1b40t1e44122419bra65141cj4d2a.png-800.8kB][13]
![image_1b40t3hfg1iln174g1nd411fo17c92n.png-133.7kB][14]
![image_1b40t3vonno21ipr1lfap7319j334.png-78.8kB][15]
![image_1b40t8vos2md1f7717l7k18136l3h.png-827.2kB][16]
Four: Spark processing data
4.1 Data statistics with Spark
Processing pageview data with Spark:
hdfs dfs -mkdir /page
hdfs dfs -put page_views.data /page
Read the data:
val rdd = sc.textFile("/page")
Process the data (here the third tab-separated field, arr(2), is taken as the grouping key, e.g. the page URL):
val PageRdd = rdd.map(line => line.split("\t")).map(arr => (arr(2), 1)).reduceByKey(_ + _)
Take the first ten records:
PageRdd.take(10)
![image_1btqin3s91cjefam40b1tsp114k13.png-223.3kB][17]
![image_1btqinr0eda5207qta1ga01rcb1g.png-405.2kB][18]
![image_1btqiof408m11m02pjavr713f1t.png-264.1kB][19]
![image_1btqiosfacmk38b1alm8ea1ru22a.png-264kB][20]
Cache the data in memory:
rdd.cache
rdd.count
rdd.map(line => line.split("\t")).map(arr => (arr(2), 1)).reduceByKey(_ + _).take(10)
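Note that `rdd.cache` is lazy: it only marks the RDD for in-memory storage, and nothing is materialized until an action such as `rdd.count` runs, which is why the count is issued right after caching. A minimal sketch:

```scala
val rdd = sc.textFile("/page")
rdd.cache        // lazy: just sets the storage level to MEMORY_ONLY
rdd.count        // first action: reads from HDFS and fills the cache
rdd.count        // later actions read the cached blocks from memory
rdd.unpersist()  // free the cached blocks when they are no longer needed
```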
![image_1btqj33pv1aji1cc66joo3617c934.png-110.3kB][22]
![image_1btqj7ki01r5t1q1d16751426j143h.png-622.3kB][23]
![image_1btqj89941qv9n4v11po1ifo11h3u.png-221.1kB][24]
Five: Spark Application
5.1 The running modes of Spark
A Spark application can run on:
-1. YARN (currently the most common)
-2. Standalone (Spark's own distributed resource management and task scheduling)
-3. Mesos
Hadoop version notes:
hadoop 2.x: release 2.2.0, 2013/10/15
hadoop 2.0.x - alpha
hadoop 2.1.x - beta
CDH3.x - 0.20.2
CDH4.x - 2.0.0
HDFS -> HA: QJM; Federation
Cloudera Manager 4.x
CDH5.x
5.2 Spark standalone mode
Standalone is Spark's own distributed resource management and task scheduling framework,
similar to a framework like YARN.
It is distributed:
Master node:
Master - analogous to the ResourceManager
Slave nodes:
Worker - analogous to the NodeManager
Open spark-env.sh
and append at the end:
SPARK_MASTER_IP=192.168.3.1
SPARK_MASTER_PORT=7077
SPARK_MASTER_WEBUI_PORT=8080
SPARK_WORKER_CORES=2
SPARK_WORKER_MEMORY=2g
SPARK_WORKER_PORT=7078
SPARK_WORKER_WEBUI_PORT=8081
SPARK_WORKER_INSTANCES=1 ## how many Worker instances each machine can run
cd /soft/spark/conf
cp -p slaves.template slaves
echo "flyfish01.yangyang.com" > slaves
------
Start Spark
cd /soft/spark/sbin
./start-master.sh
./start-slaves.sh
start-slaves.sh starts all the slave nodes, i.e. the Worker nodes.
Note: the machine running this command must have passwordless SSH login configured with the master node, otherwise problems such as password prompts will appear during startup.
![image_1btqlhj441a31q1i1ear3b91b0t4b.png-354.5kB][25]
![image_1btqlkhop7eplabtmt1nmq1h6c4o.png-156.3kB][26]
![image_1btqlldjn115mb6rj4t1pec1o9855.png-226.3kB][27]
Running a job on standalone:
bin/spark-shell --master spark://192.168.3.1:7077
![image_1btqlud421ntv1e7i9vobri16fq5v.png-402.7kB][28]
![image_1btqlutdp1q15ki7130dv35lu6c.png-151.6kB][29]
- 5.3 Running on standalone
Read the data: val rdd = sc.textFile("/page")
Process the data:
val PageRdd = rdd.map(line => line.split("\t")).map(arr => (arr(2), 1)).reduceByKey(_ + _)
Take the first ten records:
PageRdd.take(10)
![image_1btqm9vgb2u6hr01rjs1eqn1kbc6p.png-222.4kB][30]
![image_1btqmb3hua17tdlopvhf21rj97m.png-95.8kB][31]
![image_1btqmbec81m7a10estl31bu3fmq83.png-227.9kB][32]
![image_1btqmbpv3goc6m912mg135t14uk8g.png-199.3kB][33]
![image_1btqmcmid1o4f1tln146nii171l8t.png-233.6kB][34]
![image_1btqmdimmi7hr9okh0rhqjs49a.png-243.3kB][35]
![image_1btqme14ko7p1or7gas1gk5isf9n.png-222.3kB][36]
![image_1btqnijbp19i8lfu15728pi1t3hbe.png-159.3kB][37]
### 5.4 A Spark application consists of two parts
1. The Driver program -> web UI on port 4040 (then 4041, 4042, ...)
   - runs the main method
   - creates the SparkContext -- the most important object
2. The Executors
   - each Executor is a JVM (process)
   - runs the tasks of our jobs
The REPL (the spark-shell interactive command line) is itself a Driver program.
A Spark application breaks down as:
spark Application
  job-01 (e.g. triggered by count)
  job-02
    stage-01
      task-01 (thread) -> comparable to a map task (process) in MapReduce
      task-02 (thread) -> comparable to a map task (process) in MapReduce
    stage-02
  job-03
All tasks in a stage run the same business logic; only the data they process differs.
From the program run above:
if a function called on an RDD returns something that is not an RDD, it triggers a job to execute.
Think: what exactly does reduceByKey do?
-1. It groups together the values of the same key
-2. It merges those values with the reduce function
After analysis, and comparing with how the wordcount program runs in MapReduce, we infer that the division of stages in a Spark job follows whether a shuffle occurs between RDDs.
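As a sketch of that inference, the wordcount pipeline below is annotated with where the stage boundary falls under the shuffle rule above:

```scala
val rdd = sc.textFile("/input")

val wordCountRDD = rdd
  .flatMap(_.split(" "))   // narrow transformation: stays in stage 1
  .map((_, 1))             // narrow transformation: still stage 1
  .reduceByKey(_ + _)      // shuffle: records with the same key must move
                           // between partitions, so stage 2 begins here

wordCountRDD.collect       // returns an Array (not an RDD), so it triggers a job
```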
![image_1btqmk5k0161l1qk1udq1c6t1l70a4.png-237.5kB][38]
![image_1btqn68elc5c4921dfve5ll25ah.png-213.1kB][39]
Query in descending order:
val rdd = sc.textFile("/input")
val WordContRdd = rdd.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
val sortRdd = WordContRdd.map(tuple => (tuple._2, tuple._1)).sortByKey(false)
sortRdd.collect
sortRdd.take(3)
sortRdd.take(3).map(tuple => (tuple._2, tuple._1))
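The two `map` calls around `sortByKey` are only needed because `sortByKey` sorts by the key; an equivalent form using `RDD.sortBy` (available since Spark 1.0) sorts directly on the count:

```scala
// Sort by the count (the second tuple element) in descending order,
// with no key/value swapping needed.
WordContRdd.sortBy(_._2, ascending = false).take(3)
```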
![image_1btqp481vu2g1mur1gd1td9g2hbr.png-247.2kB][40]
![image_1btqp5g401hg679vcdg524g9d8.png-98.8kB][41]
![image_1btqp6g6l1ui91pm77mb1v5n1rs7dl.png-559.1kB][42]
![image_1btqp72rh15ik3e27ln11mojthe2.png-286.8kB][43]
![image_1btqp7o251g05ksijvt1qh11v4sef.png-426.9kB][44]
![image_1btqp8bk61vbh74enlu1cbu652es.png-351.2kB][45]
![image_1btqp8vm6638545det164rrbf9.png-212.9kB][46]
Implicit conversions in Scala:
An implicit conversion converts one type to another.
It is written as an implicit function, declared with implicit def.
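A minimal sketch of an implicit conversion (the `TimesRunner` wrapper here is an invented example, not from the original text):

```scala
import scala.language.implicitConversions

object ImplicitDemo {
  // A wrapper class that adds a method Int does not have.
  class TimesRunner(n: Int) {
    def times(body: => Unit): Unit = (1 to n).foreach(_ => body)
  }

  // implicit def: the compiler applies this conversion automatically
  // whenever an Int is used where a TimesRunner method is called.
  implicit def intToTimesRunner(n: Int): TimesRunner = new TimesRunner(n)

  def main(args: Array[String]): Unit = {
    3.times { println("hello") }  // Int implicitly converted to TimesRunner
  }
}
```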
### 5.5 Developing Spark jobs in production
How to develop a Spark application:
spark-shell + IDEA
-1. Write the code in IDEA
-2. Run the code in spark-shell
-3. Use IDEA to package the code into a jar, then submit and run it with bin/spark-submit
### 5.6 Spark programming with IDEA on Linux
Take the top 10 records out of 100,000:
package com.ibeifeng.bigdata.senior.core

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

/**
 * Created by root on 17-11-2.
 * Driver Program
 */
object SparkApp {
  def main(args: Array[String]) {
    // step 0: create the SparkConf
    val sparkConf = new SparkConf()
      .setAppName("SparkApplication")
      .setMaster("local[2]")
    // create the SparkContext
    val sc = new SparkContext(sparkConf)
    //*=========================================*/
    // step 1: input data
    val rdd = sc.textFile("/page/page_views.data")
    // step 2: process data
    val pageWordRddTop10 = rdd
      .map(line => line.split("\t"))
      .map(x => (x(2), 1))
      .reduceByKey(_ + _)
      .map(tuple => (tuple._2, tuple._1))
      .sortByKey(false)
      .take(10)
    // step 3: output data
    pageWordRddTop10.foreach(println(_))
    //*=========================================*/
    // close spark
    sc.stop()
  }
}
![image_1btt7cn2ov6v1oqj1i3h12s51jlp9.png-176kB][47]
- 5.7 Package the code into a jar and run it
![image_1btt7p7e0t2hok4135n1oha1k619.png-262.8kB][48]
![image_1btt7rpep1niaa17rin116rn73m.png-304.6kB][49]
![image_1btt7tvej12u9kuofa5i5nmbs13.png-342kB][50]
![image_1btt82ao11a0saie1thh108ve8d1g.png-406.7kB][51]
![image_1btt84pt74j514mr991ng41h3m1t.png-354kB][52]
![image_1btt85jdd1hb91nmass8m19dr72a.png-360.8kB][53]
![image_1btt894q2kn71k0t1dth1kts1mtu2n.png-540.4kB][54]
![image_1btt8b81a1n336so19vhnci15at34.png-271.4kB][55]
![image_1btt8c7n2vgmrik10m82avu741.png-170.1kB][56]
![image_1btt8dd7ov6g36k17bepp8ue4u.png-171.5kB][57]
![image_1btt8ef5jg5i10ao1dcr1l4surc5b.png-109.4kB][58]
- 5.8 Submitting Spark jobs:
Run locally:
bin/spark-submit Scala_Project.jar
![image_1btt92m7bu6c1hs916k611n6iu55o.png-271kB][59]
![image_1btt93bqe1kvg1dnpddq9231tch65.png-320.8kB][60]
Run on standalone:
![image_1btt998qio8vngo6882qj15kd6i.png-537.9kB][61]
![image_1btt9c12sgevk9d11v312uue9k6v.png-254.9kB][62]
![image_1btt9ck8mg0ergftghvsk1c407c.png-106.4kB][63]
Start Spark standalone:
sbin/start-master.sh
sbin/start-slaves.sh
![image_1btt9fo8f10jc1v7j135okol15ts7p.png-197.8kB][64]
![image_1btt9iaav1t8rftsl5413je197ra6.png-312.2kB][65]
bin/spark-submit --master spark://192.168.3.1:7077 Scala_Project.jar
![image_1btt9mub7cgg2et1e5r1irt5nmaj.png-554.6kB][66]
![image_1btt9nkq75uh15m014rqaii2htb0.png-226.9kB][67]
![image_1btt9o8ms1d7tmii1crlcov18lhbd.png-358.8kB][68]
- 5.9 Configuring the Spark history server
The history server lets Spark monitor completed Spark applications.
It is divided into two parts:
First: configure the Spark application to record event log information while it runs.
Second: start the history server to view the logs through the web interface.
------
Configure the history server:
cd /soft/spark/conf
cp -p spark-defaults.conf.template spark-defaults.conf
vim spark-defaults.conf
spark.master spark://192.168.3.1:7077
spark.eventLog.enabled true
spark.eventLog.dir hdfs://192.168.3.1:8020/SparkJobLogs
spark.eventLog.compress true
Note: the event log directory must already exist on HDFS (hdfs dfs -mkdir /SparkJobLogs), otherwise applications will fail when event logging starts.
Start spark-shell:
bin/spark-shell
![image_1bttakmgv17b0ig01tqb7o3qaibq.png-397kB][69]
![image_1bttamgcdposngoo8suvb16thd7.png-396.3kB][70]
![image_1bttaofol1mi71b1qnr7lt416lmdk.png-150.2kB][71]
bin/spark-submit --master spark://192.168.3.1:7077 Scala_Project.jar
![image_1bttbbcutkflom1irq1pcm1d5e1.png-264.7kB][72]
![image_1bttbc1s110191g8h17u1ji21bdqee.png-80.5kB][73]
![image_1bttbco9e1o5v60hep3d91v73er.png-263.1kB][74]
![image_1bttbdbapgiq8i51af6d5f1avff8.png-305.7kB][75]
![image_1bttbec3t5cn1eac1co51gsd324fl.png-227.2kB][76]
Configure the server side of the Spark history server:
vim spark-env.sh
SPARK_MASTER_IP=192.168.3.1
SPARK_MASTER_PORT=7077
SPARK_MASTER_WEBUI_PORT=8080
SPARK_WORKER_CORES=2
SPARK_WORKER_MEMORY=2g
SPARK_WORKER_PORT=7078
SPARK_WORKER_WEBUI_PORT=8081
SPARK_WORKER_INSTANCES=1 ## how many Worker instances each machine can run
# Add:
SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://flyfish01.yangyang.com:8020/SparkJobLogs -Dspark.history.fs.cleaner.enabled=true"
-------------
# start historyserver
cd /soft/spark
sbin/start-history-server.sh
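By default the history server web UI is then available on port 18080.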
![image_1bttc313v58p9so168t2e83gggi.png-103.6kB][77]
![image_1bttcac0b1jn51to41eogiq31n20gv.png-208.5kB][78]
![image_1bttcckkcafj1dhl1l2fj406b9hc.png-373.2kB][79]
![image_1bttcehdv1dea14qrp8t127v1ueghp.png-367.4kB][80]
---
### Six: Spark log analysis
Requirement 1:
The average, min, and max content size of responses returned from the server.
ContentSize
Requirement 2:
A count of the response codes returned.
responseCode
Requirement 3:
All IP addresses that have accessed this server more than N times.
ipAddresses
Requirement 4:
The top endpoints requested, by count.
endPoint
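As a sketch of how one of these requirements maps onto an RDD pipeline (Requirement 2 here; the input path, field separator, and field index are illustrative assumptions, since the log format is not shown):

```scala
// Count of each response code returned (sketch).
val logs = sc.textFile("/logs/access.log")   // hypothetical input path

val responseCodeCounts = logs
  .map(_.split(" "))                         // assume space-separated fields
  .filter(_.length > 8)                      // skip malformed lines
  .map(fields => (fields(8), 1))             // assume field 8 holds the response code
  .reduceByKey(_ + _)

responseCodeCounts.collect.foreach(println)
```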
### 6.1 Creating the project with Maven:
#### 6.1.1 Create it from the command line
mvn archetype:generate -DarchetypeGroupId=org.scala-tools.archetypes -DarchetypeArtifactId=scala-archetype-simple -DremoteRepositories=http://scala-tools.org/repo-releases -DgroupId=com.ibeifeng.bigdata.spark.app -DartifactId=log-analyzer -Dversion=1.0
#### 6.1.2 Import the project
![image_1bttptu0g1mip1906188j1mtn1pavme.png-67.3kB][81]
![image_1bttpumtj6151ofu15e8igs1a9smr.png-151.8kB][82]
![image_1bttpveo11qng67s1p7a19eo1m1nn8.png-81kB][83]
![image_1bttq017jfv6ne83c44516mtnl.png-174kB][84]
![image_1bttq0hb162d145a1sh8qraphbo2.png-75.3kB][85]
![image_1bttq12k21mom9um1nhpov01begof.png-251.4kB][86]
![image_1bttq22lq9pd9pgb71jo91cldos.png-73.5kB][87]
![image_1bttq6ti46o2qcj1n851ecuu90p9.png-195.6kB][88]
#### 6.1.3 The pom.xml file:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.ibeifeng.bigdata.spark.app</groupId>
<artifactId>log-analyzer</artifactId>
<version>1.0</version>
<name>${project.artifactId}</name>
<description>My wonderfull scala app</description>
<inceptionYear>2010</inceptionYear>
<properties>
<encoding>UTF-8</encoding>
<hadoop.version>2.5.0</hadoop.version>
<spark.version>1.6.1</spark.version>
</properties>
<dependencies>