Spark ML Part 1: Simple Text Multi-class Classification

Main reference: Multi-Class Text Classification with PySpark

The workflow is the same as in ordinary Python machine learning. Spark ML provides interfaces for the usual machine learning tasks. This article records the basics of reading data, preprocessing, and using models in Spark ML.

Step 1: Read the data

from pyspark.sql import SQLContext
from pyspark import SparkContext

sc = SparkContext()
sqlContext = SQLContext(sc)
# Read the CSV file; header='true' uses the first row as column names,
# inferschema='true' lets Spark infer the column types
data = sqlContext.read.format('com.databricks.spark.csv') \
    .options(header='true', inferschema='true') \
    .load('train.csv')
# Columns we do not need as features
drop_list = ['Dates', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']
data = data.select([column for column in data.columns if column not in drop_list])
A SQLContext can be used to create DataFrames.
SparkContext plays the leading role during the execution of a Spark application: it interacts with the driver program and the Spark cluster, requesting cluster resources and creating RDDs, accumulators, and broadcast variables.
The file to process is read in via read.format().options().load(filename). drop_list names the columns we do not need; data.select then keeps only the required feature columns.
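On Spark 2.x and later, the same read can be expressed through a SparkSession instead of SQLContext. The snippet below is only a sketch of the equivalent modern API, assuming the same train.csv file:

from pyspark.sql import SparkSession

# Sketch (not from the original post): SparkSession replaces SQLContext in
# Spark 2.x+, and the built-in csv reader replaces the
# com.databricks.spark.csv package.
spark = SparkSession.builder.appName("text-classification").getOrCreate()
data = spark.read.csv('train.csv', header=True, inferSchema=True)
data.printSchema()   # inspect the inferred column types
data.show(5)         # preview a few rows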

Step 2: Preprocess the data and build the classification dataset

from pyspark.ml.feature import RegexTokenizer, StopWordsRemover, CountVectorizer
from pyspark.ml.classification import LogisticRegression
# regular expression tokenizer
regexTokenizer = RegexTokenizer(inputCol="Descript", outputCol="words", pattern="\\W")
# stop words
add_stopwords = ["http","https","amp","rt","t","c","the"]
stopwordsRemover = StopWordsRemover(inputCol="words", outputCol="filtered").setStopWords(add_stopwords)
# bag of words count
countVectors = CountVectorizer(inputCol="filtered", outputCol="features",
                               vocabSize=10000, minDF=5)
from pyspark.ml import Pipeline
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler
label_stringIdx = StringIndexer(inputCol = "Category", outputCol = "label")
pipeline = Pipeline(stages=[regexTokenizer, stopwordsRemover, countVectors,
                            label_stringIdx])
# Fit the pipeline to training documents.
pipelineFit = pipeline.fit(data)
dataset = pipelineFit.transform(data)
dataset.show(5)
(trainingData, testData) = dataset.randomSplit([0.7, 0.3], seed = 100)
1. Tokenize the text, remove stop words, and turn the original text into bag-of-words vectors, using term frequencies as features.
2. label_stringIdx = StringIndexer(inputCol="Category", outputCol="label")
   StringIndexer encodes a column of string labels into a column of label indices (from 0 to the number of label classes minus 1), ordered by label frequency, so the most frequent label gets index 0.
   In this example the labels are encoded as integers from 0 to 32, and the most frequent label (LARCENY/THEFT) is encoded as 0. (A sketch of mapping the indices back to strings follows this list.)
3. pipeline = Pipeline(stages=[regexTokenizer, stopwordsRemover, countVectors, label_stringIdx]) builds a Pipeline that runs all of these stages as one unit.
4. (trainingData, testData) = dataset.randomSplit([0.7, 0.3], seed=100) uses randomSplit to divide the dataset into a training set and a test set.
5. In Spark ML, every algorithm follows the same fit and transform pattern.
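As mentioned in point 2, the numeric label (and later the prediction column) can be mapped back to the original Category strings with IndexToString. This is a minimal sketch, assuming the fitted pipeline pipelineFit from above; the output column name CategoryFromIndex is just an illustrative choice:

from pyspark.ml.feature import IndexToString

# Sketch: the fitted StringIndexerModel is the last stage of pipelineFit,
# and its labels attribute holds the index -> string mapping.
labelIndexerModel = pipelineFit.stages[-1]
labelConverter = IndexToString(inputCol="label", outputCol="CategoryFromIndex",
                               labels=labelIndexerModel.labels)
labelConverter.transform(dataset).select("Category", "label", "CategoryFromIndex").show(5)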

Step 3: Train and evaluate the model

lr = LogisticRegression(maxIter=20, regParam=0.3, elasticNetParam=0)
lrModel = lr.fit(trainingData)
predictions = lrModel.transform(testData)
# predictions.filter(predictions['prediction'] == 0) \
#     .select("Descript","Category","probability","label","prediction") \
#     .orderBy("probability", ascending=False) \
#     .show(n = 10, truncate = 30)
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)
A logistic regression classifier is used: lr.fit(trainingData) trains the model, and lrModel.transform(testData) produces predictions on the test set.
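Note that MulticlassClassificationEvaluator reports the F1 score by default. Other metrics can be requested explicitly via metricName; the snippet below is a small sketch (not part of the original code) showing accuracy:

from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# Sketch: metricName defaults to "f1"; "accuracy", "weightedPrecision" and
# "weightedRecall" are also supported.
acc_evaluator = MulticlassClassificationEvaluator(labelCol="label",
                                                  predictionCol="prediction",
                                                  metricName="accuracy")
print("accuracy =", acc_evaluator.evaluate(predictions))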

Step 4: Cross-validation and grid search for the classifier. This works the same way as in Python machine learning.

pipeline = Pipeline(stages=[regexTokenizer, stopwordsRemover, countVectors, label_stringIdx])
pipelineFit = pipeline.fit(data)
dataset = pipelineFit.transform(data)
(trainingData, testData) = dataset.randomSplit([0.7, 0.3], seed = 100)
lr = LogisticRegression(maxIter=20, regParam=0.3, elasticNetParam=0)
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
# Create ParamGrid for Cross Validation
paramGrid = (ParamGridBuilder()
             .addGrid(lr.regParam, [0.1, 0.3, 0.5])          # regularization parameter
             .addGrid(lr.elasticNetParam, [0.0, 0.1, 0.2])   # elastic net parameter (Ridge = 0)
#            .addGrid(model.maxIter, [10, 20, 50])           # number of iterations
#            .addGrid(idf.numFeatures, [10, 100, 1000])      # number of features
             .build())
# Create 5-fold CrossValidator
cv = CrossValidator(estimator=lr, \
                    estimatorParamMaps=paramGrid, \
                    evaluator=evaluator, \
                    numFolds=5)
cvModel = cv.fit(trainingData)

predictions = cvModel.transform(testData)
# Evaluate best model
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)
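After fitting, the CrossValidatorModel keeps the average metric for every parameter combination as well as the refitted best model. The following is a small sketch (assuming cvModel and paramGrid from above) of how they can be inspected:

# Sketch: avgMetrics is ordered like the parameter maps in paramGrid,
# so zipping them shows the cross-validated score per combination.
for params, metric in zip(paramGrid, cvModel.avgMetrics):
    print({p.name: v for p, v in params.items()}, "->", metric)

# The model refit on the whole training set with the best parameters:
print(cvModel.bestModel)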

Reprinted from blog.csdn.net/qq_34333481/article/details/85599016