Computing Parameter Weights with a Random Forest (with complete code and data format)

A while ago I was working on some weather-forecasting tasks that involved a large amount of complex data analysis and preprocessing.

This article briefly describes how I use a random forest to analyze the data and compute each weather parameter's contribution to the target parameter, i.e., the parameter weights (feature importances).

First, import the required packages:

from sklearn.ensemble import RandomForestClassifier
import pandas as pd
import numpy as np
import sys
import csv

Next, read the preprocessed weather-parameter CSV file, i.e., the X values. The X I use is 100-dimensional:

# Read the x values (the 10x10 wind field, flattened to 100 features),
# convert them to a numpy array, and feed them into the model
path = sys.path[0] + '/'

file = path + 'weither/data/z100_getdata.csv'

xdata = []
xname = []
timex = []
with open(file, encoding='utf-8') as csvfile:
    csvreader = csv.reader(csvfile)
    for row in csvreader:
        if row[1] == '925':           # keep only the 925 hPa pressure level
            timex.append(row[0][:8])  # date part of the timestamp
            xdata.append([float(v) for v in row[2:]])

print(timex[:10])  # the time bookkeeping can be omitted; it is kept in case missing values need to be filled
for m in range(100):
    xname.append('x' + str(m))
xdata.pop(-1)  # drop the last x sample (pairs with ydata.pop(0) below, keeping x and y aligned)
xdata = np.array(xdata)
xname = np.array(xname)

#print(xdata[:5], len(xdata), type(xdata), xname[:5])

df = pd.DataFrame(xdata, columns=xname)

df['is_train'] = np.random.uniform(0, 1, len(df)) <= .75  # split into training (~75%) and test sets
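For reference, the parser above expects each row of z100_getdata.csv to carry a timestamp, a pressure level, and then the 100 flattened grid values. The layout below is inferred from the parsing code, not taken from the original data; the values are illustrative only:

# Inferred layout of z100_getdata.csv (illustrative values, not real data):
# timestamp,  level, v0,  v1,  ..., v99
# 2016010100, 925,   3.2, 1.7, ..., 0.4   <- kept   (row[1] == '925')
# 2016010100, 850,   2.9, 1.5, ..., 0.6   <- skipped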

Next, read the y values and preprocess them. y is one-dimensional and must have exactly the same length as x; the yname parameter holds the class labels:

# Read the y values (the PM2.5 readings), convert them to a numpy array,
# and feed them into the model
file = path + 'weither/data/sites_2016_getdata.csv'

ydataf = []
yname = []
ydata = []
timey = []
with open(file, encoding='utf-8') as csvfile:
    csvreader = csv.reader(csvfile)
    for row in csvreader:
        if row[0][0] == '2':   # data rows start with the year, e.g. '2016...'
            timey.append(row[0])
            ydataf.extend([int(v) for v in row[1:]])

print(timey[:10])
cha = set(timex) - set(timey)  # timestamps present in x but missing from y
print(cha)
i = 0
while i <= len(ydataf) - 6:
    # average each group of 6 readings and map it to a class index of
    # width 40 (sum/240 == mean/40), capping everything above class 12 at 13
    c = int(sum(ydataf[i:i+6]) / 240)
    if c > 12:
        ydata.append(13)
    else:
        ydata.append(c)
    i = i + 6
#print(ydata[:5], len(ydata), type(ydata))
for m in range(len(set(ydata))):
    yname.append('y' + str(m * 40))  # class labels: y0, y40, y80, ...
ydata.pop(0)  # drop the first y sample (pairs with xdata.pop(-1) above)

ydata = np.array(ydata)  # convert to numpy arrays; the result must be assigned back
yname = np.array(yname)

#print(ydata[:5], set(ydata), yname)
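To make the binning concrete, here is a tiny worked example. The reading that each group of 6 values holds simultaneous PM2.5 measurements is my assumption; the original code only shows the arithmetic:

def bin_pm25(readings):
    # average the 6 readings and map to a class index of width 40, capped at 13
    c = int(sum(readings) / 240)  # == int(mean(readings) / 40) for 6 readings
    return min(c, 13)

print(bin_pm25([35, 42, 38, 40, 45, 40]))  # sum = 240 -> 240/240 = 1 -> class 1, label 'y40'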

Next comes the random forest itself. Load the parameters and simply call sklearn's RandomForestClassifier:

df['species'] = pd.Categorical.from_codes(ydata, yname)
#print(df['species'])
df.head()
train, test = df[df['is_train']==True], df[df['is_train']==False]
features = df.columns[:100]
clf = RandomForestClassifier(n_jobs=2)

y, uniques = pd.factorize(train['species'])  # uniques maps the factorized codes back to labels
clf.fit(train[features], y)
preds = uniques[clf.predict(test[features])]  # decode predictions with the same mapping
#print(preds)
pd.crosstab(test['species'], preds, rownames=['actual'], colnames=['preds'])

importances = list(clf.feature_importances_)
importances = [float('%.5f' % (v * 100)) for v in importances]  # as percentages, 5 decimals
print(importances)
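With 100 entries the raw list is hard to scan; a small helper (not part of the original post) pairs each importance with its feature name and sorts:

# rank the 100 grid points by their contribution to the prediction
ranking = sorted(zip(xname, importances), key=lambda t: t[1], reverse=True)
print(ranking[:10])  # the ten most influential wind-field positions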

Finally, a note on the data format:

This is the example dataset bundled with sklearn (iris): each xdata row holds four values and corresponds to one y. The format shown is 'numpy.ndarray', but I later switched to plain lists without the numpy conversion, and it seemed to work fine as well. The data used are xdata, xname, ydata, and yname:

{'xdata': array([[5.1, 3.5, 1.4, 0.2],
       [4.9, 3. , 1.4, 0.2],
       [4.7, 3.2, 1.3, 0.2],
       [4.6, 3.1, 1.5, 0.2],
       [5. , 3.6, 1.4, 0.2],
       [5.4, 3.9, 1.7, 0.4],
       [4.6, 3.4, 1.4, 0.3],
       [6. , 3. , 4.8, 1.8],
       [6.9, 3.1, 5.4, 2.1],
       [6.7, 3.1, 5.6, 2.4],
        ......
       [6.9, 3.1, 5.1, 2.3],
       [5.8, 2.7, 5.1, 1.9],
       [6.8, 3.2, 5.9, 2.3],
       [6.7, 3.3, 5.7, 2.5],
       [6.7, 3. , 5.2, 2.3],
       [6.3, 2.5, 5. , 1.9],
       [6.5, 3. , 5.2, 2. ],
       [6.2, 3.4, 5.4, 2.3],
       [5.9, 3. , 5.1, 1.8]]),
 'ydata': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0,......, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1,......, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]),
 'yname': array(['setosa', 'versicolor', 'virginica'], dtype='<U10'), 
 'xname': ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'],
 'DESCR': 'Iris Plants Database\n====================\n\nNotes\n-----\nData Set Characteristics:\n    :Number of Instances: 150 (50 in each of three classes)\n    :Number of Attributes: 4 numeric, predictive attributes and the class\n    :Attribute Information:\n        - sepal length in cm\n        - sepal width in cm\n        - petal length in cm\n        - petal width in cm\n        - class:\n                - Iris-Setosa\n                - Iris-Versicolour\n                - Iris-Virginica\n    :Summary Statistics:\n\n    ============== ==== ==== ======= ===== ====================\n                    Min  Max   Mean    SD   Class Correlation\n    ============== ==== ==== ======= ===== ====================\n    sepal length:   4.3  7.9   5.84   0.83    0.7826\n    sepal width:    2.0  4.4   3.05   0.43   -0.4194\n    petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)\n    petal width:    0.1  2.5   1.20  0.76     0.9565  (high!)\n    ============== ==== ==== ======= ===== ====================\n\n    :Missing Attribute Values: None\n    :Class Distribution: 33.3% for each of 3 classes.\n    :Creator: R.A. Fisher\n    :Donor: Michael Marshall (MARSHALL%[email protected])\n    :Date: July, 1988\n\nThis is a copy of UCI ML iris datasets.\nhttp://archive.ics.uci.edu/ml/datasets/Iris\n\nThe famous Iris database, first used by Sir R.A Fisher\n\nThis is perhaps the best known database to be found in the\npattern recognition literature.  Fisher\'s paper is a classic in the field and\nis referenced frequently to this day.  (See Duda & Hart, for example.)  The\ndata set contains 3 classes of 50 instances each, where each class refers to a\ntype of iris plant.  One class is linearly separable from the other 2; the\nlatter are NOT linearly separable from each other.\n\nReferences\n----------\n   - Fisher,R.A. "The use of multiple measurements in taxonomic problems"\n     Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to\n     Mathematical Statistics" (John Wiley, NY, 1950).\n   - Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.\n     (Q327.D83) John Wiley & Sons.  ISBN 0-471-22361-1.  See page 218.\n   - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System\n     Structure and Classification Rule for Recognition in Partially Exposed\n     Environments".  IEEE Transactions on Pattern Analysis and Machine\n     Intelligence, Vol. PAMI-2, No. 1, 67-71.\n   - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule".  IEEE Transactions\n     on Information Theory, May 1972, 431-433.\n   - See also: 1988 MLC Proceedings, 54-64.  Cheeseman et al"s AUTOCLASS II\n     conceptual clustering system finds 3 classes in the data.\n   - Many, many more ...\n' }
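The dict above mirrors what sklearn's load_iris() returns, with the keys renamed (data -> xdata, target -> ydata, target_names -> yname, feature_names -> xname). A minimal sketch to reproduce it:

from sklearn.datasets import load_iris

iris = load_iris()
example = {'xdata': iris['data'], 'ydata': iris['target'],
           'yname': iris['target_names'], 'xname': iris['feature_names']}
print(example['xdata'][:3], example['yname'])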


Reposted from blog.csdn.net/u011537121/article/details/81543083