Data Aggregation and Group Operations

Grouping a dataset and applying a function to each group (whether an aggregation or a transformation) is an important part of the data analysis workflow. Once a dataset has been prepared, a common task is to compute group statistics or produce pivot tables. pandas provides a flexible and efficient groupby facility that lets you slice, dice, and summarize datasets in a natural way.

The groupby technique follows split-apply-combine. For example, a DataFrame can be split into groups along its rows (axis=0) or its columns (axis=1). A function is then applied to each group, producing a new value, and finally the results of all those applications are combined into the final result object.
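
To make the three stages concrete, here is a minimal hand-rolled sketch (the names df, pieces and means are illustrative, not from the original) of what df.groupby('key')['val'].mean() does internally:

import pandas as pd

df = pd.DataFrame({'key': ['a', 'a', 'b'], 'val': [1.0, 2.0, 10.0]})
# split: break the rows into per-key chunks
pieces = {name: chunk for name, chunk in df.groupby('key')}
# apply: run the function on each chunk independently
means = {name: chunk['val'].mean() for name, chunk in pieces.items()}
# combine: glue the per-group results into a single result object
print(pd.Series(means))   # a: 1.5, b: 10.0 -- same as df.groupby('key')['val'].mean()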

The grouping and aggregation process:

First, build a DataFrame with some custom data:

#coding=gbk
#data aggregation and group operations
import pandas as pd
import numpy as np
np.random.seed(666)
data = pd.DataFrame({'key1':['a','a','b','b','a'],
                     'key2':['one','two','one','two','one'],
                     'data1':np.random.randn(5),
                     'data2':np.random.randn(5)
                     })
print(data)
#       data1     data2 key1 key2
# 0  0.824188 -0.109497    a  one
# 1  0.479966  0.019028    a  two
# 2  1.173468 -0.943761    b  one
# 3  0.909048  0.640573    b  two
# 4 -0.571721 -0.786443    a  one
group1 = data['data1'].groupby(data['key1']).mean() # group by key1 (two groups, a and b), then aggregate to get each group's mean
print(group1)
# key1
# a    0.244144
# b    1.041258
# Name: data1, dtype: float64
#pass in multiple arrays at once
g2 = data['data1'].groupby([data['key1'], data['key2']]).mean()
print(g2)
# key1  key2
# a     one     0.126233
#       two     0.479966
# b     one     1.173468
#       two     0.909048
# Name: data1, dtype: float64

Grouping the data by two keys yields a Series with a hierarchical index made up of the unique key pairs:

g = g2.unstack()
print(g)
# key2       one       two
# key1                    
# a     0.126233  0.479966
# b     1.173468  0.909048
print('----')
print(g.loc['a'])
# key2
# one    0.126233
# two    0.479966
# Name: a, dtype: float64

Group keys can be any arrays of the right length:

states = np.array(['ohio','California','ohio','ohio','California'])
years = np.array([2005,2004,2004,2004,2005])
sy = data['data1'].groupby([states, years]).mean()
print(sy)
# California  2004    0.479966
#             2005   -0.571721
# ohio        2004    1.041258
#             2005    0.824188
# Name: data1, dtype: float64

Using column names as group keys

d1 = data.groupby('key1').mean(numeric_only=True)   # non-numeric columns such as key2 are excluded
print(d1)
#          data1     data2
# key1                    
# a     0.244144 -0.292304
# b     1.041258 -0.151594
d2 = data.groupby(['key1','key2']).mean()
print()
print(d2)
#               data1     data2
# key1 key2                    
# a    one   0.126233 -0.447970
#      two   0.479966  0.019028
# b    one   1.173468 -0.943761
#      two   0.909048  0.640573

The groupby size method

size = data.groupby(['key1', 'key2']).size()
print(size)
# key1  key2
# a     one     2
#       two     1
# b     one     1
#       two     1
# dtype: int64

#iterating over groups
#a GroupBy object supports iteration, generating a sequence of 2-tuples (group name, data chunk)

print('iterating over groups')
for name, group in data.groupby('key1'):    # key1 takes two values, a and b
    print(name)
    print(group)
# a
#       data1     data2 key1 key2
# 0  0.824188 -0.109497    a  one
# 1  0.479966  0.019028    a  two
# 4 -0.571721 -0.786443    a  one
# b
#       data1     data2 key1 key2
# 2  1.173468 -0.943761    b  one
# 3  0.909048  0.640573    b  two
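
When grouping by multiple keys, the first element of each tuple is itself a tuple of the key values; a short sketch with the same data:

# with multiple keys, 'name' is a tuple such as ('a', 'one')
for (k1, k2), group in data.groupby(['key1', 'key2']):
    print((k1, k2))
    print(group)
# ('a', 'one') prints rows 0 and 4, ('a', 'two') prints row 1, and so on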

Turning the group chunks into a dict

print('-----')
pieces = dict(list(data.groupby('key1')))   # build a dict of the pieces
print(pieces['b'])
#       data1     data2 key1 key2
# 2  1.173468 -0.943761    b  one
# 3  0.909048  0.640573    b  two
#groupby groups along axis=0 by default, but other axes can be used as well; for example, the columns can be grouped by dtype
print(data.dtypes)  
# data1    float64
# data2    float64
# key1      object
# key2      object
# dtype: object
groupd = data.groupby(data.dtypes, axis = 1)
print(dict(list(groupd)))   # grouped by dtype
# {dtype('O'):   key1 key2
# 0    a  one
# 1    a  two
# 2    b  one
# 3    b  two
# 4    a  one, dtype('float64'):       data1     data2
# 0  0.824188 -0.109497
# 1  0.479966  0.019028
# 2  1.173468 -0.943761
# 3  0.909048  0.640573
# 4 -0.571721 -0.786443}

Selecting a column or a subset of columns

Indexing a GroupBy object created from a DataFrame with a column name (a single string) or an array of column names has the effect of aggregating only those columns.

#selecting one column or a group of columns
group1 = data.groupby('key1')[['data1']].mean()    # double brackets return a DataFrame
print(group1)
#          data1
# key1          
# a     0.244144
# b     1.041258
#to compute the mean of only the data2 column and get the result as a DataFrame
group_d = data.groupby(['key1','key2'])[['data2']].mean()
print(group_d)
#               data2        # since the result is a DataFrame, values can be retrieved through the usual indexing
# key1 key2          
# a    one  -0.447970
#      two   0.019028
# b    one  -0.943761
#      two   0.640573
print(group_d.loc['a'])    # .ix is deprecated; use .loc for label-based indexing
#          data2
# key2          
# one  -0.447970
# two   0.019028
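
For comparison, indexing the GroupBy object with a single string instead of a list returns a grouped Series, so the same aggregation comes back as a Series with a hierarchical index; a quick sketch:

s = data.groupby(['key1', 'key2'])['data2'].mean()  # single brackets -> Series
print(s['a'])   # partial indexing on the outer level
# key2
# one   -0.447970
# two    0.019028
# Name: data2, dtype: float64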

Grouping with dicts and Series

np.random.seed(666)
people = pd.DataFrame(np.random.randn(5,5),
                   columns = list('abcde'),
                   index = ['Joe','Steve','Wes', 'Jim', 'Travis']
                   )
people.loc['Wes', ['b', 'c']] = np.nan  # insert two NA values (.ix is deprecated)
print(people)
#                a         b         c         d         e
# Joe     0.824188  0.479966  1.173468  0.909048 -0.571721
# Steve  -0.109497  0.019028 -0.943761  0.640573 -0.786443
# Wes     0.608870       NaN       NaN -0.736918 -0.298733
# Jim    -0.460587 -1.088793 -0.575771 -1.682901  0.229185
# Travis -1.756625  0.844633  0.277220  0.852902  0.194600

mapping = {'a':'red', 'b':'red', 'c':'blue','d':'blue','e':'yellow','f':'orange'}  # 'f' has no matching column above; unused keys are fine
by_color = people.groupby(mapping, axis=1).sum()    # group the columns using the mapping
print(by_color)
#             blue       red    yellow
# Joe     2.082516  1.304154 -0.571721
# Steve  -0.303188 -0.090469 -0.786443
# Wes    -0.736918  0.608870 -0.298733
# Jim    -2.258672 -1.549380  0.229185
# Travis  1.130121 -0.911993  0.194600
#a Series has the same functionality; it can be viewed as a fixed-size mapping
map_series = pd.Series(mapping)
group_series = people.groupby(map_series, axis=1).count()   # pass the Series this time
print(group_series)
#         blue  red  yellow
# Joe        2    2       1
# Steve      2    2       1
# Wes        1    1       1
# Jim        2    2       1
# Travis     2    2       1

Grouping with functions

#group by the length of the names
name_len = people.groupby(len).sum()
print(name_len)
#           a         b         c         d         e
# 3  0.972471 -0.608827  0.597697 -1.510771 -0.641269
# 5 -0.109497  0.019028 -0.943761  0.640573 -0.786443
# 6 -1.756625  0.844633  0.277220  0.852902  0.194600

Grouping by index levels

#the most convenient aspect of hierarchically indexed datasets is the ability to aggregate by index level
columns = pd.MultiIndex.from_arrays([['US','US','CHINA','JP','JP'],
                                     [1,3,5,1,3]],
                                     names = ['city','tenor']
                                     )
hier_data = pd.DataFrame(np.random.randn(4,5), columns = columns)
print(hier_data)
# city         US               CHINA        JP          
# tenor         1         3         5         1         3
# 0      1.310638  1.543844 -0.529048 -0.656472 -0.201506
# 1     -0.700616  0.687138 -0.026076 -0.829758  0.296554
# 2     -0.312680 -0.611301 -0.821752  0.897123  0.136079
# 3     -0.258655  1.110766 -0.188424 -0.041489 -0.984792
city_count = hier_data.groupby(level='city', axis=1).count()
print(city_count)
# city  CHINA  JP  US
# 0         1   2   2
# 1         1   2   2
# 2         1   2   2
# 3         1   2   2

2. Data aggregation

data = pd.DataFrame({'key1':['a','a','b','b','a'],
                     'key2':['one','two','one','two','one'],
                     'data1':np.random.randn(5),
                     'data2':np.random.randn(5)
                     })
print(data)
#       data1     data2 key1 key2
# 0  0.824188 -0.109497    a  one
# 1  0.479966  0.019028    a  two
# 2  1.173468 -0.943761    b  one
# 3  0.909048  0.640573    b  two
# 4 -0.571721 -0.786443    a  one
groups = data.groupby('key1')
print(groups['data1'].quantile(0.9))    # quantile computes sample quantiles; it is a Series method
# key1
# a    0.755344
# b    1.147026
# Name: data1, dtype: float64

To use your own aggregation functions, pass them to the aggregate or agg method:

def peak_to_peak(arr):
    return arr.max()-arr.min()
print(groups['data2'].agg(peak_to_peak)) 
# key1
# a    0.805471
# b    1.584334
# Name: data2, dtype: float64
print(groups.describe())    # describe also works on grouped data
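
agg also accepts a list of functions (built-in aggregations as strings, or callables such as peak_to_peak above), producing one result column per function; a short sketch reusing the same grouped object:

multi = groups['data1'].agg(['mean', 'std', peak_to_peak])  # one column per aggregation
print(multi)    # a DataFrame with columns mean, std, peak_to_peak, indexed by key1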

Group-wise operations and transformations

Using the transform and apply methods

#       data1     data2 key1 key2
# 0  0.824188 -0.109497    a  one
# 1  0.479966  0.019028    a  two
# 2  1.173468 -0.943761    b  one
# 3  0.909048  0.640573    b  two
# 4 -0.571721 -0.786443    a  one
# to add a column to the DataFrame holding the group mean for each row's key, one approach is to aggregate first and then merge
k1_means = data.groupby('key1').mean(numeric_only=True).add_prefix('mean_')
print(k1_means)
#       mean_data1  mean_data2
# key1                        
# a       0.244144   -0.292304
# b       1.041258   -0.151594
merge_data = pd.merge(data, k1_means, left_on = 'key1', right_index=True)   # merge the group means back onto the original rows
print(merge_data)
#       data1     data2 key1 key2  mean_data1  mean_data2
# 0  0.824188 -0.109497    a  one    0.244144   -0.292304
# 1  0.479966  0.019028    a  two    0.244144   -0.292304
# 4 -0.571721 -0.786443    a  one    0.244144   -0.292304
# 2  1.173468 -0.943761    b  one    1.041258   -0.151594
# 3  0.909048  0.640573    b  two    1.041258   -0.151594

#using the people data, call transform on the groupby
#                a         b         c         d         e
# Joe     0.824188  0.479966  1.173468  0.909048 -0.571721
# Steve  -0.109497  0.019028 -0.943761  0.640573 -0.786443
# Wes     0.608870       NaN       NaN -0.736918 -0.298733
# Jim    -0.460587 -1.088793 -0.575771 -1.682901  0.229185
# Travis -1.756625  0.844633  0.277220  0.852902  0.194600

key = ['one','two','one','two','one']
by_key = people.groupby(key).mean()
print(by_key)
#             a         b         c         d         e
# one -0.107856  0.662299  0.725344  0.341677 -0.225285
# two -0.285042 -0.534882 -0.759766 -0.521164 -0.278629
transform_key = people.groupby(key).transform(np.mean)  # transform applies a function to each group and places the results in the appropriate locations
print(transform_key)
#                a         b         c         d         e
# Joe    -0.107856  0.662299  0.725344  0.341677 -0.225285
# Steve  -0.285042 -0.534882 -0.759766 -0.521164 -0.278629
# Wes    -0.107856  0.662299  0.725344  0.341677 -0.225285
# Jim    -0.285042 -0.534882 -0.759766 -0.521164 -0.278629
# Travis -0.107856  0.662299  0.725344  0.341677 -0.225285

def demean(arr):
    return arr - arr.mean()
demeaned = people.groupby(key).transform(demean)
print(demeaned)
#                a         b         c         d         e
# Joe     0.932044 -0.182333  0.448124  0.567371 -0.346437
# Steve   0.175545  0.553911 -0.183995  1.161737 -0.507814
# Wes     0.716726       NaN       NaN -1.078595 -0.073448
# Jim    -0.175545 -0.553911  0.183995 -1.161737  0.507814
# Travis -1.648770  0.182333 -0.448124  0.511224  0.419884
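
Whereas transform must return either a scalar or an array the same size as the group, apply is more general: the passed function may return any pandas object, and the per-group results are concatenated. A minimal sketch (top is an illustrative helper, not from the original) that keeps each group's largest row by column a:

def top(group, col='a'):
    # return the single row with the largest value in the given column
    return group.sort_values(col, ascending=False)[:1]
print(people.groupby(key).apply(top))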

Quantile and bucket analysis

print('quantiles')
frame = pd.DataFrame({'data1':np.random.randn(1000),
                      'data2':np.random.randn(1000)
                      })
factor = pd.cut(frame.data1, 4)
#the factor (Categorical) object returned by cut can be passed directly to groupby
print(factor[:10])
def get_stats(group):
    return {'max':group.max(),'min':group.min(),'count':group.count(),
            'mean':group.mean()}
grouped = frame.data2.groupby(factor)
gro = grouped.apply(get_stats).unstack()    # split data1 into 4 buckets, then compute 4 statistics of data2 within each bucket
print(gro)
#                   count       max      mean       min
# data1                                                
# (-3.202, -1.548]   65.0  2.279159  0.023773 -1.777999
# (-1.548, 0.099]   496.0  3.159843 -0.000178 -3.438710
# (0.099, 1.746]    402.0  2.744144 -0.048409 -2.652843
# (1.746, 3.394]     37.0  1.426442  0.051988 -2.210856

grouping = pd.qcut(frame.data1, 10, labels = False )
grouped1 = frame.data2.groupby(grouping)
gro2 = grouped1.apply(get_stats).unstack()
print(gro2)
#        count       max      mean       min
# data1                                     
# 0      100.0  3.159843  0.024953 -1.863054
# 1      100.0  3.006112 -0.044596 -2.744542
# 2      100.0  2.134254  0.016900 -2.827021
# 3      100.0  2.144788 -0.003929 -3.438710
# 4      100.0  2.106217  0.134829 -2.465120
# 5      100.0  1.611080 -0.215401 -2.773363
# 6      100.0  2.205833 -0.033742 -2.652843
# 7      100.0  2.744144  0.060229 -2.379128
# 8      100.0  2.318960 -0.007054 -2.165005
# 9      100.0  1.849263 -0.092987 -2.471711
Some pandas applications
In [63]:

import numpy as np
import pandas as pd
np.random.seed(666)
df = pd.DataFrame(np.random.randn(7, 3))
df.iloc[:2,1] = np.nan
df.dropna(thresh=3)
Out[63]:
          0         1         2
2  0.019028 -0.943761  0.640573
3 -0.786443  0.608870 -0.931012
4  0.978222 -0.736918 -0.298733
5 -0.460587 -1.088793 -0.575771
6 -1.682901  0.229185 -1.756625
In [64]:

df.drop_duplicates()
Out[64]:
          0         1         2
0  0.824188       NaN  1.173468
1  0.909048       NaN -0.109497
2  0.019028 -0.943761  0.640573
3 -0.786443  0.608870 -0.931012
4  0.978222 -0.736918 -0.298733
5 -0.460587 -1.088793 -0.575771
6 -1.682901  0.229185 -1.756625

# Transforming data with functions and mappings
In [71]:

data = pd.DataFrame({'food':['bacon','pulled pork', 'bacon','pastrami','corned beef','Bacon','pastrami',
                            'honey ham','nova lox'
                            ],'ounces':[4,3,12,6,7.5,8,3,5,6]})
data
Out[71]:
          food  ounces
0        bacon     4.0
1  pulled pork     3.0
2        bacon    12.0
3     pastrami     6.0
4  corned beef     7.5
5        Bacon     8.0
6     pastrami     3.0
7    honey ham     5.0
8     nova lox     6.0

Add a column indicating the animal each kind of meat comes from
In [74]:

meat_from_animal = {'bacon':'pig',
                   'pulled pork':'pig',
                    'pastrami':'cow',
                    'corned beef':'cow',
                    'honey ham':'pig',
                    'nova lox':'salmon'
                   }
In [72]:

lowercased = data['food'].str.lower()
lowercased
Out[72]:
0          bacon
1    pulled pork
2          bacon
3       pastrami
4    corned beef
5          bacon
6       pastrami
7      honey ham
8       nova lox
Name: food, dtype: object
Using the map method with the mapping
In [75]:

data['animal'] = lowercased.map(meat_from_animal)
data
Out[75]:
          food  ounces  animal
0        bacon     4.0     pig
1  pulled pork     3.0     pig
2        bacon    12.0     pig
3     pastrami     6.0     cow
4  corned beef     7.5     cow
5        Bacon     8.0     pig
6     pastrami     3.0     cow
7    honey ham     5.0     pig
8     nova lox     6.0  salmon
Or do it in one step with a function
In [77]:

data['animal'] = data['food'].map(lambda x: meat_from_animal[x.lower()])
data
Out[77]:
          food  ounces  animal
0        bacon     4.0     pig
1  pulled pork     3.0     pig
2        bacon    12.0     pig
3     pastrami     6.0     cow
4  corned beef     7.5     cow
5        Bacon     8.0     pig
6     pastrami     3.0     cow
7    honey ham     5.0     pig
8     nova lox     6.0  salmon
Replacing values
In [78]:

data = pd.Series([1, -999, -2,-999,-1000,3])
data
Out[78]:
0       1
1    -999
2      -2
3    -999
4   -1000
5       3
dtype: int64
In [83]:

data.replace(-999,np.nan)
Out[83]:
0       1.0
1       NaN
2      -2.0
3       NaN
4   -1000.0
5       3.0
dtype: float64
In [84]:

data.replace([-999,-1000],[0,np.nan])
Out[84]:
0    1.0
1    0.0
2   -2.0
3    0.0
4    NaN
5    3.0
dtype: float64
In [85]:

data.replace({-999:np.nan,-1000:10})
Out[85]:
0     1.0
1     NaN
2    -2.0
3     NaN
4    10.0
5     3.0
dtype: float64
Discretization and binning: cut, value_counts, codes, categories
In [86]:

ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
bins = [18,25,35,60,100]
data = pd.cut(ages, bins)
data
Out[86]:
[(18, 25], (18, 25], (18, 25], (25, 35], (18, 25], ..., (25, 35], (60, 100], (35, 60], (35, 60], (25, 35]]
Length: 12
Categories (4, interval[int64]): [(18, 25] < (25, 35] < (35, 60] < (60, 100]]
In [88]:

data.codes
Out[88]:
array([0, 0, 0, 1, 0, 0, 2, 1, 3, 2, 2, 1], dtype=int8)
In [90]:

pd.value_counts(data)
Out[90]:
(18, 25]     5
(35, 60]     3
(25, 35]     3
(60, 100]    1
dtype: int64
Setting names for the bins
In [92]:

group_names = ['Youth','YoungAdult','Middleage','Senior']
data = pd.cut(ages, bins , labels = group_names)
data
Out[92]:
[Youth, Youth, Youth, YoungAdult, Youth, ..., YoungAdult, Senior, Middleage, Middleage, YoungAdult]
Length: 12
Categories (4, object): [Youth < YoungAdult < Middleage < Senior]
In [93]:

pd.value_counts(data)
Out[93]:
Youth         5
Middleage     3
YoungAdult    3
Senior        1
dtype: int64
qcut bins by sample quantiles, so the bucket boundaries depend on the data's distribution; it cuts by percentage:
In [94]:

np.random.seed(666)
data = np.random.randn(1000)
cats = pd.qcut(data, 4)
cats
Out[94]:
[(0.688, 3.394], (-0.0458, 0.688], (0.688, 3.394], (0.688, 3.394], (-0.679, -0.0458], ..., (-0.0458, 0.688], (0.688, 3.394], (-0.679, -0.0458], (-3.197, -0.679], (-0.679, -0.0458]]
Length: 1000
Categories (4, interval[float64]): [(-3.197, -0.679] < (-0.679, -0.0458] < (-0.0458, 0.688] < (0.688, 3.394]]
In [95]:

pd.value_counts(cats)
Out[95]:
(0.688, 3.394]       250
(-0.0458, 0.688]     250
(-0.679, -0.0458]    250
(-3.197, -0.679]     250
dtype: int64
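
Like cut, qcut also accepts explicit quantile boundaries (numbers between 0 and 1 inclusive) instead of a bucket count, which gives unequally sized buckets; a short sketch with the same 1000 points:

uneven = pd.qcut(data, [0, 0.1, 0.5, 0.9, 1.0])  # 10%/40%/40%/10% buckets
pd.value_counts(uneven)   # counts of 100, 400, 400 and 100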

Hierarchical indexing lets us have multiple (two or more) index levels on a single axis

print('hierarchical indexing')
np.random.seed(666)
data = pd.Series(np.random.randn(10), index = [['a','a','a','b','b','b','c','c','d','d'],
                                               [1,2,3,1,2,3,1,2,2,3]
                                               ])
print(data)
# a  1    0.824188
#    2    0.479966
#    3    1.173468
# b  1    0.909048
#    2   -0.571721
#    3   -0.109497
# c  1    0.019028
#    2   -0.943761
# d  2    0.640573
#    3   -0.786443
# dtype: float64

#selecting subsets of the data
print(data['a'])
# 1    0.824188
# 2    0.479966
# 3    1.173468
# dtype: float64
print(data.loc[['a','b']])   # .ix is deprecated; use .loc
print(data[:,2])    # select at the inner level
# a    0.479966
# b   -0.571721
# c   -0.943761
# d    0.640573
# dtype: float64

#unstack converts this Series into a DataFrame
print(data.unstack())
#           1         2         3
# a  0.824188  0.479966  1.173468
# b  0.909048 -0.571721 -0.109497
# c  0.019028 -0.943761       NaN
# d       NaN  0.640573 -0.786443
print(data.unstack().stack()) # stack is the inverse of unstack

#with a DataFrame, either axis can have a hierarchical index
frame = pd.DataFrame(np.arange(12).reshape((4,3)), index = [['a','a','b','b'],[1,2,1,2]],
                     columns = [['Ohio','Ohio','Colorado'],['Green','Red','Green']]
                     )
print(frame)
#      Ohio     Colorado
#     Green Red    Green
# a 1     0   1        2
#   2     3   4        5
# b 1     6   7        8
#   2     9  10       11
frame.index.names = ['key1','key2']
frame.columns.names = ['state','color']
print(frame)
# print(frame.key1)    # once row and column names are set they appear in the console output, but don't confuse index names with axis labels
# state      Ohio     Colorado
# color     Green Red    Green
# key1 key2                   
# a    1        0   1        2
#      2        3   4        5
# b    1        6   7        8
#      2        9  10       11

Reordering levels: swaplevel takes two level numbers or names and returns a new object with those levels interchanged; the data itself is unchanged.

frame  = frame.swaplevel('key1','key2')
print(frame)
# state      Ohio     Colorado
# color     Green Red    Green
# key2 key1                   
# 1    a        0   1        2
# 2    a        3   4        5
# 1    b        6   7        8
# 2    b        9  10       11
frame = frame.sort_index(level=0)   # sortlevel is deprecated; sort_index(level=...) replaces it
print(frame)
# state      Ohio     Colorado
# color     Green Red    Green
# key2 key1                   
# 1    a        0   1        2
#      b        6   7        8
# 2    a        3   4        5
#      b        9  10       11
#summary statistics by level
sum_key2 = frame.groupby(level='key2').sum()   # DataFrame.sum(level=...) is deprecated; group by the level instead
print(sum_key2)
# state  Ohio     Colorado
# color Green Red    Green
# key2                    
# 1         6   8       10
# 2        12  14       16
sum_color = frame.groupby(level='color', axis=1).sum()
print(sum_color)
# color      Green  Red
# key2 key1            
# 1    a         2    1
#      b        14    7
# 2    a         8    4
#      b        20   10

Using a DataFrame's columns

frame = pd.DataFrame({'a':range(7),'b':range(7,0,-1),
                      'c':['one','one','one','two','two','two','two'],
                      'd':[0,1,2,0,1,2,3]
                      })
print(frame)
#    a  b    c  d
# 0  0  7  one  0
# 1  1  6  one  1
# 2  2  5  one  2
# 3  3  4  two  0
# 4  4  3  two  1
# 5  5  2  two  2
# 6  6  1  two  3
frame2 = frame.set_index(['c','d']) 
#set_index turns columns c and d into the row index and returns a new DataFrame; c and d are removed from the frame,
#but they can be kept with drop=False

print(frame2)
#        a  b
# c   d      
# one 0  0  7
#     1  1  6
#     2  2  5
# two 0  3  4
#     1  4  3
#     2  5  2
#     3  6  1

frame3 = frame.set_index(['c','d'],drop= False) # keep the c and d columns
print(frame3)
#        a  b    c  d
# c   d              
# one 0  0  7  one  0
#     1  1  6  one  1
#     2  2  5  one  2
# two 0  3  4  two  0
#     1  4  3  two  1
#     2  5  2  two  2
#     3  6  1  two  3
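
The inverse of set_index is reset_index, which moves the index levels back into ordinary columns; a quick check on frame2:

print(frame2.reset_index())
#      c  d  a  b
# 0  one  0  0  7
# 1  one  1  1  6
# ...   (remaining rows, with a fresh integer index)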

Pivot tables: pivot_table
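
pivot_table with its default mean aggregation is essentially a groupby followed by an unstack; a minimal sketch, rebuilding the earlier key1/key2 frame rather than reusing a variable name that has since been reassigned:

df = pd.DataFrame({'key1': ['a', 'a', 'b', 'b', 'a'],
                   'key2': ['one', 'two', 'one', 'two', 'one'],
                   'data1': np.random.randn(5)})
pivoted = df.pivot_table(values='data1', index='key1', columns='key2')
# equivalent to df.groupby(['key1', 'key2'])['data1'].mean().unstack()
print(pivoted)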


Reposted from blog.csdn.net/qq_40587575/article/details/81169502