Commonly used pandas statements

pandas is very powerful: it supports SQL-like insert, delete, select and update operations on data and ships with a rich set of data-processing functions.

It also supports time-series analysis and flexible handling of missing data.
The basic data structures in pandas are Series and DataFrame.
A Series is a sequence, similar to a one-dimensional array.
A DataFrame is like a two-dimensional table, similar to a two-dimensional array; each of its columns is a Series.
To locate elements in a Series, pandas provides the Index object. Every Series carries an Index that labels its elements; the index values do not have to be numbers, they can also be letters, Chinese characters and so on. It plays a role similar to a primary key in SQL.

A DataFrame is a collection of Series that share the same Index (it is essentially a container of Series); each Series has a unique column label that identifies it.
>>> import pandas as pd 
>>> s=pd.Series([1,2,3],index=['a','b','c'])
>>> s
a    1
b    2
c    3
>>> d=pd.DataFrame([[1,2,3],[4,5,6]],columns=['a','b','c'])
>>> d.head()
   a  b  c
0  1  2  3
1  4  5  6
>>> d.describe()
             a        b        c
count  2.00000  2.00000  2.00000
mean   2.50000  3.50000  4.50000
std    2.12132  2.12132  2.12132
min    1.00000  2.00000  3.00000
25%    1.75000  2.75000  3.75000
50%    2.50000  3.50000  4.50000
75%    3.25000  4.25000  5.25000
max    4.00000  5.00000  6.00000
>>> pd.read_excel(r'C:\Users\someone\Desktop\data.xlsx','Sheet1')
               id       int  no    4       5         6   7    8
0       elec_code   varchar  no   50    电子表码   varchar  no  100
1         user_id   varchar  no   50    用户编号   varchar  no  100
2       user_name   varchar  no   50    用户名称   varchar  no  100
 
Writing to Excel
with pd.ExcelWriter('shanghai_%d.xlsx' % iii) as writer:
    for i, j in dddit:
        j.to_excel(writer, sheet_name=str(i))
        # j is a DataFrame; dddit yields (sheet key, DataFrame) pairs
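A self-contained sketch of the same pattern, assuming a dict of DataFrames keyed by sheet name (all names below are illustrative):

import pandas as pd

# illustrative data: one DataFrame per sheet
sheets = {
    'Sheet1': pd.DataFrame({'a': [1, 2], 'b': [3, 4]}),
    'Sheet2': pd.DataFrame({'a': [5, 6], 'b': [7, 8]}),
}

with pd.ExcelWriter('output.xlsx') as writer:        # hypothetical output file
    for name, frame in sheets.items():
        frame.to_excel(writer, sheet_name=name)      # one sheet per DataFrame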
 
Locating elements in a DataFrame
In [14]: df.head()
Out[14]:
                   A         B         C         D
2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
2013-01-02  1.212112 -0.173215  0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
2013-01-04  0.721555 -0.706771 -1.039575  0.271860
2013-01-05 -0.424972  0.567020  0.276232 -1.087401
 
 
dates=['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04', '2013-01-05', '2013-01-06']
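These notes never show how df was built; it is presumably a random DataFrame along the lines of the pandas introductory tutorial. A minimal sketch reusing the dates above (the random values will differ from the output shown):

import numpy as np
import pandas as pd

# dates is the list of date strings defined above
df = pd.DataFrame(np.random.randn(6, 4),
                  index=pd.to_datetime(dates),       # turn the strings into a DatetimeIndex
                  columns=['A', 'B', 'C', 'D'])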


df[0:3] slices rows by position (axis=0); like a Python list, the slice is left-closed, right-open:

In [24]: df[0:3]
Out[24]:
                   A         B         C         D
2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
2013-01-02  1.212112 -0.173215  0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929  1.071804

Slicing by index label includes both endpoints:

In [25]: df['20130102':'20130104']
Out[25]:
                   A         B         C         D
2013-01-02  1.212112 -0.173215  0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
2013-01-04  0.721555 -0.706771 -1.039575  0.271860
df['A'] selects a single column, equivalent to df.A:

Out[23]:
2013-01-01    0.469112
2013-01-02    1.212112
2013-01-03   -0.861849
2013-01-04    0.721555
2013-01-05   -0.424972
2013-01-06   -0.673690
Freq: D, Name: A, dtype: float64


df.loc[] selects data by index value (label).
 
df.loc[dates[0]]
df.loc[:,['A','B']]
df.loc['20130102':'20130104',['A','B']]
                   A         B
2013-01-02  1.212112 -0.173215
2013-01-03 -0.861849 -2.104569
2013-01-04  0.721555 -0.706771
df.loc['20130102',['A','B']]
df.loc[dates[0],'A']
Out[30]: 0.4691122

df.iloc[] selects data by integer position rather than by index label.
With a single argument it returns the row at that position; with two arguments (x, y) it returns the subset at row position x and column position y.
The colon : works like list slicing: left-closed, right-open.
d.iloc[1:2,[1,2]]   
df.iloc[3:5,0:2]
df.iloc[1:3,:]
df.iloc[1,1]
df.iloc[[1,2,4],[0,2]] returns the subset at row positions 1, 2, 4 and column positions 0, 2
df.iloc[3] returns the row at position 3
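A quick sketch that can be run directly, reusing the small DataFrame d from the beginning of these notes:

d = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=['a', 'b', 'c'])

d.iloc[1, 1]             # 5 -> row position 1, column position 1
d.iloc[:, 0:2]           # all rows, the first two columns ('a' and 'b')
d.iloc[[0, 1], [0, 2]]   # rows 0 and 1, columns 0 and 2
d.loc[0, 'a']            # 1 -> by label: row label 0, column label 'a'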



d.index returns the index
d.dtypes returns the data type of each column, labelled by column name
 
Filling missing values
d=d.fillna('_') replaces NA values with '_' (fillna returns a new object, hence the reassignment)
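A minimal sketch of fillna on a throwaway DataFrame (the name d2 is illustrative):

import numpy as np
import pandas as pd

d2 = pd.DataFrame({'a': [1, np.nan, 3]})
d2.fillna('_')               # returns a new DataFrame with NaN replaced by '_'
d2.fillna(0, inplace=True)   # or modify d2 in place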

Sorting

Sort by index; the default is the row index (axis=0), ascending:

df.sort_index(axis=0,ascending=True) 

Sort by values:

df.sort_values(by,axis=0,ascending=True)

by can be a single column label or a list of column labels.
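A self-contained sketch (the DataFrame t is made up for illustration):

import pandas as pd

t = pd.DataFrame({'A': [3, 1, 2], 'B': [9, 8, 7]}, index=['c', 'a', 'b'])
t.sort_index()                                   # rows ordered by index label: a, b, c
t.sort_values(by='A')                            # rows ordered by column A: 1, 2, 3
t.sort_values(by=['A', 'B'], ascending=False)    # multiple keys, descending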

 
Combining DataFrames
 
merge works like a two-table join in SQL
pd1=pd.DataFrame(list1,columns=['userid',])
pd2=pd.DataFrame(list2,columns=['r','userid2','filialename','username','useraddress',])
pd3=pd.merge(pd1,pd2,how='left',left_on='userid',right_on='userid2')
how is the join type: 'left', 'right' or 'inner'
left_on/right_on use the left table's userid column and the right table's userid2 column as the join key, i.e. userid = userid2
To derive a new column of corresponding values from an existing column (a lookup), merge the two DataFrames; see the sketch below.
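A self-contained sketch of that lookup pattern (column names and values are made up for illustration):

import pandas as pd

users  = pd.DataFrame({'userid': [1, 2, 3]})
lookup = pd.DataFrame({'userid2': [1, 2], 'username': ['Alice', 'Bob']})

# left join: every row of users is kept; userid 3 has no match, so its username is NaN
pd.merge(users, lookup, how='left', left_on='userid', right_on='userid2')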
 
concat simply concatenates DataFrames
dfs=[pd1,pd2,pd3]
datas=pd.concat(dfs,axis=1)
With axis=1 the frames are joined side by side; datas.columns becomes ['userid','r','userid2','filialename','username','useraddress','userid','r','userid2','filialename','username','useraddress']
With axis=0 the frames are stacked vertically, equivalent to SQL UNION ALL
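A minimal sketch with two throwaway DataFrames:

import pandas as pd

a = pd.DataFrame({'x': [1, 2]})
b = pd.DataFrame({'y': [3, 4]})

pd.concat([a, b], axis=1)   # side by side: columns x and y
pd.concat([a, a], axis=0)   # stacked vertically, like UNION ALL (index labels repeat)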

 

 
Counting value frequencies
s=datas['filialename']
s.value_counts()
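For example, on a throwaway Series (datas itself is not defined in these notes):

import pandas as pd

s2 = pd.Series(['beijing', 'shanghai', 'beijing', 'beijing'])
s2.value_counts()
# beijing     3
# shanghai    1
# dtype: int64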

 

groupby: grouping
Similar to SQL's GROUP BY; to group by multiple columns, pass a list of column names.
grouped=data.groupby('Fmfiliale')
#default is axis=0, i.e. the rows are grouped
print(grouped.groups)
#the result looks like:
#{'118.190.41.176:water': Int64Index([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
#            17, 18, 19, 20, 21, 22, 23, 24],
#            dtype='int64'),
# '120.237.48.43:water': Int64Index([25], dtype='int64'),
# '222.245.76.42:water': Int64Index([26, 27, 28, 29], dtype='int64')}
for name, group in grouped:
    print(name, group)
Iterating over the grouped object yields (name, group) pairs, where group is a DataFrame; in the example above, name is that group's Fmfiliale value.
Rows whose group key is missing (NA) are automatically excluded by groupby.
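A self-contained sketch with made-up data, showing the points above (multiple groups, iteration, and the NA row being dropped):

import pandas as pd

data2 = pd.DataFrame({'Fmfiliale': ['x', 'x', 'y', None],
                      'val':       [1,   2,   3,   4]})
grouped2 = data2.groupby('Fmfiliale')

grouped2.groups          # mapping group key -> row index; the row with a missing key is dropped
grouped2['val'].sum()    # per-group aggregation, like SQL GROUP BY + SUM

for name, group in grouped2:
    print(name)          # the group key: 'x', then 'y'
    print(group)         # the sub-DataFrame for that key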

 

Detailed parameters of read_excel

pandas.read_excel(io, sheetname=0, header=0, skiprows=None, skip_footer=0, index_col=None, parse_cols=None, parse_dates=False, date_parser=None, na_values=None, thousands=None, convert_float=True, has_index_names=None, converters=None, engine=None, **kwds)

Read an Excel table into a pandas DataFrame

Parameters:

io : string, file-like object, pandas ExcelFile, or xlrd workbook.

The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. For instance, a local file could be file://localhost/path/to/workbook.xlsx

sheetname : string, int, mixed list of strings/ints, or None, default 0  (which sheet(s) to read, counted from 0)

Strings are used for sheet names, Integers are used in zero-indexed sheet positions.

Lists of strings/integers are used to request multiple sheets.

Specify None to get all sheets.

str|int -> DataFrame is returned. list|None -> Dict of DataFrames is returned, with keys representing sheets.

Available Cases

  • Defaults to 0 -> 1st sheet as a DataFrame
  • 1 -> 2nd sheet as a DataFrame
  • “Sheet1” -> 1st sheet as a DataFrame
  • [0,1,”Sheet5”] -> 1st, 2nd & 5th sheet as a dictionary of DataFrames
  • None -> All sheets as a dictionary of DataFrames

header : int, list of ints, default 0  (which row to use as the header; 0-indexed, counted after skipped rows)

Row (0-indexed) to use for the column labels of the parsed DataFrame. If a list of integers is passed those row positions will be combined into a MultiIndex

skiprows : list-like  (which rows to skip at the beginning)

Rows to skip at the beginning (0-indexed)

skip_footer : int, default 0  (how many rows to skip at the end)

Rows at the end to skip (0-indexed)

index_col : int, list of ints, default None  (which column to use as the index; counted from 0)

Column (0-indexed) to use as the row labels of the DataFrame. Pass None if there is no such column. If a list is passed, those columns will be combined into a MultiIndex

converters : dict, default None  (keys are column labels, values are functions; each function is applied to that column's values and the result kept)

Dict of functions for converting values in certain columns. Keys can either be integers or column labels, values are functions that take one input argument, the Excel cell content, and return the transformed content.

parse_cols : int or list, default None  (which columns to parse; 'A:E' means columns A through E inclusive)

  • If None then parse all columns,
  • If int then indicates last column to be parsed
  • If list of ints then indicates list of column numbers to be parsed
  • If string then indicates comma separated list of column names and column ranges (e.g. “A:E” or “A,C,E:F”)

na_values : list-like, default None  (any value in this list is read as NA)

List of additional strings to recognize as NA/NaN

thousands : str, default None

Thousands separator for parsing string columns to numeric. Note that this parameter is only necessary for columns stored as TEXT in Excel, any numeric columns will automatically be parsed, regardless of display format.

keep_default_na : bool, default True

If na_values are specified and keep_default_na is False the default NaN values are overridden, otherwise they're appended to the default NaN values

verbose : boolean, default False

Indicate number of NA values placed in non-numeric columns

engine: string, default None

If io is not a buffer or path, this must be set to identify io. Acceptable values are None or xlrd

convert_float : boolean, default True

convert integral floats to int (i.e., 1.0 –> 1). If False, all numeric data will be read in as floats: Excel stores all numbers as floats internally

has_index_names : boolean, default None

DEPRECATED: for version 0.17+ index names will be automatically inferred based on index_col. To read Excel output from 0.16.2 and prior that had saved index names, use True.

Returns:

parsed : DataFrame or Dict of DataFrames

DataFrame from the passed in Excel file. See notes in sheetname argument for more information on when a Dict of Dataframes is returned.
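Putting several of these parameters together, a sketch against the (older) signature documented above; the file name and column names are illustrative:

import pandas as pd

df = pd.read_excel('data.xlsx',
                   sheetname='Sheet1',           # or an int, a list, or None for all sheets
                   header=0,                     # first remaining row becomes the column labels
                   skiprows=[0],                 # skip the first physical row
                   index_col=0,                  # use the first parsed column as the index
                   na_values=['NULL', '-'],      # extra strings to read as NaN
                   converters={'user_id': str})  # keep user_id as text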

Reprinted from www.cnblogs.com/Ting-light/p/9296107.html