Series 1001, pandas 0002: How to Import and Export Data from CSV & Text Files

1. Introduction to pd.read_csv()

Data for a data-mining task does not only come from databases; it may also come from structured sources such as prepared spreadsheets, or from unstructured sources such as web pages. This section covers how to import data from CSV and other text files. Depending on the task, you can customize what gets imported into Python; when the dataset is very large, you may also need to import it in batches or convert data types to reduce memory usage.

# Import the required libraries
import pandas as pd
import numpy as np
from io import StringIO

2. Parameters of pd.read_csv()

2.1 Basic parameters

1. filepath_or_buffer : various
# File path: can be a local file path or a URL. For Windows paths, either escape backslashes as \\ or prefix the string with r (a raw string).

2. sep : str, defaults to ',' for read_csv(), '\t' for read_table()
# Delimiter: string; comma by default for CSV files. Pass '\t' for tab-separated files. A separator longer than one character is interpreted as a regular expression.

3. delimiter : str, default None
# Alternative name for sep; behaves identically.

4. delim_whitespace : boolean, default False
# Use whitespace as the delimiter, equivalent to sep='\s+'
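# A minimal sketch with inline, made-up data: delim_whitespace=True parses
# whitespace-separated text exactly like sep='\s+'
data = "a b  c\n1 2  3\n4 5  6"
pd.read_csv(StringIO(data), delim_whitespace=True)   # same result as sep='\s+'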
# Example
dt1 = pd.read_csv(r"D:\6_DataAnalysis\8_Datacleaning\##3Python_elements\wine_data.csv",sep =",")
dt1.head(2)
1 14.23 1.71 2.43 15.6 127 2.8 3.06 .28 2.29 5.64 1.04 3.92 1065
0 1 13.20 1.78 2.14 11.2 100 2.65 2.76 0.26 1.28 4.38 1.05 3.40 1050
1 1 13.16 2.36 2.67 18.6 101 2.80 3.24 0.30 2.81 5.68 1.03 3.17 1185

2.2 Locating and naming columns and the index

5. header : int or list of ints, default 'infer'
# With the default, or header=0, the first row is used as the column names. If explicit column names are passed via names, the behavior is as if header=None. Note: pass header=0 explicitly together with names to replace the existing header row with the new names.
# When header is a list of ints (for a MultiIndex on the columns), intervening rows that are not specified are skipped.
# If skip_blank_lines=True, this parameter ignores blank and commented lines, so header=0 denotes the first row of data rather than the first line of the file.

6. names : array-like, default None
# List of column names to use. If the file contains no header row, also pass header=None. Duplicates in this list are not allowed.

7. usecols : list-like or callable, default None
# Return a subset of the columns. If list-like, all elements must be either positional (integer indices into the document's columns) or strings corresponding to column names (provided either by the names parameter or inferred from the document's header row). For example, a valid list-like usecols would be [0, 1, 2] or ['foo', 'bar', 'baz'].
# Element order is ignored, so usecols=[0, 1] is equivalent to [1, 0].
# If callable, the function is evaluated against each column name, and the columns for which it returns True are kept.

8. squeeze : boolean, default False
# If the parsed data contains only one column, return a Series instead of a DataFrame.
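# A minimal sketch with inline data (note: squeeze was removed in pandas 2.0;
# on newer versions call .squeeze("columns") on the result instead)
data = "col_1\n1\n2\n3"
s = pd.read_csv(StringIO(data), squeeze=True)   # a single parsed column comes back as a Series
type(s)   # pandas.core.series.Series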

9. prefix : str, default None
# Prefix to add to the numeric column names when there is no header, e.g. 'X' for X0, X1, ...

10. mangle_dupe_cols : boolean, default True
# Duplicate column names are parsed as 'X', 'X.1', ..., 'X.N' rather than 'X', ..., 'X'. With False, duplicated names would let later columns overwrite earlier ones.
# Example
dt2 = pd.read_csv(r"D:\6_DataAnalysis\8_Datacleaning\##3Python_elements\wine_data.csv",
                  sep =",",
                  header=None)  # do not treat the first row as column names
                  # Passing header=0 treats the first row as column names; it is not equivalent to header=None
dt2.head(2)
0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 1 14.23 1.71 2.43 15.6 127 2.80 3.06 0.28 2.29 5.64 1.04 3.92 1065
1 1 13.20 1.78 2.14 11.2 100 2.65 2.76 0.26 1.28 4.38 1.05 3.40 1050
# If the column names are on some row other than the first, point header at that row
data = "skip this skip it\na,b,c\n1,2,3\n4,5,6\n7,8,9"
pd.read_csv(StringIO(data), header=1)
a b c
0 1 2 3
1 4 5 6
2 7 8 9
# Example
dt3 = pd.read_csv(r"D:\6_DataAnalysis\8_Datacleaning\##3Python_elements\wine_data.csv",
                  sep =",",
                  header=None,
                 prefix= "X_")    #为数字列名添加前缀
dt3.head(2)
X_0 X_1 X_2 X_3 X_4 X_5 X_6 X_7 X_8 X_9 X_10 X_11 X_12 X_13
0 1 14.23 1.71 2.43 15.6 127 2.80 3.06 0.28 2.29 5.64 1.04 3.92 1065
1 1 13.20 1.78 2.14 11.2 100 2.65 2.76 0.26 1.28 4.38 1.05 3.40 1050
dt3.columns
Index(['X_0', 'X_1', 'X_2', 'X_3', 'X_4', 'X_5', 'X_6', 'X_7', 'X_8', 'X_9',
       'X_10', 'X_11', 'X_12', 'X_13'],
      dtype='object')
# Example
dt4 = pd.read_csv(r"D:\6_DataAnalysis\8_Datacleaning\##3Python_elements\wine_data.csv",
                  sep =",",
                  header=None,
                  names=dt3.columns)  # with header=None, pass an explicit list of column names (here reusing dt3's prefixed names)
dt4.head(2)
X_0 X_1 X_2 X_3 X_4 X_5 X_6 X_7 X_8 X_9 X_10 X_11 X_12 X_13
0 1 14.23 1.71 2.43 15.6 127 2.80 3.06 0.28 2.29 5.64 1.04 3.92 1065
1 1 13.20 1.78 2.14 11.2 100 2.65 2.76 0.26 1.28 4.38 1.05 3.40 1050
# usecols filters out columns you do not want to import
# Example
dt5 = pd.read_csv(r"D:\6_DataAnalysis\8_Datacleaning\##3Python_elements\wine_data.csv",
                  sep =",",
                  header=None,
                  usecols=[0,1,2,5,8])  # import only these columns; unspecified columns are skipped, and element order is ignored
dt5.head(4)
0 1 2 5 8
0 1 14.23 1.71 127 0.28
1 1 13.20 1.78 100 0.26
2 1 13.16 2.36 101 0.30
3 1 14.37 1.95 113 0.24
# usecols also accepts a callable
data = "a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz"
pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["A", "C"])
# Parsing duplicate column names
data = 'a,b,a\n0,1,2\n3,4,5'
pd.read_csv(StringIO(data), mangle_dupe_cols=True)
# Setting this parameter to False raises an error (duplicate columns are not supported, to prevent misuse); with True, later duplicates automatically get suffixes such as '.1'
a b a.1
0 0 1 2
1 3 4 5
# With skip_blank_lines=False, read_csv does not ignore blank lines
data = "a,b,c\n\n1,2,3\n\n\n4,5,6"
pd.read_csv(StringIO(data), skip_blank_lines=False)
a b c
0 NaN NaN NaN
1 1.0 2.0 3.0
2 NaN NaN NaN
3 NaN NaN NaN
4 4.0 5.0 6.0

2.3 Selecting which data to import

11. skiprows : list-like or integer, default None
# Line numbers to skip (0-indexed from the top) or the number of lines to skip; a callable evaluated against each row index is also accepted, as shown in the sketch below.
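# A small sketch with inline data showing the list and integer forms
data = "a,b,c\n1,2,3\n4,5,6\n7,8,9"
pd.read_csv(StringIO(data), skiprows=[1])             # skip line 1 ("1,2,3"); line 0 stays the header
pd.read_csv(StringIO(data), skiprows=2, header=None)  # skip the first two lines entirely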

12. nrows : int, default None
# Number of rows to read; useful for reading a slice of a large dataset.
# Example
dt6 = pd.read_csv(r"D:\6_DataAnalysis\8_Datacleaning\##3Python_elements\wine_data.csv",
                  sep =",",
                  header=None,
                  usecols=[0,1,2,5,8],
                  skiprows=lambda x: x % 2 != 0)  # skip every odd-numbered row
dt6.head(2)
0 1 2 5 8
0 1 14.23 1.71 127 0.28
1 1 13.16 2.36 101 0.30
# Example
dt7 = pd.read_csv(r"D:\6_DataAnalysis\8_Datacleaning\##3Python_elements\wine_data.csv",
                  sep =",",
                  header=None,
                  usecols=[0,1,5],
                  nrows=4)  # read only the first 4 rows
len(dt7)
4

2.4 General parsing settings

13. engine : {'c', 'python'}
# Which parser engine to use. The C engine is faster, while the Python engine is currently more feature-complete.

14. low_memory : boolean, default True
# Internally process the file in chunks, which lowers memory use while parsing but can lead to mixed-type inference. To ensure there are no mixed types, set this to False or specify the types with the dtype parameter (see the sketch below).
# Note that the whole file is read into a single DataFrame regardless; use the chunksize or iterator parameters to get the data back in chunks. (Only valid with the C parser.)
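# A hedged sketch of the mixed-type issue with inline data: column a holds both
# integers and a string, so chunked inference could yield inconsistent chunk dtypes;
# pinning the dtype removes the ambiguity
data = "a\n1\n2\nx\n4"
df = pd.read_csv(StringIO(data), dtype={"a": str}, low_memory=False)
df["a"].dtype   # object: every value is kept as a string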

15. dtype : type name or dict of column -> type, default None
# Data type(s) to apply to the columns, e.g. {'a': np.float64, 'b': np.int32}

16. converters : dict, default None
# Dict mapping columns to functions that convert their values; similar in purpose to dtype, but each value is passed through the function, so it also works where a plain cast would fail.
# Example
dt8 = pd.read_csv(r"D:\6_DataAnalysis\8_Datacleaning\##3Python_elements\wine_data.csv",
                  sep =",",
                  header=None,
                  usecols=[0,1,5],
                  dtype = {0: np.int32, 1: np.float64, 5: np.int32})  # specify the dtype per column
dt8.dtypes
0      int32
1    float64
5      int32
dtype: object
# Example
# Use converters to convert values in a column
data = "col_1\n1\n2\n'A'\n4.22"
dt9 = pd.read_csv(StringIO(data), converters={"col_1": str})
dt9
# With sep=None, the Python engine sniffs the delimiter automatically
print(open("temp2.sv").read())
:0:1:2:3
0:0.4691122999071863:-0.2828633443286633:-1.5090585031735124:-1.1356323710171934
1:1.2121120250208506:-0.17321464905330858:0.11920871129693428:-1.0442359662799567
2:-0.8618489633477999:-2.1045692188948086:-0.4949292740687813:1.071803807037338
3:0.7215551622443669:-0.7067711336300845:-1.0395749851146963:0.27185988554282986
4:-0.42497232978883753:0.567020349793672:0.27623201927771873:-1.0874006912859915
5:-0.6736897080883706:0.1136484096888855:-1.4784265524372235:0.5249876671147047
6:0.4047052186802365:0.5770459859204836:-1.7150020161146375:-1.0392684835147725
7:-0.3706468582364464:-1.1578922506419993:-1.344311812731667:0.8448851414248841
8:1.0757697837155533:-0.10904997528022223:1.6435630703622064:-1.4693879595399115
9:0.35702056413309086:-0.6746001037299882:-1.776903716971867:-0.9689138124473498
pd.read_csv("temp2.sv", sep=None, engine="python")
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
5 5 -0.673690 0.113648 -1.478427 0.524988
6 6 0.404705 0.577046 -1.715002 -1.039268
7 7 -0.370647 -1.157892 -1.344312 0.844885
8 8 1.075770 -0.109050 1.643563 -1.469388
9 9 0.357021 -0.674600 -1.776904 -0.968914

2.5 Parsing missing values

17. na_values : scalar, str, list-like, or dict, default None
# By default, the following values are recognized as NaN: ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A', 'n/a', 'NA', '<NA>', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan', '-nan', ''].

18. keep_default_na : boolean, default True
# If keep_default_na is True, and na_values are specified, na_values is appended to the default NaN values used for parsing.
# If keep_default_na is True, and na_values are not specified, only the default NaN values are used for parsing.
# If keep_default_na is False, and na_values are specified, only the NaN values specified in na_values are used for parsing.
# If keep_default_na is False, and na_values are not specified, no strings will be parsed as NaN.

19. skip_blank_lines : boolean, default True
# If True, skip blank lines rather than interpreting them as NaN values.
# Example
pd.read_csv(r"D:\6_DataAnalysis\8_Datacleaning\##3Python_elements\wine_data.csv", 
            keep_default_na=False, 
            na_values=[""])
pd.read_csv(r"D:\6_DataAnalysis\8_Datacleaning\##3Python_elements\wine_data.csv", 
            keep_default_na=False, 
            na_values=["NA", "0"])

2.6 Parsing dates

20. parse_dates : boolean or list of ints or names or list of lists or dict, default False
#If True -> try parsing the index.
#If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.
#If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.
#If {'foo': [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’. A fast-path exists for iso8601-formatted dates.
# When the date and the time are split across two columns of the CSV file, pass parse_dates = [['Date', 'Time']] to pd.read_csv,
# i.e. the strings in the 'Date' and 'Time' columns are merged first and then parsed. The merged column is named by joining
# the original names with an underscore '_', 'Date_Time' in this example. The parsed date column is placed as the first column
# of the DataFrame, so be careful when index_col refers to a column by position: with index_col = 0, the newly created
# Date_Time column, not IncidntNum, becomes the index. The safe approach is to refer to the column by name, e.g. index_col = 'IncidntNum'.
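# A minimal sketch of that Date/Time scenario with made-up data (the column names
# IncidntNum, Date and Time are placeholders)
data = "IncidntNum,Date,Time\n001,2021/05/01,12:00:00\n002,2021/05/02,13:30:00"
df = pd.read_csv(StringIO(data),
                 parse_dates=[["Date", "Time"]],   # merge the two columns, then parse as one datetime
                 index_col="IncidntNum")           # index by name, not by position
df.columns
Index(['Date_Time'], dtype='object')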

21. infer_datetime_format : boolean, default False
# If True and parse_dates specifies columns, try to infer the datetime format, which can speed up parsing.

22. keep_date_col : boolean, default False
# Whether to keep the original columns after parsing the date columns.

23. date_parser : function, default None
# Function for converting sequences of strings or date arrays; defaults to the third-party dateutil.parser.parser, which can parse almost any human-readable date format. A sketch with a custom parser follows.
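# A hedged sketch with inline data: passing an explicit parser (here a thin wrapper
# around pd.to_datetime with a fixed format) avoids the guessing that dateutil does
data = "date,value\n27/01/1999,1\n28/01/1999,2"
pd.read_csv(StringIO(data),
            parse_dates=["date"],
            date_parser=lambda s: pd.to_datetime(s, format="%d/%m/%Y"))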
# Example (assumes the KORD sample rows shown below have been copied to the clipboard)
dt10 = pd.read_clipboard(header = None)
dt10
0 1 2 3
0 KORD,19990127, 19:00:00, 18:56:00, 0.81
1 KORD,19990127, 20:00:00, 19:56:00, 0.01
2 KORD,19990127, 21:00:00, 20:56:00, -0.59
3 KORD,19990127, 21:00:00, 21:18:00, -0.99
4 KORD,19990127, 22:00:00, 21:56:00, -0.59
5 KORD,19990127, 23:00:00, 22:56:00, -0.59
dt10.to_csv("tmp.csv")
# Example
dt11 = pd.read_csv("tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]])
dt11
1_2 1_3 0 4
0 0 1 0 2 NaN 3.00
1 KORD,19990127, 19:00:00, KORD,19990127, 18:56:00, 0.0 0.81
2 KORD,19990127, 20:00:00, KORD,19990127, 19:56:00, 1.0 0.01
3 KORD,19990127, 21:00:00, KORD,19990127, 20:56:00, 2.0 -0.59
4 KORD,19990127, 21:00:00, KORD,19990127, 21:18:00, 3.0 -0.99
5 KORD,19990127, 22:00:00, KORD,19990127, 21:56:00, 4.0 -0.59
6 KORD,19990127, 23:00:00, KORD,19990127, 22:56:00, 5.0 -0.59
dt12 = pd.read_csv("tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]], keep_date_col=True)
dt12
1_2 1_3 0 1 2 3 4
0 0 1 0 2 NaN 0 1 2 3.00
1 KORD,19990127, 19:00:00, KORD,19990127, 18:56:00, 0.0 KORD,19990127, 19:00:00, 18:56:00, 0.81
2 KORD,19990127, 20:00:00, KORD,19990127, 19:56:00, 1.0 KORD,19990127, 20:00:00, 19:56:00, 0.01
3 KORD,19990127, 21:00:00, KORD,19990127, 20:56:00, 2.0 KORD,19990127, 21:00:00, 20:56:00, -0.59
4 KORD,19990127, 21:00:00, KORD,19990127, 21:18:00, 3.0 KORD,19990127, 21:00:00, 21:18:00, -0.99
5 KORD,19990127, 22:00:00, KORD,19990127, 21:56:00, 4.0 KORD,19990127, 22:00:00, 21:56:00, -0.59
6 KORD,19990127, 23:00:00, KORD,19990127, 22:56:00, 5.0 KORD,19990127, 23:00:00, 22:56:00, -0.59
date_spec = {"nominal": [1, 2], "actual": [1, 3]}
dt13 = pd.read_csv("tmp.csv", header=None, parse_dates=date_spec)
dt13
nominal actual 0 4
0 0 1 0 2 NaN 3.00
1 KORD,19990127, 19:00:00, KORD,19990127, 18:56:00, 0.0 0.81
2 KORD,19990127, 20:00:00, KORD,19990127, 19:56:00, 1.0 0.01
3 KORD,19990127, 21:00:00, KORD,19990127, 20:56:00, 2.0 -0.59
4 KORD,19990127, 21:00:00, KORD,19990127, 21:18:00, 3.0 -0.99
5 KORD,19990127, 22:00:00, KORD,19990127, 21:56:00, 4.0 -0.59
6 KORD,19990127, 23:00:00, KORD,19990127, 22:56:00, 5.0 -0.59
date_spec = {"nominal": [1, 2], "actual": [1, 3]}
dt14 = pd.read_csv("tmp.csv", header=None, parse_dates=date_spec,index_col=0)
dt14
actual 0 4
nominal
0 1 0 2 NaN 3.00
KORD,19990127, 19:00:00, KORD,19990127, 18:56:00, 0.0 0.81
KORD,19990127, 20:00:00, KORD,19990127, 19:56:00, 1.0 0.01
KORD,19990127, 21:00:00, KORD,19990127, 20:56:00, 2.0 -0.59
KORD,19990127, 21:00:00, KORD,19990127, 21:18:00, 3.0 -0.99
KORD,19990127, 22:00:00, KORD,19990127, 21:56:00, 4.0 -0.59
KORD,19990127, 23:00:00, KORD,19990127, 22:56:00, 5.0 -0.59
# Using a date-parsing function
import dateutil
date_spec = {"nominal": [1, 2], "actual": [1, 3]}
dt15 = pd.read_csv("tmp.csv", header=None, parse_dates=date_spec,date_parser=pd.to_datetime)
dt15

2.7 Iterating and importing data in chunks

24. iterator : boolean, default False
# If True, return a TextFileReader object for iterating over the file or fetching chunks, as sketched below.
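# A small sketch with inline data: the returned TextFileReader exposes get_chunk(n)
# to pull n rows at a time
data = "a,b\n1,2\n3,4\n5,6\n7,8"
reader = pd.read_csv(StringIO(data), iterator=True)
reader.get_chunk(2)   # rows 0-1
reader.get_chunk(2)   # rows 2-3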

25. chunksize : int, default None
# Number of rows per chunk.
with pd.read_csv(r"D:\6_DataAnalysis\8_Datacleaning\##3Python_elements\wine_data.csv",
                 sep=",", usecols=[0,1,2,3,4], nrows=20, chunksize=4) as reader:
    for chunk in reader:
        print(chunk)
        print("*"*30)
   1  14.23  1.71  2.43  15.6
0  1  13.20  1.78  2.14  11.2
1  1  13.16  2.36  2.67  18.6
2  1  14.37  1.95  2.50  16.8
3  1  13.24  2.59  2.87  21.0
******************************
   1  14.23  1.71  2.43  15.6
4  1  14.20  1.76  2.45  15.2
5  1  14.39  1.87  2.45  14.6
6  1  14.06  2.15  2.61  17.6
7  1  14.83  1.64  2.17  14.0
******************************
    1  14.23  1.71  2.43  15.6
8   1  13.86  1.35  2.27  16.0
9   1  14.10  2.16  2.30  18.0
10  1  14.12  1.48  2.32  16.8
11  1  13.75  1.73  2.41  16.0
******************************
    1  14.23  1.71  2.43  15.6
12  1  14.75  1.73  2.39  11.4
13  1  14.38  1.87  2.38  12.0
14  1  13.63  1.81  2.70  17.2
15  1  14.30  1.92  2.72  20.0
******************************
    1  14.23  1.71  2.43  15.6
16  1  13.83  1.57  2.62  20.0
17  1  14.19  1.59  2.48  16.5
18  1  13.64  3.10  2.56  15.2
19  1  14.06  1.63  2.28  16.0
******************************

2.8 Error handling

26. error_bad_lines : boolean, default True
# By default, lines with too many fields (e.g. a CSV row with too many commas) raise an exception, and no DataFrame is returned.
# If False, these "bad lines" are dropped from the returned DataFrame.

27. warn_bad_lines : boolean, default True
# If True (with error_bad_lines=False), emit a warning for each bad line, but still return the DataFrame.
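# A minimal sketch with inline data (both parameters were replaced by on_bad_lines in
# pandas 2.0): the second record has one field too many, so it is dropped with a warning
data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
pd.read_csv(StringIO(data), error_bad_lines=False, warn_bad_lines=True)
   a  b   c
0  1  2   3
1  8  9  10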

2.9 Handling data in non-default encodings

from io import BytesIO
data = b"word,length\n" b"Tr\xc3\xa4umen,7\n" b"Gr\xc3\xbc\xc3\x9fe,5"
data = data.decode("utf8").encode("latin-1")   # re-encode the UTF-8 bytes as Latin-1
dt16 = pd.read_csv(BytesIO(data), encoding="latin-1")   # tell the parser the file's actual encoding
dt16
word length
0 Träumen 7
1 Grüße 5

3. Exporting data from Python with DataFrame.to_csv()

# The parameters are as follows:
# path_or_buf : string path to the file to write, or a file object (a file object must be opened with newline='')
# sep : field delimiter for the output file (default ',')
# na_rep : string representation of a missing value (default '')
# float_format : format string for floating point numbers
# columns : columns to write (default None)
# header : whether to write out the column names (default True)
# index : whether to write row (index) names (default True)
# index_label : column label(s) for the index column(s) if desired; if None (default) and header and index are True, the index names are used (a sequence should be given if the DataFrame uses a MultiIndex)
# mode : Python write mode (default 'w')
# encoding : string representing the encoding to use if the contents are non-ASCII, for Python versions prior to 3
# line_terminator : character sequence denoting line end (default os.linesep)
# quoting : quoting rules as in the csv module (default csv.QUOTE_MINIMAL); note that if you have set a float_format, floats are converted to strings, so csv.QUOTE_NONNUMERIC will treat them as non-numeric
# quotechar : character used to quote fields (default '"')
# doublequote : control quoting of quotechar inside fields (default True)
# escapechar : character used to escape sep and quotechar when appropriate (default None)
# chunksize : number of rows to write at a time
# date_format : format string for datetime objects
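# A short usage sketch (the output file name out.csv is hypothetical)
df = pd.DataFrame({"a": [1, 2], "b": [0.5, np.nan]})
df.to_csv("out.csv",
          na_rep="NULL",          # write missing values as the string NULL
          float_format="%.2f",    # two decimal places for floats
          index=False,            # do not write the row index
          encoding="utf-8")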

Reposted from blog.csdn.net/lqw844597536/article/details/116998492