Setting double quotes when exporting data from PostgreSQL / Renaming columns in a PySpark DataFrame

Copyright notice: this is an original post by the author; do not reproduce without permission. https://blog.csdn.net/sinat_26566137/article/details/82255620

Exporting data
There is no need to first export to HDFS, process the data with a script, and then save the desired result.
If the work can be done directly in the database (for example renaming columns to Chinese headers, or even joining two tables), do it with a SQL statement inside the \copy export command itself; the WITH clause then controls the output format. Note that psql's \copy, like other backslash meta-commands, ends at the newline, so in practice it must be entered on a single line; the first example below is wrapped only for readability.

\copy (select * from judgedoc limit 10) to '/home/sc/Downloads/tmp/judgedoc_tmp.csv' with (
FORMAT csv,
DELIMITER ',',
escape '\',
header true,
quote '"',
FORCE_QUOTE *,
encoding 'UTF-8');
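A quick way to confirm that FORCE_QUOTE * really wrapped every field in double quotes is to peek at the raw file. The reader below is a sketch added for illustration, not part of the original post:

import csv

# Peek at the raw export: with FORCE_QUOTE *, every field of a data row
# is wrapped in double quotes (the header row is quoted only as needed).
with open('/home/sc/Downloads/tmp/judgedoc_tmp.csv', encoding='utf-8') as f:
    header_line = f.readline()
    first_data_line = f.readline()

print(header_line.rstrip())
print(first_data_line.startswith('"'))  # expect True

# csv.reader strips the quotes and unescapes the fields for you.
with open('/home/sc/Downloads/tmp/judgedoc_tmp.csv', encoding='utf-8') as f:
    print(next(csv.reader(f)))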

\copy judgedoc to '/home/scdata/tmp/judgedoc_new.csv' with (FORMAT csv,DELIMITER ',',escape '\',header true,quote '"',force_quote *,encoding 'UTF-8');

\copy judgedoc_litigant to '/home/scdata/tmp/judgedoc_litigant_new.csv' with (FORMAT csv,DELIMITER ',',escape '\',header true,quote '"',force_quote *,encoding 'UTF-8');

\copy court_shixin_company_new to '/home/scdata/tmp/court_shixin_company_new.csv' with (FORMAT csv,DELIMITER ',',escape '\',header true,quote '"',force_quote *,encoding 'UTF-8');


\copy (select id as ID,company_name as 公司名称,dianxin_com_doc_id.doc_id as 文件ID,case_reason as 案件原由,case_type as 案件类型,url as 裁判文书网址,title as 标题,court as 法院,publish_date as 公布日期,trail_date as 立案日期,judge_process as 裁判过程,legal_source as 依据法律,update_at as 更新时间 from dianxin_com_doc_id left join judgedoc on dianxin_com_doc_id.doc_id = judgedoc.doc_id) to '/home/scdata/tmp/dianxin_com_judgedoc_detail.csv' with (FORMAT csv,DELIMITER ',',escape '\',header true,quote '"',force_quote *,encoding 'UTF-8');
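To check that the Chinese column headers survived the UTF-8 round trip, the file can be read back; pandas here is an assumption for illustration, not something the original post uses:

import pandas as pd

# Sketch: read the joined export back and inspect the Chinese headers.
df = pd.read_csv('/home/scdata/tmp/dianxin_com_judgedoc_detail.csv')
print(df.columns.tolist())  # expect 公司名称, 文件ID, 案件原由, ...
print(len(df))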

Reading data with PySpark and renaming columns

from pyspark.sql import SQLContext

# `dianxin_com` and `shixin_company` are paths to the CSV exports above
# (local or HDFS); they are defined outside this function.
def test_shixin(spark, sc):
    sqlContext = SQLContext(sparkContext=sc)

    # Load the company list and register it as a temp view.
    dfkk2 = sqlContext.read.csv(dianxin_com, header=True)
    dfkk2.createOrReplaceTempView('y2')

    # Load the dishonest-debtor (shixin) records.
    dfkk4 = sqlContext.read.csv(shixin_company, header=True, sep=',')
    dfkk4.createOrReplaceTempView('y4')

    # Left join: keep every company, attach its shixin records if any.
    dfhh2 = sqlContext.sql(
        """select y2.company_name as company_name_a, y4.*
           from y2 left join y4 on y4.company_name = y2.company_name""")
    dfhh2.createOrReplaceTempView('hh2')

    # Project only the columns wanted in the final file.
    dfhh3 = sqlContext.sql(
        """select company_name_a, hash_id, duty, business_entity, unperform_part,
                  court_name, case_code, performance, gist_unit, gist_id, area_name,
                  performed_part, party_type_name, create_at, publish_date, reg_date,
                  disrupt_type_name, update_at, judgedoc_doc_id, case_status
           from hh2""")  # 193157 rows

    # Rename every column to its Chinese header.
    dfhh3 = dfhh3.withColumnRenamed('company_name_a', '公司名称')\
                 .withColumnRenamed('hash_id', '映射ID')\
                 .withColumnRenamed('duty', '应该履行的职责')\
                 .withColumnRenamed('business_entity', '公司负责人')\
                 .withColumnRenamed('unperform_part', '未履行部分')\
                 .withColumnRenamed('court_name', '法院名称')\
                 .withColumnRenamed('case_code', '案号')\
                 .withColumnRenamed('performance', '履行情况')\
                 .withColumnRenamed('gist_unit', '执行依据法院')\
                 .withColumnRenamed('gist_id', '依据文书')\
                 .withColumnRenamed('area_name', '地区')\
                 .withColumnRenamed('performed_part', '履行部分')\
                 .withColumnRenamed('party_type_name', '自然人')\
                 .withColumnRenamed('create_at', '入库时间')\
                 .withColumnRenamed('publish_date', '公布日期')\
                 .withColumnRenamed('reg_date', '立案时间')\
                 .withColumnRenamed('disrupt_type_name', '失信具体表现')\
                 .withColumnRenamed('update_at', '更新时间')\
                 .withColumnRenamed('judgedoc_doc_id', '裁判文书ID')\
                 .withColumnRenamed('case_status', '案件公示状态')
    # dfhh3.show()  # uncomment for a quick sanity check

    # Write a single part file with every field quoted (quoteAll=True).
    dfhh3.repartition(1).write.csv(
        "hdfs://192.168.31.10:9000/hdfs/tmp/statistic/dianxin_com_shixin.csv",
        mode='overwrite', header=True, quoteAll=True, sep=',')
    spark.stop()
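A minimal driver for the function above might look like the following; the SparkSession settings and the two input paths are assumptions for illustration, not from the original post:

from pyspark.sql import SparkSession

# Hypothetical input locations for the two CSV exports.
dianxin_com = 'hdfs://192.168.31.10:9000/hdfs/tmp/statistic/dianxin_com.csv'
shixin_company = 'hdfs://192.168.31.10:9000/hdfs/tmp/statistic/court_shixin_company_new.csv'

spark = SparkSession.builder.appName('dianxin_com_shixin').getOrCreate()
test_shixin(spark, spark.sparkContext)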

The pg database here is read-only, yet you can still create tables inside it, populated from fields of existing tables; what you cannot do is load data into it from outside, i.e. run \copy ... from.

create table dianxin_com_doc_id as (select id,company_name,judgedoc_litigant.doc_id from tmp_189 left join judgedoc_litigant on company_name = judgedoc_litigant.litigant_name);
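To confirm the table was created, a quick row count is enough; the snippet below is a sketch, and the connection parameters (host, dbname, user) are assumptions:

import psycopg2

# Sketch: count the rows of the table created by CREATE TABLE ... AS above.
conn = psycopg2.connect(host='localhost', dbname='mydb', user='sc')
with conn.cursor() as cur:
    cur.execute('select count(*) from dianxin_com_doc_id')
    print(cur.fetchone()[0])
conn.close()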

\copy (select tmp_189.id as ID,company_name as 公司名称, p_name as 公司负责人名称,case_code as 案号,execute_court_name as 执行法院,
execute_money as 执行标的,case_state as 案件执行状态,party_card_num as "身份证号码/组织机构代码",case_create_time as 立案时间,
case_code_cleaned as 案件号清洗字段,case_status as 案件公示状态
  from tmp_189 left join court_zhixing_new on tmp_189.company_name = court_zhixing_new.company_name)
to '/home/scdata/tmp/dianxin_com_zhixing.csv' with (FORMAT csv,DELIMITER ',',escape '\',header true,quote '"',force_quote *,encoding 'UTF-8');

Note: if a SQL statement fails with an "ambiguous column reference" error, qualify the field as table.column. Also, column aliases do not take single quotes: write tmp_189.id as ID, not tmp_189.id as 'ID' (though an alias containing a special character such as "/" must be wrapped in double quotes, as above).

Wrong approach: exporting the full tables and joining them with Spark. Do the join directly in the database instead, and export the joined result.
