Original article:
https://blog.csdn.net/u010003835/article/details/105334641
Test table and test data
+----------------------------------------------------+
| createtab_stmt |
+----------------------------------------------------+
| CREATE TABLE `datacube_salary_org`( |
| `company_name` string COMMENT '????', |
| `dep_name` string COMMENT '????', |
| `user_id` bigint COMMENT '??id', |
| `user_name` string COMMENT '????', |
| `salary` decimal(10,2) COMMENT '??', |
| `create_time` date COMMENT '????', |
| `update_time` date COMMENT '????') |
| PARTITIONED BY ( |
| `pt` string COMMENT '????') |
| ROW FORMAT SERDE |
| 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' |
| WITH SERDEPROPERTIES ( |
| 'field.delim'=',', |
| 'serialization.format'=',') |
| STORED AS INPUTFORMAT |
| 'org.apache.hadoop.mapred.TextInputFormat' |
| OUTPUTFORMAT |
| 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' |
| LOCATION |
| 'hdfs://cdh-manager:8020/user/hive/warehouse/data_warehouse_test.db/datacube_salary_org' |
| TBLPROPERTIES ( |
| 'transient_lastDdlTime'='1586310488') |
+----------------------------------------------------+
+-----------------------------------+-------------------------------+------------------------------+--------------------------------+-----------------------------+----------------------------------+----------------------------------+-------------------------+
| datacube_salary_org.company_name | datacube_salary_org.dep_name | datacube_salary_org.user_id | datacube_salary_org.user_name | datacube_salary_org.salary | datacube_salary_org.create_time | datacube_salary_org.update_time | datacube_salary_org.pt |
+-----------------------------------+-------------------------------+------------------------------+--------------------------------+-----------------------------+----------------------------------+----------------------------------+-------------------------+
| s.zh | engineer | 1 | szh | 28000.00 | 2020-04-07 | 2020-04-07 | 20200405 |
| s.zh | engineer | 2 | zyq | 26000.00 | 2020-04-03 | 2020-04-03 | 20200405 |
| s.zh | tester | 3 | gkm | 20000.00 | 2020-04-07 | 2020-04-07 | 20200405 |
| x.qx | finance | 4 | pip | 13400.00 | 2020-04-07 | 2020-04-07 | 20200405 |
| x.qx | finance | 5 | kip | 24500.00 | 2020-04-07 | 2020-04-07 | 20200405 |
| x.qx | finance | 6 | zxxc | 13000.00 | 2020-04-07 | 2020-04-07 | 20200405 |
| x.qx | kiccp | 7 | xsz | 8600.00 | 2020-04-07 | 2020-04-07 | 20200405 |
| s.zh | engineer | 1 | szh | 28000.00 | 2020-04-07 | 2020-04-07 | 20200406 |
| s.zh | engineer | 2 | zyq | 26000.00 | 2020-04-03 | 2020-04-03 | 20200406 |
| s.zh | tester | 3 | gkm | 20000.00 | 2020-04-07 | 2020-04-07 | 20200406 |
| x.qx | finance | 4 | pip | 13400.00 | 2020-04-07 | 2020-04-07 | 20200406 |
| x.qx | finance | 5 | kip | 24500.00 | 2020-04-07 | 2020-04-07 | 20200406 |
| x.qx | finance | 6 | zxxc | 13000.00 | 2020-04-07 | 2020-04-07 | 20200406 |
| x.qx | kiccp | 7 | xsz | 8600.00 | 2020-04-07 | 2020-04-07 | 20200406 |
| s.zh | enginer | 1 | szh | 28000.00 | 2020-04-07 | 2020-04-07 | 20200407 |
| s.zh | enginer | 2 | zyq | 26000.00 | 2020-04-03 | 2020-04-03 | 20200407 |
| s.zh | tester | 3 | gkm | 20000.00 | 2020-04-07 | 2020-04-07 | 20200407 |
| x.qx | finance | 4 | pip | 13400.00 | 2020-04-07 | 2020-04-07 | 20200407 |
| x.qx | finance | 5 | kip | 24500.00 | 2020-04-07 | 2020-04-07 | 20200407 |
| x.qx | finance | 6 | zxxc | 13000.00 | 2020-04-07 | 2020-04-07 | 20200407 |
| x.qx | kiccp | 7 | xsz | 8600.00 | 2020-04-07 | 2020-04-07 | 20200407 |
+-----------------------------------+-------------------------------+------------------------------+--------------------------------+-----------------------------+----------------------------------+----------------------------------+-------------------------+
Scenario 1: Reducing redundant deduplication work
1) UNION vs. UNION ALL: the difference, and how to choose
2) GROUP BY as an alternative to DISTINCT
1) UNION vs. UNION ALL: the difference, and how to choose
Note that UNION ALL and UNION behave differently in SQL:
UNION ALL does not deduplicate the merged rows;
UNION deduplicates the merged rows.
Example:
EXPLAIN
SELECT
company_name
,dep_name
,user_id
,user_name
FROM datacube_salary_org
WHERE pt = '20200405'
UNION        -- or: UNION ALL
SELECT
company_name
,dep_name
,user_id
,user_name
FROM datacube_salary_org
WHERE pt = '20200406'
;
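Before reading the plans, the semantic difference can be checked with a minimal pair of literal queries (a sketch; the stated row counts assume standard HiveQL set semantics on a recent Hive version that allows top-level UNION):

```sql
-- UNION deduplicates the merged rows: this returns a single row.
SELECT 1 AS x
UNION
SELECT 1 AS x;

-- UNION ALL keeps duplicates: this returns two rows.
SELECT 1 AS x
UNION ALL
SELECT 1 AS x;
```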
EXPLAIN output for UNION ALL
INFO : Starting task [Stage-3:EXPLAIN] in serial mode
INFO : Completed executing command(queryId=hive_20200409232517_c76f15cf-20cf-415d-8086-123953fffc75); Time taken: 0.006 seconds
INFO : OK
+----------------------------------------------------+
| Explain |
+----------------------------------------------------+
| STAGE DEPENDENCIES: |
| Stage-1 is a root stage |
| Stage-0 depends on stages: Stage-1 |
| |
| STAGE PLANS: |
| Stage: Stage-1 |
| Map Reduce |
| Map Operator Tree: |
| TableScan |
| alias: datacube_salary_org |
| filterExpr: (pt = '20200405') (type: boolean) |
| Statistics: Num rows: 1 Data size: 342 Basic stats: COMPLETE Column stats: NONE |
| Select Operator |
| expressions: company_name (type: string), dep_name (type: string), user_id (type: bigint), user_name (type: string) |
| outputColumnNames: _col0, _col1, _col2, _col3 |
| Statistics: Num rows: 1 Data size: 342 Basic stats: COMPLETE Column stats: NONE |
| Union |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| File Output Operator |
| compressed: false |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| table: |
| input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
| output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
| serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
| TableScan |
| alias: datacube_salary_org |
| filterExpr: (pt = '20200406') (type: boolean) |
| Statistics: Num rows: 1 Data size: 412 Basic stats: COMPLETE Column stats: NONE |
| Select Operator |
| expressions: company_name (type: string), dep_name (type: string), user_id (type: bigint), user_name (type: string) |
| outputColumnNames: _col0, _col1, _col2, _col3 |
| Statistics: Num rows: 1 Data size: 412 Basic stats: COMPLETE Column stats: NONE |
| Union |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| File Output Operator |
| compressed: false |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| table: |
| input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
| output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
| serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
| |
| Stage: Stage-0 |
| Fetch Operator |
| limit: -1 |
| Processor Tree: |
| ListSink |
| |
+----------------------------------------------------+
EXPLAIN output for UNION
INFO : Starting task [Stage-3:EXPLAIN] in serial mode
INFO : Completed executing command(queryId=hive_20200409232436_8c1754b6-36ef-4846-a6db-719211b6b6a8); Time taken: 0.022 seconds
INFO : OK
+----------------------------------------------------+
| Explain |
+----------------------------------------------------+
| STAGE DEPENDENCIES: |
| Stage-1 is a root stage |
| Stage-0 depends on stages: Stage-1 |
| |
| STAGE PLANS: |
| Stage: Stage-1 |
| Map Reduce |
| Map Operator Tree: |
| TableScan |
| alias: datacube_salary_org |
| filterExpr: (pt = '20200405') (type: boolean) |
| Statistics: Num rows: 1 Data size: 342 Basic stats: COMPLETE Column stats: NONE |
| Select Operator |
| expressions: company_name (type: string), dep_name (type: string), user_id (type: bigint), user_name (type: string) |
| outputColumnNames: _col0, _col1, _col2, _col3 |
| Statistics: Num rows: 1 Data size: 342 Basic stats: COMPLETE Column stats: NONE |
| Union |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| Group By Operator |
| keys: _col0 (type: string), _col1 (type: string), _col2 (type: bigint), _col3 (type: string) |
| mode: hash |
| outputColumnNames: _col0, _col1, _col2, _col3 |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| Reduce Output Operator |
| key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: bigint), _col3 (type: string) |
| sort order: ++++ |
| Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: bigint), _col3 (type: string) |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| TableScan |
| alias: datacube_salary_org |
| filterExpr: (pt = '20200406') (type: boolean) |
| Statistics: Num rows: 1 Data size: 412 Basic stats: COMPLETE Column stats: NONE |
| Select Operator |
| expressions: company_name (type: string), dep_name (type: string), user_id (type: bigint), user_name (type: string) |
| outputColumnNames: _col0, _col1, _col2, _col3 |
| Statistics: Num rows: 1 Data size: 412 Basic stats: COMPLETE Column stats: NONE |
| Union |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| Group By Operator |
| keys: _col0 (type: string), _col1 (type: string), _col2 (type: bigint), _col3 (type: string) |
| mode: hash |
| outputColumnNames: _col0, _col1, _col2, _col3 |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| Reduce Output Operator |
| key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: bigint), _col3 (type: string) |
| sort order: ++++ |
| Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: bigint), _col3 (type: string) |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| Reduce Operator Tree: |
| Group By Operator |
| keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 (type: bigint), KEY._col3 (type: string) |
| mode: mergepartial |
| outputColumnNames: _col0, _col1, _col2, _col3 |
| Statistics: Num rows: 1 Data size: 377 Basic stats: COMPLETE Column stats: NONE |
| File Output Operator |
| compressed: false |
| Statistics: Num rows: 1 Data size: 377 Basic stats: COMPLETE Column stats: NONE |
| table: |
| input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
| output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
| serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
| |
| Stage: Stage-0 |
| Fetch Operator |
| limit: -1 |
| Processor Tree: |
| ListSink |
| |
+----------------------------------------------------+
Comparing the two EXPLAIN outputs, it is easy to see that UNION adds an extra reduce phase. It follows that, when deduplication is not actually required, you should use UNION ALL instead of UNION.
Moreover, it is sometimes claimed that deduplicating with UNION ALL followed by GROUP BY is more efficient than using UNION.
SELECT
company_name
,dep_name
,user_id
,user_name
FROM datacube_salary_org
WHERE pt = '20200405'
UNION
SELECT
company_name
,dep_name
,user_id
,user_name
FROM datacube_salary_org
WHERE pt = '20200406'
;
versus
SELECT
company_name
,dep_name
,user_id
,user_name
FROM
(
SELECT
company_name
,dep_name
,user_id
,user_name
FROM datacube_salary_org
WHERE pt = '20200405'
UNION ALL
SELECT
company_name
,dep_name
,user_id
,user_name
FROM datacube_salary_org
WHERE pt = '20200406'
) tmp
GROUP BY
company_name
,dep_name
,user_id
,user_name
;
I expect the efficiency to be the same; let's look at the EXPLAIN output of the "improved" method:
INFO : Starting task [Stage-3:EXPLAIN] in serial mode
INFO : Completed executing command(queryId=hive_20200410020255_57b936d7-ffde-41a6-af6e-3d0dc0d3a007); Time taken: 0.015 seconds
INFO : OK
+----------------------------------------------------+
| Explain |
+----------------------------------------------------+
| STAGE DEPENDENCIES: |
| Stage-1 is a root stage |
| Stage-0 depends on stages: Stage-1 |
| |
| STAGE PLANS: |
| Stage: Stage-1 |
| Map Reduce |
| Map Operator Tree: |
| TableScan |
| alias: datacube_salary_org |
| filterExpr: (pt = '20200405') (type: boolean) |
| Statistics: Num rows: 1 Data size: 342 Basic stats: COMPLETE Column stats: NONE |
| Select Operator |
| expressions: company_name (type: string), dep_name (type: string), user_id (type: bigint), user_name (type: string) |
| outputColumnNames: _col0, _col1, _col2, _col3 |
| Statistics: Num rows: 1 Data size: 342 Basic stats: COMPLETE Column stats: NONE |
| Union |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| Group By Operator |
| keys: _col0 (type: string), _col1 (type: string), _col2 (type: bigint), _col3 (type: string) |
| mode: hash |
| outputColumnNames: _col0, _col1, _col2, _col3 |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| Reduce Output Operator |
| key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: bigint), _col3 (type: string) |
| sort order: ++++ |
| Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: bigint), _col3 (type: string) |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| TableScan |
| alias: datacube_salary_org |
| filterExpr: (pt = '20200406') (type: boolean) |
| Statistics: Num rows: 1 Data size: 412 Basic stats: COMPLETE Column stats: NONE |
| Select Operator |
| expressions: company_name (type: string), dep_name (type: string), user_id (type: bigint), user_name (type: string) |
| outputColumnNames: _col0, _col1, _col2, _col3 |
| Statistics: Num rows: 1 Data size: 412 Basic stats: COMPLETE Column stats: NONE |
| Union |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| Group By Operator |
| keys: _col0 (type: string), _col1 (type: string), _col2 (type: bigint), _col3 (type: string) |
| mode: hash |
| outputColumnNames: _col0, _col1, _col2, _col3 |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| Reduce Output Operator |
| key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: bigint), _col3 (type: string) |
| sort order: ++++ |
| Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: bigint), _col3 (type: string) |
| Statistics: Num rows: 2 Data size: 754 Basic stats: COMPLETE Column stats: NONE |
| Reduce Operator Tree: |
| Group By Operator |
| keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 (type: bigint), KEY._col3 (type: string) |
| mode: mergepartial |
| outputColumnNames: _col0, _col1, _col2, _col3 |
| Statistics: Num rows: 1 Data size: 377 Basic stats: COMPLETE Column stats: NONE |
| File Output Operator |
| compressed: false |
| Statistics: Num rows: 1 Data size: 377 Basic stats: COMPLETE Column stats: NONE |
| table: |
| input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
| output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
| serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
| |
| Stage: Stage-0 |
| Fetch Operator |
| limit: -1 |
| Processor Tree: |
| ListSink |
| |
+----------------------------------------------------+
The two EXPLAIN outputs are identical, so the rewrite brings no optimization.
Runtime comparison (small data volume)
UNION ALL then GROUP BY
Cumulative CPU: 5.2 s
INFO : Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
INFO : 2020-04-10 02:06:37,784 Stage-1 map = 0%, reduce = 0%
INFO : 2020-04-10 02:06:44,970 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 1.67 sec
INFO : 2020-04-10 02:06:49,094 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.23 sec
INFO : 2020-04-10 02:06:55,291 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 5.2 sec
INFO : MapReduce Total cumulative CPU time: 5 seconds 200 msec
INFO : Ended Job = job_1586423165261_0005
INFO : MapReduce Jobs Launched:
INFO : Stage-Stage-1: Map: 2 Reduce: 1 Cumulative CPU: 5.2 sec HDFS Read: 21685 HDFS Write: 304 SUCCESS
INFO : Total MapReduce CPU Time Spent: 5 seconds 200 msec
INFO : Completed executing command(queryId=hive_20200410020629_c216e339-181a-4b52-8a59-ac527963e32b); Time taken: 28.112 seconds
INFO : OK
+---------------+-----------+----------+------------+
| company_name | dep_name | user_id | user_name |
+---------------+-----------+----------+------------+
| s.zh | engineer | 1 | szh |
| s.zh | engineer | 2 | zyq |
| s.zh | tester | 3 | gkm |
| x.qx | finance | 4 | pip |
| x.qx | finance | 5 | kip |
| x.qx | finance | 6 | zxxc |
| x.qx | kiccp | 7 | xsz |
+---------------+-----------+----------+------------+
7 rows selected (28.31 seconds)
UNION
INFO : Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
INFO : 2020-04-10 02:09:24,102 Stage-1 map = 0%, reduce = 0%
INFO : 2020-04-10 02:09:31,308 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 1.78 sec
INFO : 2020-04-10 02:09:35,427 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.39 sec
INFO : 2020-04-10 02:09:41,582 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 5.04 sec
INFO : MapReduce Total cumulative CPU time: 5 seconds 40 msec
INFO : Ended Job = job_1586423165261_0006
INFO : MapReduce Jobs Launched:
INFO : Stage-Stage-1: Map: 2 Reduce: 1 Cumulative CPU: 5.04 sec HDFS Read: 21813 HDFS Write: 304 SUCCESS
INFO : Total MapReduce CPU Time Spent: 5 seconds 40 msec
INFO : Completed executing command(queryId=hive_20200410020915_477574a0-4763-4717-8f9c-25d9f4b04706); Time taken: 27.033 seconds
INFO : OK
+-------------------+---------------+--------------+----------------+
| _u2.company_name | _u2.dep_name | _u2.user_id | _u2.user_name |
+-------------------+---------------+--------------+----------------+
| s.zh | engineer | 1 | szh |
| s.zh | engineer | 2 | zyq |
| s.zh | tester | 3 | gkm |
| x.qx | finance | 4 | pip |
| x.qx | finance | 5 | kip |
| x.qx | finance | 6 | zxxc |
| x.qx | kiccp | 7 | xsz |
+-------------------+---------------+--------------+----------------+
From the comparison above, there is no meaningful difference between the two.
2) GROUP BY as an alternative to DISTINCT
In real deduplication scenarios, DISTINCT is the natural first choice.
In practice, however, GROUP BY can be more efficient on large datasets. The experiment below compares the two.
First, the supposedly inefficient COUNT(DISTINCT ...) approach:
SQL
SELECT
COUNT(DISTINCT company_name, dep_name, user_id)
FROM datacube_salary_org
;
EXPLAIN output
INFO : Starting task [Stage-2:EXPLAIN] in serial mode
INFO : Completed executing command(queryId=hive_20200410023914_3ed9bbfc-9b01-4351-b559-a797b8ae2c85); Time taken: 0.007 seconds
INFO : OK
+----------------------------------------------------+
| Explain |
+----------------------------------------------------+
| STAGE DEPENDENCIES: |
| Stage-1 is a root stage |
| Stage-0 depends on stages: Stage-1 |
| |
| STAGE PLANS: |
| Stage: Stage-1 |
| Map Reduce |
| Map Operator Tree: |
| TableScan |
| alias: datacube_salary_org |
| Statistics: Num rows: 7 Data size: 340 Basic stats: COMPLETE Column stats: NONE |
| Select Operator |
| expressions: company_name (type: string), dep_name (type: string), user_id (type: bigint) |
| outputColumnNames: company_name, dep_name, user_id |
| Statistics: Num rows: 7 Data size: 340 Basic stats: COMPLETE Column stats: NONE |
| Group By Operator |
| aggregations: count(DISTINCT company_name, dep_name, user_id) |
| keys: company_name (type: string), dep_name (type: string), user_id (type: bigint) |
| mode: hash |
| outputColumnNames: _col0, _col1, _col2, _col3 |
| Statistics: Num rows: 7 Data size: 340 Basic stats: COMPLETE Column stats: NONE |
| Reduce Output Operator |
| key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: bigint) |
| sort order: +++ |
| Statistics: Num rows: 7 Data size: 340 Basic stats: COMPLETE Column stats: NONE |
| Reduce Operator Tree: |
| Group By Operator |
| aggregations: count(DISTINCT KEY._col0:0._col0, KEY._col0:0._col1, KEY._col0:0._col2) |
| mode: mergepartial |
| outputColumnNames: _col0 |
| Statistics: Num rows: 1 Data size: 16 Basic stats: COMPLETE Column stats: NONE |
| File Output Operator |
| compressed: false |
| Statistics: Num rows: 1 Data size: 16 Basic stats: COMPLETE Column stats: NONE |
| table: |
| input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
| output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
| serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
| |
| Stage: Stage-0 |
| Fetch Operator |
| limit: -1 |
| Processor Tree: |
| ListSink |
| |
+----------------------------------------------------+
Execution time (small dataset)
INFO : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO : 2020-04-10 03:06:39,390 Stage-1 map = 0%, reduce = 0%
INFO : 2020-04-10 03:06:46,735 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.94 sec
INFO : 2020-04-10 03:06:52,969 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.72 sec
INFO : MapReduce Total cumulative CPU time: 4 seconds 720 msec
INFO : Ended Job = job_1586423165261_0010
INFO : MapReduce Jobs Launched:
INFO : Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 4.72 sec HDFS Read: 12863 HDFS Write: 101 SUCCESS
INFO : Total MapReduce CPU Time Spent: 4 seconds 720 msec
INFO : Completed executing command(queryId=hive_20200410030629_7b6df91e-a78a-4bc1-b558-abbb8d506596); Time taken: 24.023 seconds
INFO : OK
+------+
| _c0 |
+------+
| 9 |
+------+
====================
Next, the supposedly efficient GROUP BY approach:
SQL
SELECT COUNT(1)
FROM (
SELECT
company_name
,dep_name
,user_id
FROM datacube_salary_org
GROUP BY
company_name
,dep_name
,user_id
) AS tmp
;
EXPLAIN output
INFO : Starting task [Stage-3:EXPLAIN] in serial mode
INFO : Completed executing command(queryId=hive_20200410024128_fc60e84d-be8d-4b4d-aad8-a53466fa1559); Time taken: 0.005 seconds
INFO : OK
+----------------------------------------------------+
| Explain |
+----------------------------------------------------+
| STAGE DEPENDENCIES: |
| Stage-1 is a root stage |
| Stage-2 depends on stages: Stage-1 |
| Stage-0 depends on stages: Stage-2 |
| |
| STAGE PLANS: |
| Stage: Stage-1 |
| Map Reduce |
| Map Operator Tree: |
| TableScan |
| alias: datacube_salary_org |
| Statistics: Num rows: 7 Data size: 340 Basic stats: COMPLETE Column stats: NONE |
| Select Operator |
| expressions: company_name (type: string), dep_name (type: string), user_id (type: bigint) |
| outputColumnNames: company_name, dep_name, user_id |
| Statistics: Num rows: 7 Data size: 340 Basic stats: COMPLETE Column stats: NONE |
| Group By Operator |
| keys: company_name (type: string), dep_name (type: string), user_id (type: bigint) |
| mode: hash |
| outputColumnNames: _col0, _col1, _col2 |
| Statistics: Num rows: 7 Data size: 340 Basic stats: COMPLETE Column stats: NONE |
| Reduce Output Operator |
| key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: bigint) |
| sort order: +++ |
| Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: bigint) |
| Statistics: Num rows: 7 Data size: 340 Basic stats: COMPLETE Column stats: NONE |
| Reduce Operator Tree: |
| Group By Operator |
| keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 (type: bigint) |
| mode: mergepartial |
| outputColumnNames: _col0, _col1, _col2 |
| Statistics: Num rows: 3 Data size: 145 Basic stats: COMPLETE Column stats: NONE |
| Select Operator |
| Statistics: Num rows: 3 Data size: 145 Basic stats: COMPLETE Column stats: NONE |
| Group By Operator |
| aggregations: count(1) |
| mode: hash |
| outputColumnNames: _col0 |
| Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE |
| File Output Operator |
| compressed: false |
| table: |
| input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
| output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
| serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe |
| |
| Stage: Stage-2 |
| Map Reduce |
| Map Operator Tree: |
| TableScan |
| Reduce Output Operator |
| sort order: |
| Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE |
| value expressions: _col0 (type: bigint) |
| Reduce Operator Tree: |
| Group By Operator |
| aggregations: count(VALUE._col0) |
| mode: mergepartial |
| outputColumnNames: _col0 |
| Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE |
| File Output Operator |
| compressed: false |
| Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE |
| table: |
| input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
| output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
| serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
| |
| Stage: Stage-0 |
| Fetch Operator |
| limit: -1 |
| Processor Tree: |
| ListSink |
| |
+----------------------------------------------------+
Execution time (small dataset)
INFO : Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
INFO : 2020-04-10 03:09:34,476 Stage-2 map = 0%, reduce = 0%
INFO : 2020-04-10 03:09:40,662 Stage-2 map = 100%, reduce = 0%
INFO : 2020-04-10 03:09:47,850 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 4.3 sec
INFO : MapReduce Total cumulative CPU time: 4 seconds 300 msec
INFO : Ended Job = job_1586423165261_0014
INFO : MapReduce Jobs Launched:
INFO : Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 4.11 sec HDFS Read: 11827 HDFS Write: 114 SUCCESS
INFO : Stage-Stage-2: Map: 1 Reduce: 1 Cumulative CPU: 4.3 sec HDFS Read: 5111 HDFS Write: 101 SUCCESS
INFO : Total MapReduce CPU Time Spent: 8 seconds 410 msec
INFO : Completed executing command(queryId=hive_20200410030859_f89c708b-e76a-44fc-9e99-a6f9a404200f); Time taken: 49.78 seconds
INFO : OK
+------+
| _c0 |
+------+
| 9 |
+------+
The principle behind the optimization
Let's look at why GROUP BY followed by COUNT outperforms a direct COUNT(DISTINCT ...) on large datasets.
With COUNT(DISTINCT ...), the relevant columns are combined into a single key and shipped to the reducer, i.e. count(DISTINCT KEY._col0:0._col0, KEY._col0:0._col1, KEY._col0:0._col2). The full global sort and deduplication must therefore be completed inside a single reducer.
With GROUP BY first and COUNT afterwards, the GROUP BY distributes the distinct keys across multiple reducers and completes the deduplication during the GROUP BY itself. Because the deduplication no longer funnels all rows into one reducer, it exploits the cluster's parallelism and is far more efficient. The subsequent COUNT stage then simply counts the keys already deduplicated by the previous GROUP BY stage.
Hence, at large data volumes, GROUP BY first and COUNT afterwards is more efficient than COUNT(DISTINCT).
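As a side note (an assumption worth verifying against your Hive version's documentation): newer Hive releases expose a configuration flag, hive.optimize.distinct.rewrite, that lets the optimizer perform this COUNT(DISTINCT) → GROUP BY rewrite automatically when applicable. A sketch of such a session:

```sql
-- Hypothetical session; flag availability and default depend on the Hive version.
SET hive.optimize.distinct.rewrite=true;

-- With the flag on, the planner may rewrite this global COUNT(DISTINCT ...)
-- into the two-stage GROUP BY + COUNT plan shown above, avoiding the
-- single-reducer bottleneck without changing the query text.
SELECT COUNT(DISTINCT company_name, dep_name, user_id)
FROM datacube_salary_org;
```

Inspecting the EXPLAIN output with the flag on and off is the easiest way to confirm whether your deployment applies the rewrite.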
Now compare the actual runs above.
In the EXPLAIN output: COUNT(DISTINCT) produces fewer stages than GROUP BY followed by COUNT, because the GROUP BY is one MR stage and the COUNT is another.
In the execution times: on this small dataset there is no advantage; the total time for COUNT(DISTINCT) (about 24 s) is actually lower than for GROUP BY then COUNT (about 49.8 s). Launching an extra stage costs time to request resources and start containers, so on small data the GROUP BY-then-COUNT version spends its extra time mostly on resource allocation and container startup rather than on computation.
The reversal here is entirely due to the dataset size: the cost of one reducer doing a global sort must be weighed against the overhead of splitting the work across multiple job stages!
So choose between the two approaches according to your actual data volume!