After the upstream data lake changed its data compression format, queries issued through the Thrift JDBC interface to Spark SQL began failing with Kryo buffer overflows:
19/07/29 06:12:55 WARN scheduler.TaskSetManager: Lost task 1.0 in stage 1.0 (TID 4, s015.test.com, executor 1): org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 10300408. To avoid this, increase spark.kryoserializer.buffer.max value.
at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:315)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:367)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
19/07/29 06:12:55 INFO scheduler.TaskSetManager: Starting task 1.1 in stage 1.0 (TID 5, s015.test.com, executor 1, partition 1, RACK_LOCAL, 8283 bytes)
19/07/29 06:12:57 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 1.0 (TID 3, s015.test.com, executor 1): org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 10453339. To avoid this, increase spark.kryoserializer.buffer.max value.
at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:315)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:367)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Preliminary analysis: after decompression, the serialized task results exceed the Kryo buffer limit. As the error message suggests, raise the buffer sizes when launching spark-submit:
--conf spark.kryoserializer.buffer.max=512m \
--conf spark.kryoserializer.buffer=256m \
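In full, the submit command might look like the following sketch; the master, main class, and jar name are placeholders for illustration, only the two Kryo buffer settings come from the analysis above:

```shell
spark-submit \
  --master yarn \
  --class com.example.CompactTableA \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  --conf spark.kryoserializer.buffer=256m \
  --conf spark.kryoserializer.buffer.max=512m \
  your-job.jar
```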
and modify the program to increase the degree of parallelism, so each task serializes a smaller payload:
val resultRdd = hiveContext.sql(sql)
resultRdd.repartition(100).registerTempTable("a")
hiveContext.sql("insert overwrite table table_a select * from a")
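The same buffer settings can also be applied programmatically when the context is created, rather than on the command line. A minimal sketch, assuming a Spark 1.x-style HiveContext as in the snippet above (the app name and the `sql` string are placeholders):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val conf = new SparkConf()
  .setAppName("compact-table-a") // hypothetical app name
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer", "256m")     // initial per-task buffer
  .set("spark.kryoserializer.buffer.max", "512m") // hard cap; overflow above this fails the task

val sc = new SparkContext(conf)
val hiveContext = new HiveContext(sc)

val sql = "select * from source_table" // placeholder query
val resultRdd = hiveContext.sql(sql)
// More partitions means smaller per-task results to serialize.
resultRdd.repartition(100).registerTempTable("a")
hiveContext.sql("insert overwrite table table_a select * from a")
```

Note that `spark.kryoserializer.buffer.max` must be set before the executors start, so it only takes effect via spark-submit or at SparkContext creation, not mid-job.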