Job aborted due to stage failure: Task 20 in stage 3.0 failed 1 times, most recent failure

Problem

I ran a Spark job today after adjusting its launch parameters.
It ran fine until about halfway through, then failed with the following error:

Job aborted due to stage failure: Task 20 in stage 3.0 failed 1 times, most recent failure: Lost task 20.0 in stage 3.0 (TID 240, localhost, executor driver): com.alibaba.druid.pool.GetConnectionTimeoutException: wait millis 60000, active 20, maxActive 20, creating 0
	at com.alibaba.druid.pool.DruidDataSource.getConnectionInternal(DruidDataSource.java:1619)
	at com.alibaba.druid.pool.DruidDataSource.getConnectionDirect(DruidDataSource.java:1337)
	at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:1317)
	at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:1307)
	at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:109)
	at com.aisino.util.DataSourceUtil.getConnection(DataSourceUtil.java:48)
	at com.aisino.service.DwdDataService$$anonfun$monthlyStatistics_1$1.apply(DwdDataService.scala:67)
	at 

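The exception itself comes from the Druid connection pool, not from Spark: all 20 connections allowed by maxActive were checked out, none were being created, and getConnection() gave up after waiting the full 60000 ms maxWait. As a rough illustration of where those numbers come from, here is a minimal Scala sketch of a Druid data source; the object name, JDBC URL, and credentials are placeholders, not the post's actual DataSourceUtil.

import com.alibaba.druid.pool.DruidDataSource

// Minimal sketch of a Druid pool with the limits seen in the error above.
object DataSourcePoolSketch {
  private val ds = new DruidDataSource()
  ds.setUrl("jdbc:mysql://db-host:3306/dwd")  // placeholder JDBC URL
  ds.setUsername("user")                      // placeholder credentials
  ds.setPassword("password")
  ds.setMaxActive(20)   // "maxActive 20": at most 20 connections handed out at once
  ds.setMaxWait(60000)  // "wait millis 60000": getConnection() blocks this long, then throws

  // Throws GetConnectionTimeoutException when the pool stays exhausted past maxWait.
  def getConnection() = ds.getConnection()
}

With many Spark tasks requesting connections from such a pool at the same time, any request arriving while all 20 are busy waits out maxWait and then fails, which matches the stack trace above.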
After investigating the problem, I reduced --executor-memory and the job ran normally again.
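For reference, the same knob can also be set through the Spark configuration key spark.executor.memory, which is what --executor-memory maps to. The sketch below is only illustrative; the post does not say what size the memory was reduced to, so the 2g value and the application name are assumptions.

import org.apache.spark.sql.SparkSession

// Sketch only: spark.executor.memory mirrors spark-submit's --executor-memory flag.
// "2g" is an assumed example value, not the size actually used in the post.
val spark = SparkSession.builder()
  .appName("DwdDataService")              // name borrowed from the stack trace for illustration
  .config("spark.executor.memory", "2g")
  .getOrCreate()

In practice executor memory is usually passed on the spark-submit command line, since it has to be fixed before the executors are launched.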


Origin: blog.csdn.net/weixin_41772761/article/details/113939098