Spark Performance Tuning: Resource Allocation

Allocate more resources: this is the king of performance tuning. The more resources a job is given, the more obvious the improvement in performance and speed.

1. Which resources can be allocated? The number of executors, the CPU cores per executor, the memory per executor, and the driver memory.

2. Where are these resources allocated? In a production environment, a Spark job is submitted with the spark-submit shell script, and the corresponding parameters are adjusted there, as in the example below.
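
A minimal spark-submit sketch, assuming a YARN cluster: the four resource flags are standard spark-submit parameters, while the application class, jar path, and the concrete values are hypothetical placeholders.

    # The four resources from point 1, expressed as spark-submit flags:
    #   --num-executors    number of executors
    #   --executor-cores   CPU cores per executor
    #   --executor-memory  memory per executor
    #   --driver-memory    memory for the driver
    spark-submit \
      --class com.example.MySparkJob \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 50 \
      --executor-cores 4 \
      --executor-memory 8g \
      --driver-memory 2g \
      /path/to/my-spark-job.jar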

3. How large should the values be? The basic principle is: whatever resources the cluster can offer, try to use them to the maximum (the number of executors can range from tens to hundreds; likewise executor memory and executor CPU cores). See the sizing sketch below.
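
A hedged sizing sketch, not a rule: the cluster figures (10 worker nodes with 16 cores and 64 GB of RAM each) are assumed purely for illustration, and the flag values simply follow from dividing those assumed resources while leaving headroom for the OS, cluster daemons, and executor memory overhead.

    # Assumed cluster: 10 worker nodes, each with 16 CPU cores and 64 GB RAM.
    # Reserve roughly 1 core and a few GB per node for the OS and daemons,
    # then split the remainder into, say, 3 executors per node:
    #   10 nodes * 3 executors per node  ->  --num-executors 30
    #   (16 - 1) cores / 3 executors     ->  --executor-cores 5
    #   ~56 GB usable / 3 executors      ->  --executor-memory 18g (rest left for overhead)
    spark-submit \
      --master yarn \
      --class com.example.MySparkJob \
      --num-executors 30 \
      --executor-cores 5 \
      --executor-memory 18g \
      --driver-memory 2g \
      /path/to/my-spark-job.jar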

Source: www.cnblogs.com/xiangyuguan/p/11333960.html