Spark resource scheduling source code summary

  1. Executors are started spread out across the Workers of the cluster, which favors data locality when tasks compute.

  2. By default (when the --executor-cores option is not set at submit time), each Worker starts one Executor for the current Application, and that Executor uses all of the Worker's cores and 1G of memory.

  3. If you want to start more than one Executor on a Worker, add the --executor-cores option when submitting the Application.

  4. When --total-executor-cores is not set, by default a Spark Application will use all of the cores in the cluster (see the sketch after this list).
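A minimal sketch of how these options fit together on the command line, assuming a hypothetical standalone master URL (spark://master:7077), main class (com.example.App), and application jar (app.jar):

```bash
# Default submission: each Worker starts one Executor for this Application,
# and that Executor takes all of the Worker's cores and 1G of memory, so
# the Application ends up claiming every core in the cluster.
spark-submit \
  --master spark://master:7077 \
  --class com.example.App \
  app.jar

# Constrained submission: --executor-cores caps the cores per Executor
# (so a Worker with enough free cores and memory can host several
# Executors for the same Application), and --total-executor-cores caps
# the Application's core usage across the whole cluster.
spark-submit \
  --master spark://master:7077 \
  --class com.example.App \
  --executor-cores 2 \
  --executor-memory 2g \
  --total-executor-cores 8 \
  app.jar
```

For example, with --executor-cores 2 a Worker with 8 free cores could host up to 4 Executors for this Application, while --total-executor-cores 8 stops the Application from taking more than 8 cores cluster-wide.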
