- Executors are started spread across the cluster, which favors data locality when tasks compute over their data
- By default (when --executor-cores is not set at submission), each Worker starts one Executor for the current Application; that Executor uses all of the Worker's cores and 1G of memory
- To start more than one Executor on a Worker, add the --executor-cores option when submitting the Application (see the spark-submit sketch after this list)
- If --total-executor-cores is not set, by default a Spark Application will use all of the cores in the cluster
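A minimal spark-submit sketch combining these options in standalone mode; the master URL, example class, jar path, and the specific core/memory numbers are placeholders for illustration, not values from these notes:

```bash
# --executor-cores: cores per Executor; a small value lets one Worker host several Executors.
# --total-executor-cores: caps the total cores the Application takes from the cluster.
# --executor-memory: per-Executor memory (1g matches the default described above).
spark-submit \
  --master spark://master:7077 \
  --class org.apache.spark.examples.SparkPi \
  --executor-cores 2 \
  --executor-memory 1g \
  --total-executor-cores 8 \
  /path/to/examples.jar 100
```

With these assumed values, a Worker with 4 free cores could host two 2-core Executors, and the Application would stop acquiring Executors once 8 cores are in use cluster-wide.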