Configure Spark Resources

Using Spark on Saagie means running Spark jobs on Kubernetes.

Here is an example of submitting a Spark application to a Kubernetes cluster. In this example, the provisioned cluster consists of 3 executors with 4 cores and 3 GB of memory each, that is, 12 CPU cores and 9 GB of memory in total.

spark-submit \
 --driver-memory 2G \ (1)
 --class <ClassName of the Spark application to launch> \ (2)
 --conf spark.executor.memory=3G \ (3)
 --conf spark.executor.cores=4 \ (4)
 --conf spark.kubernetes.executor.limit.cores=4 \ (5)
 --conf spark.executor.instances=3 \ (6)
 {file} (7)


1 --driver-memory 2G is the amount of memory allocated to the driver process of the Spark app.
2 <ClassName of the Spark application to launch> must be replaced with the class name of your Spark app.
3 spark.executor.memory is the amount of memory allocated to each executor in the Spark app. It is used as both the Kubernetes memory request and limit.
4 spark.executor.cores is the number of CPU cores allocated to each executor in the Spark app.
5 spark.kubernetes.executor.limit.cores is the limit of CPU cores that each executor in the Spark app can use on Kubernetes.
6 spark.executor.instances is the number of executor instances to be launched for the Spark app.
7 {file} must be replaced with the path to the JAR file containing your Spark app code.
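In the example above, spark.executor.cores and spark.kubernetes.executor.limit.cores are set to the same value, so each executor pod requests and is capped at 4 cores. Spark also lets you set the Kubernetes CPU request separately from the limit with spark.kubernetes.executor.request.cores, which can make pods easier to schedule while still allowing bursts up to the limit. The variation below is a sketch; the class name com.example.MyApp and the JAR path my-app.jar are placeholders for your own values.

```shell
# Sketch: request 2 cores per executor pod (easier to schedule),
# while still allowing each executor to burst up to 4 cores.
spark-submit \
 --driver-memory 2G \
 --class com.example.MyApp \
 --conf spark.executor.memory=3G \
 --conf spark.executor.cores=4 \
 --conf spark.kubernetes.executor.request.cores=2 \
 --conf spark.kubernetes.executor.limit.cores=4 \
 --conf spark.executor.instances=3 \
 my-app.jar
```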

A good practice is to provision between 2 and 4 cores per executor, depending on your cluster topology. If your cluster only has nodes with 4 CPUs, Kubernetes will have a hard time finding a completely unoccupied node on which to schedule an executor requesting 4 cores. In this case, you may want to limit each executor to 2 cores.
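On a cluster made of 4-CPU nodes, for instance, the example submission above could be rewritten with 2-core executors; doubling the instance count keeps the total at 12 cores (total memory then grows to 18 GB). This is a sketch, with a placeholder class name and JAR path:

```shell
# Sketch: 2 cores per executor so pods fit on partially occupied
# 4-CPU nodes; 6 executors keep the total at 12 cores.
spark-submit \
 --driver-memory 2G \
 --class com.example.MyApp \
 --conf spark.executor.memory=3G \
 --conf spark.executor.cores=2 \
 --conf spark.kubernetes.executor.limit.cores=2 \
 --conf spark.executor.instances=6 \
 my-app.jar
```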


Ideally, provision a minimum of 4 GB of memory per executor.

Driver Memory

Unless you are retrieving a large amount of data from the executors to the driver, you do not need to change the default configuration, as the driver’s role is simply to orchestrate the various jobs in your Spark application.
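When you do pull a large result back to the driver (for example with collect() or toPandas()), two settings usually need to grow together: --driver-memory and spark.driver.maxResultSize, which caps the total size of serialized results the driver will accept. A sketch, again with a placeholder class name and JAR path:

```shell
# Sketch: same executor topology as above, but with a driver sized
# for collecting a large result. spark.driver.maxResultSize
# (default 1g) caps the serialized results sent back to the driver.
spark-submit \
 --driver-memory 8G \
 --conf spark.driver.maxResultSize=4G \
 --class com.example.MyApp \
 --conf spark.executor.memory=3G \
 --conf spark.executor.cores=4 \
 --conf spark.kubernetes.executor.limit.cores=4 \
 --conf spark.executor.instances=3 \
 my-app.jar
```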

See also

For more information on performance tuning in Spark, how to detect performance issues, and best practices for avoiding slowdowns or bottlenecks in your workflow, read the following articles: