Local Properties — Creating Logical Job Groups

```scala
val rdd = sc.parallelize(0 to 9)

sc.setLocalProperty("spark.scheduler.pool", "myPool")

// these two jobs (one per action) will run in the myPool pool
rdd.count
rdd.collect

sc.setLocalProperty("spark.scheduler.pool", null)

// this job will run in the default pool
rdd.count
```
Local properties exist to create logical groups of jobs: per-thread properties that, regardless of the threads used to submit the jobs, make jobs launched from different threads belong to a single logical group.
You can set a local property that affects Spark jobs submitted from a thread, such as the Spark fair scheduler pool a job runs in. You can also use your own custom properties. The properties are propagated to worker tasks and can be accessed there via TaskContext.getLocalProperty.
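As a sketch of the worker-side access mentioned above: the driver sets a custom property on its thread, and tasks read it back via TaskContext.getLocalProperty. This assumes a running SparkContext `sc`; the key `job.owner` and value `etl-team` are made-up illustrations, not Spark properties.

```scala
import org.apache.spark.TaskContext

// set a custom local property on the driver thread
// ("job.owner" is a hypothetical key used only for this example)
sc.setLocalProperty("job.owner", "etl-team")

sc.parallelize(1 to 4, 2).foreach { _ =>
  // on the executor, read the property that was set on the driver thread
  val owner = TaskContext.get.getLocalProperty("job.owner")
  println(s"running a task for owner: $owner")
}
```

Every task launched for the `foreach` action sees the property value that was current on the submitting thread.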
|Propagating local properties to workers starts when SparkContext is requested to run or submit a Spark job, which in turn passes them along to DAGScheduler.|
|Local properties are used to group jobs into pools by the FAIR job scheduler (via the per-thread spark.scheduler.pool property) and in SQLExecution.withNewExecutionId.|
A common use case for local properties is to set one in a thread, say spark.scheduler.pool, after which all jobs submitted within that thread are grouped, for example into a pool by the FAIR job scheduler.
localProperties is a protected[spark] property of SparkContext that holds the per-thread properties through which you can create logical job groups.
|Read up on Java’s java.lang.InheritableThreadLocal.|
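A minimal, self-contained sketch of why java.lang.InheritableThreadLocal matters here: a child thread inherits a copy of the parent thread's value at thread-creation time, which is the mechanism that lets per-thread properties follow jobs submitted from freshly spawned threads. This example is plain JVM Scala and does not use Spark.

```scala
object InheritableDemo extends App {
  // per-thread value with a default; child threads inherit the
  // parent's value as it was when the child thread was created
  val pool = new InheritableThreadLocal[String] {
    override def initialValue(): String = "default"
  }

  pool.set("myPool")

  val child = new Thread(() => {
    // inherited from the parent thread that created this thread
    println(s"child sees: ${pool.get}") // prints "child sees: myPool"
  })
  child.start()
  child.join()
}
```

Note that inheritance happens once, at thread creation: if the parent changes the value after spawning the child, the child keeps the old copy.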
setLocalProperty(key: String, value: String): Unit
setLocalProperty sets the key local property to value. If value is null, the key property is removed from localProperties.
getLocalProperty(key: String): String
getLocalProperty gets a local property by key in this thread. It returns null if the key is missing.
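The set/get round trip can be sketched as follows, assuming a live SparkContext `sc`; the keys `team` and `missing` are made-up for illustration.

```scala
// set a local property on the current thread
sc.setLocalProperty("team", "analytics")

sc.getLocalProperty("team")    // returns "analytics"
sc.getLocalProperty("missing") // returns null: the key was never set

// setting a key to null removes it again
sc.setLocalProperty("team", null)
sc.getLocalProperty("team")    // returns null
```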
getLocalProperties is a private[spark] method that gives access to localProperties.
setLocalProperties(props: Properties): Unit
setLocalProperties is a private[spark] method that sets props as localProperties.