LocalSchedulerBackend acts as a "cluster manager" for local mode, offering resources on the single worker it manages, i.e. it calls `TaskSchedulerImpl.resourceOffers(offers)` with `offers` being a single-element collection containing one WorkerOffer.
|WorkerOffer represents a resource offer with CPU cores available on an executor.|
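The single-offer behavior described above can be sketched as follows. This is a condensed, illustrative version of what LocalEndpoint does, not the exact Spark source; the literal values for the executor id and hostname and the `freeCores` bookkeeping are simplified assumptions:

```scala
// Sketch: local mode builds exactly one WorkerOffer for the one
// "executor" hosted inside the driver process, then hands it to the
// task scheduler.
import org.apache.spark.scheduler.WorkerOffer

val localExecutorId = "driver"          // local mode's only executor
val localExecutorHostname = "localhost"
var freeCores = totalCores              // cores of the single worker

def reviveOffers(): Unit = {
  // A single-element collection with one WorkerOffer, as described above.
  val offers = IndexedSeq(
    WorkerOffer(localExecutorId, localExecutorHostname, freeCores))
  for (task <- scheduler.resourceOffers(offers).flatten) {
    freeCores -= scheduler.CPUS_PER_TASK  // account for the launched task
    // ...launch the task on the local executor...
  }
}
```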
When LocalSchedulerBackend starts up, it registers a new RpcEndpoint called LocalSchedulerBackendEndpoint that is backed by LocalEndpoint. The single executor is then announced on LiveListenerBus under the executor id
driver (using a SparkListenerExecutorAdded message).
Application ids follow the format
local-[current time millis].
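A minimal illustration of that format (the real id is assigned when the application starts; this snippet only shows how such an id is shaped):

```scala
// Build an application id in the local-mode format.
val appId = s"local-${System.currentTimeMillis}"
```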
The default parallelism is controlled using the spark.default.parallelism property; when the property is not set, local mode falls back to the total number of CPU cores of the single worker.
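For example, the property can be set on the SparkConf when creating the context. The property and class names are real Spark API; the master string, app name, and values are illustrative:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Local mode with 4 cores; override the default parallelism
// (which would otherwise equal totalCores, i.e. 4).
val conf = new SparkConf()
  .setMaster("local[4]")
  .setAppName("local-mode-demo")
  .set("spark.default.parallelism", "8")

val sc = new SparkContext(conf)
// sc.defaultParallelism now reflects the overridden value.
```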