LocalSparkCluster — Single-JVM Spark Standalone Cluster

LocalSparkCluster is a Spark Standalone cluster that runs a master and one or more workers inside a single JVM. It is the cluster manager behind the local-cluster master URL.

Note
The local-cluster master URL matches the local-cluster[numWorkers,coresPerWorker,memoryPerWorker] pattern, where numWorkers, coresPerWorker and memoryPerWorker are comma-separated numbers, e.g. local-cluster[2,1,1024].
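The pattern can be sketched with a regular expression in plain Scala. The regex below is an illustration of the shape of the URL, not Spark's actual pattern:

```scala
// Illustrative parser for local-cluster[numWorkers,coresPerWorker,memoryPerWorker];
// an approximation for demonstration, not Spark's internal regex.
object LocalClusterUrl {
  val Pattern = """local-cluster\[\s*([0-9]+)\s*,\s*([0-9]+)\s*,\s*([0-9]+)\s*\]""".r

  def parse(masterUrl: String): Option[(Int, Int, Int)] = masterUrl match {
    case Pattern(workers, cores, memory) =>
      Some((workers.toInt, cores.toInt, memory.toInt))
    case _ => None
  }
}
```

For example, LocalClusterUrl.parse("local-cluster[2,1,1024]") yields Some((2, 1, 1024)), while a plain local[2] URL does not match.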

LocalSparkCluster can be particularly useful to test distributed operation and fault recovery without spinning up a lot of processes.

LocalSparkCluster is created when SparkContext is created with a local-cluster master URL (and so requested to create the SchedulerBackend and the TaskScheduler).

Table 1. LocalSparkCluster’s Internal Properties (e.g. Registries, Counters and Flags)

Name            Description

localHostname   FIXME
                Used when…​FIXME

masterRpcEnvs   FIXME
                Used when…​FIXME

workerRpcEnvs   FIXME
                Used when…​FIXME

Tip

Enable INFO logging level for org.apache.spark.deploy.LocalSparkCluster logger to see what happens inside.

Add the following line to conf/log4j.properties:

log4j.logger.org.apache.spark.deploy.LocalSparkCluster=INFO

Refer to Logging.

Creating LocalSparkCluster Instance

LocalSparkCluster takes the following when created:

  • Number of workers

  • CPU cores per worker

  • Memory per worker

  • SparkConf

LocalSparkCluster initializes the internal registries and counters.
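The constructor arguments and the registries from Table 1 can be sketched in plain Scala. This is an assumed shape for illustration, not Spark's actual class; String stands in for Spark's RpcEnv and a plain Map for SparkConf:

```scala
import scala.collection.mutable.ArrayBuffer

// Sketch (assumed shape): the constructor stores its four arguments and
// initializes the registries of master and worker RpcEnvs empty.
class LocalSparkClusterSketch(
    val numWorkers: Int,
    val coresPerWorker: Int,
    val memoryPerWorkerMB: Int,
    val conf: Map[String, String]) {
  // Internal registries, empty until the cluster is started
  val masterRpcEnvs = ArrayBuffer.empty[String]
  val workerRpcEnvs = ArrayBuffer.empty[String]
}
```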

start Method

start(): Array[String]

start…​FIXME

Note
start is used when…​FIXME
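Given the signature start(): Array[String], a plausible lifecycle can be sketched in plain Scala: launch one master, then numWorkers workers with the configured cores and memory, and return the master URL(s) for the scheduler to connect to. Everything below except the signature is an assumption; the host, port and registry element types are placeholders:

```scala
import scala.collection.mutable.ArrayBuffer

// Hedged sketch of start(), not Spark's actual implementation.
class StartSketch(numWorkers: Int, coresPerWorker: Int, memoryPerWorkerMB: Int) {
  val masterRpcEnvs = ArrayBuffer.empty[String]
  val workerRpcEnvs = ArrayBuffer.empty[String]

  def start(): Array[String] = {
    val masterUrl = "spark://localhost:7077" // placeholder host and port
    masterRpcEnvs += masterUrl
    // One entry per worker, each configured with the per-worker resources
    (1 to numWorkers).foreach { i =>
      workerRpcEnvs += s"worker-$i(cores=$coresPerWorker,memMB=$memoryPerWorkerMB)"
    }
    Array(masterUrl)
  }
}
```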

stop Method

stop(): Unit

stop…​FIXME

Note
stop is used when…​FIXME
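A teardown sketch in plain Scala: a natural ordering is to shut down the worker RpcEnvs before the master RpcEnvs, so workers do not try to re-register with a vanished master, and then clear both registries. Both the ordering and the RpcEnvLike trait are assumptions standing in for Spark's RpcEnv:

```scala
import scala.collection.mutable.ArrayBuffer

// Stand-in for Spark's RpcEnv (assumption for this sketch)
trait RpcEnvLike { def shutdown(): Unit }

// Hedged sketch of stop(), not Spark's actual implementation.
class StopSketch(
    val workerRpcEnvs: ArrayBuffer[RpcEnvLike],
    val masterRpcEnvs: ArrayBuffer[RpcEnvLike]) {
  def stop(): Unit = {
    workerRpcEnvs.foreach(_.shutdown()) // workers first
    masterRpcEnvs.foreach(_.shutdown()) // then masters
    workerRpcEnvs.clear()
    masterRpcEnvs.clear()
  }
}
```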
