```scala
// DEMO: Example with inner nodes that should be shown as an inner nested tree of this node
val lines = Seq("SaveIntoDataSourceCommand").toDF("line")

// NOTE: There are two CreatableRelationProviders: jdbc and kafka
// jdbc is simpler to use in spark-shell as it does not need --packages
val url = "jdbc:derby:memory:;databaseName=/tmp/test;create=true"
val requiredOpts = Map("url" -> url, "dbtable" -> "lines")

// Use Overwrite SaveMode to make the demo reproducible
import org.apache.spark.sql.SaveMode.Overwrite
lines.write.options(requiredOpts).format("jdbc").mode(Overwrite).save()

// Go to the web UI's SQL tab and review the last executed query
```
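To see the inner nested tree without the web UI, you can also construct the command by hand in the same spark-shell session. This is a sketch only: SaveIntoDataSourceCommand and JdbcRelationProvider are Spark-internal classes (packages assumed as of Spark 2.x) and may change between releases.

```scala
// Sketch: builds on lines and requiredOpts from the demo above
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand
import org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider

val cmd = SaveIntoDataSourceCommand(
  lines.queryExecution.logical, // the logical query plan to save
  new JdbcRelationProvider,     // a CreatableRelationProvider
  requiredOpts,
  SaveMode.Overwrite)

// The query plan is rendered nested under the command (as its inner children)
println(cmd.numberedTreeString)
```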
SaveIntoDataSourceCommand Logical Command
SaveIntoDataSourceCommand is a logical command that, when executed, requests a CreatableRelationProvider data source to save the rows of a structured query (a DataFrame).
SaveIntoDataSourceCommand is created exclusively when DataSource is requested to create a logical command for writing (to a CreatableRelationProvider data source).
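For reference, this is a condensed sketch of the relevant branch of DataSource.planForWriting, paraphrased from Spark 2.x sources (simplified and not runnable standalone):

```scala
// Paraphrase of DataSource.planForWriting (Spark 2.x; simplified)
def planForWriting(mode: SaveMode, data: LogicalPlan): LogicalPlan =
  providingClass.newInstance() match {
    case dataSource: CreatableRelationProvider =>
      SaveIntoDataSourceCommand(data, dataSource, caseInsensitiveOptions, mode)
    case format: FileFormat =>
      // file-based data sources take a different write path
      planForWritingFileFormat(format, mode, data)
    // ...
  }
```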
SaveIntoDataSourceCommand returns the logical query plan when requested for the inner nodes (that should be shown as an inner nested tree of this node).
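Concretely, the command overrides innerChildren to return the logical query plan, which is why plan trees render the query nested under the command (a one-line paraphrase of Spark 2.x sources):

```scala
// The query is an inner child, not a regular child (Spark 2.x)
override protected def innerChildren: Seq[QueryPlan[_]] = Seq(query)
```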
SaveIntoDataSourceCommand redacts the options for the simple description with state prefix:

```text
SaveIntoDataSourceCommand [dataSource], [redacted], [mode]
```
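Redaction is driven by the spark.sql.redaction.options.regex configuration property: options whose keys match the regex have their values masked. A sketch of the simple description, paraphrased from Spark 2.x sources:

```scala
// Paraphrase of SaveIntoDataSourceCommand.simpleString (Spark 2.x)
override def simpleString: String = {
  // mask values of options whose keys match spark.sql.redaction.options.regex
  val redacted = SQLConf.get.redactOptions(options)
  s"SaveIntoDataSourceCommand ${dataSource}, ${redacted}, ${mode}"
}
```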
Executing Logical Command — run Method

```scala
run(
  sparkSession: SparkSession): Seq[Row]
```
NOTE: run is part of the RunnableCommand contract to execute (run) a logical command.
run simply requests the CreatableRelationProvider data source to save the rows of a structured query (a DataFrame).

In the end, run returns an empty Seq[Row] (just to follow the signature and please the Scala compiler).
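Put together, a condensed sketch of run, paraphrased from Spark 2.x sources:

```scala
// Paraphrase of SaveIntoDataSourceCommand.run (Spark 2.x)
override def run(sparkSession: SparkSession): Seq[Row] = {
  // Delegate the actual saving to the CreatableRelationProvider
  dataSource.createRelation(
    sparkSession.sqlContext, mode, options, Dataset.ofRows(sparkSession, query))
  // Nothing to return; the contract requires a Seq[Row]
  Seq.empty[Row]
}
```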
Creating SaveIntoDataSourceCommand Instance

SaveIntoDataSourceCommand takes the following when created:

- Logical query plan (the structured query to save)
- CreatableRelationProvider data source
- Options (as Map[String, String])
- SaveMode
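These correspond to the class signature (a sketch based on Spark 2.x sources; exact modifiers may differ across versions):

```scala
// Sketch of the class signature (Spark 2.x)
case class SaveIntoDataSourceCommand(
    query: LogicalPlan,
    dataSource: CreatableRelationProvider,
    options: Map[String, String],
    mode: SaveMode)
  extends RunnableCommand
```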