Configuration Properties
This page contains the configuration properties of the Hive data source.
`spark.sql.hive.convertMetastoreParquet`

Controls whether to use the built-in Parquet reader and writer for Hive tables with the parquet storage format (instead of Hive SerDe). Default: `true`

Internally, this property enables the RelationConversions logical rule to convert `HiveTableRelation`s to `HadoopFsRelation`s.
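As an illustrative sketch, the conversion can be switched off per session so that Hive SerDe is used instead, e.g. when starting `spark-shell` (assuming a Hive-enabled Spark build):

```
# Sketch: disable the built-in Parquet reader/writer for Hive parquet tables
spark-shell --conf spark.sql.hive.convertMetastoreParquet=false
```

The same key can be set in `spark-defaults.conf` or via `SparkSession.conf` before the first query that touches the table.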
`spark.sql.hive.convertMetastoreParquet.mergeSchema`

Enables trying to merge possibly different but compatible Parquet schemas in different Parquet data files. Default: `false`

This configuration is only effective when spark.sql.hive.convertMetastoreParquet is enabled.
`spark.sql.hive.manageFilesourcePartitions`

Enables metastore partition management for file source tables (filesource partition management). This includes both datasource and converted Hive tables. Default: `true`

When enabled, datasource tables store partitions in the Hive metastore, and the metastore is used to prune partitions during query planning. Use SQLConf.manageFilesourcePartitions method to access the current value.
`spark.sql.hive.metastore.jars`

Location of the jars that should be used to create a HiveClientImpl. Default: `builtin`

Supported locations:

* `builtin` - use the Hive classes bundled with the Spark assembly (requires spark.sql.hive.metastore.version to be the built-in version or unset)
* `maven` - download the Hive jars of the requested version from Maven repositories
* A classpath in the standard JVM format that includes the Hive (and Hadoop) jars of the requested version
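For example, a `spark-defaults.conf` fragment (a sketch, assuming the requested metastore version is available in your Maven repositories) that pairs a non-default metastore version with downloaded jars could look like:

```
# Sketch: talk to a Hive 2.3.3 metastore with Maven-resolved client jars
spark.sql.hive.metastore.version  2.3.3
spark.sql.hive.metastore.jars     maven
```

Using an explicit classpath instead of `maven` avoids the download at startup, at the cost of maintaining the jar directory yourself.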
`spark.sql.hive.metastore.sharedPrefixes`

Comma-separated list of class prefixes that should be loaded using the classloader that is shared between Spark SQL and a specific version of Hive. Default: `com.mysql.jdbc,org.postgresql,com.microsoft.sqlserver,oracle.jdbc`

An example of classes that should be shared are JDBC drivers that are needed to talk to the metastore. Other classes that need to be shared are those that interact with classes that are already shared, e.g. custom appenders used by log4j.
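A sketch of extending the defaults, e.g. when the metastore database is reached through a MariaDB driver (the extra prefix is illustrative; keep the default prefixes, since the value replaces rather than appends):

```
# Sketch: default prefixes plus an additional JDBC driver prefix
spark.sql.hive.metastore.sharedPrefixes  com.mysql.jdbc,org.postgresql,com.microsoft.sqlserver,oracle.jdbc,org.mariadb.jdbc
```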
`spark.sql.hive.metastore.version`

Version of the Hive metastore (and the client classes and jars). Default: `1.2.1`

Supported versions range from `0.12.0` up to and including `2.3.3`.
`spark.sql.hive.metastorePartitionPruning`

When enabled, some predicates are pushed down into the Hive metastore so that non-matching partitions can be eliminated earlier. Default: `true`

This only affects Hive tables that are not converted to filesource relations (based on spark.sql.hive.convertMetastoreParquet and spark.sql.hive.convertMetastoreOrc properties). Use SQLConf.metastorePartitionPruning method to access the current value.
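At runtime the property can be inspected or toggled per session; a minimal spark-shell sketch (assuming an active `SparkSession` named `spark`):

```scala
// Sketch: read the current value of the property
spark.conf.get("spark.sql.hive.metastorePartitionPruning")

// Sketch: disable metastore-side pruning for this session only
spark.sql("SET spark.sql.hive.metastorePartitionPruning=false")
```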