package v2
Type Members
- case class DataSourceV2Relation(table: Table, output: Seq[AttributeReference], catalog: Option[CatalogPlugin], identifier: Option[Identifier], options: CaseInsensitiveStringMap) extends LogicalPlan with LeafNode with MultiInstanceRelation with NamedRelation with ExposesMetadataColumns with Product with Serializable
A logical plan representing a data source v2 table.
- table
The table that this relation represents.
- output
the output attributes of this relation.
- catalog
The catalog plugin for the table; None if no catalog is specified.
- identifier
the identifier for the table. None if no identifier is defined.
- options
The options for this table operation, used to create fresh org.apache.spark.sql.connector.read.ScanBuilder and org.apache.spark.sql.connector.write.WriteBuilder instances.
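Since DataSourceV2Relation is a case class, it can be pattern-matched inside Catalyst rules. The sketch below is illustrative only (the rule name and log message are invented, not part of Spark) and requires a Spark build on the classpath:

```scala
// Hypothetical sketch: matching DataSourceV2Relation in a custom rule.
// `LogV2Tables` is an invented name; Rule extends Logging, so logInfo is available.
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule
import org.apache.spark.sql.execution.datasources.v2.DataSourceV2Relation

object LogV2Tables extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = plan transform {
    case r @ DataSourceV2Relation(table, _, catalog, ident, _) =>
      // Log the resolved table and its catalog (if any), then leave the plan unchanged.
      logInfo(s"Resolved v2 table: ${ident.getOrElse(table.name())} " +
        s"in catalog ${catalog.map(_.name()).getOrElse("<none>")}")
      r
  }
}
```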
- case class DataSourceV2ScanRelation(relation: DataSourceV2Relation, scan: Scan, output: Seq[AttributeReference], keyGroupedPartitioning: Option[Seq[Expression]] = None, ordering: Option[Seq[SortOrder]] = None) extends LogicalPlan with LeafNode with NamedRelation with Product with Serializable
A logical plan for a DSv2 table with a scan already created.
This is used in the optimizer to push filters and projections down before conversion to a physical plan, so that the statistics used by the optimizer account for the filters and projections that will be pushed down.
- relation
the DataSourceV2Relation this scan was created from
- scan
a DSv2 Scan
- output
the output attributes of this relation
- keyGroupedPartitioning
if set, the partitioning expressions that are used to split the rows in the scan across different partitions
- ordering
if set, the ordering provided by the scan
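After pushdown, the scan relation can be read back out of an optimized plan. The helper below is an invented example (not part of Spark's API) that assumes a Spark build on the classpath:

```scala
// Illustrative helper: summarize the pushed-down scan in an optimized plan.
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanRelation

def describeScan(plan: LogicalPlan): Option[String] = plan.collectFirst {
  case DataSourceV2ScanRelation(relation, scan, output, _, _) =>
    // scan.description() comes from the connector's Scan implementation.
    s"${relation.table.name()}: ${scan.description()} -> " +
      output.map(_.name).mkString(", ")
}
```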
- case class StreamingDataSourceV2Relation(output: Seq[Attribute], scan: Scan, stream: SparkDataStream, catalog: Option[CatalogPlugin], identifier: Option[Identifier], startOffset: Option[Offset] = None, endOffset: Option[Offset] = None) extends LogicalPlan with LeafNode with MultiInstanceRelation with Product with Serializable
A specialization of DataSourceV2Relation with the streaming bit set to true.
Note that this plan has a mutable reader, so Spark won't apply operator push-down to it, to avoid making the plan mutable. We should consolidate this plan with DataSourceV2Relation once we figure out how to apply operator push-down for streaming data sources.
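During micro-batch execution the start and end offsets are populated, and the relation can be matched on them. The function below is an illustrative sketch (the name and message are invented) that requires a Spark build on the classpath:

```scala
// Hypothetical sketch: inspecting the offset range of a streaming v2 relation.
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.execution.datasources.v2.StreamingDataSourceV2Relation

def describeBatch(plan: LogicalPlan): Option[String] = plan.collectFirst {
  // Match only when both offsets are set, i.e. the batch range is known.
  case StreamingDataSourceV2Relation(_, _, stream, _, _, Some(start), Some(end)) =>
    s"Reading $stream from offset $start to $end"
}
```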
Value Members
- object DataSourceV2Implicits
- object DataSourceV2Relation extends Serializable