| Interface | Description |
|---|---|
| org.apache.flink.table.sinks.AppendStreamTableSink | This interface has been replaced by DynamicTableSink. The new interface consumes internal data structures and only works with the Blink planner. See FLIP-95 for more information. |
| org.apache.flink.table.api.bridge.java.BatchTableEnvironment | BatchTableEnvironment will be dropped in Flink 1.14 because it only supports the old planner. Use the unified TableEnvironment instead, which supports both batch and streaming. More advanced operations previously covered by the DataSet API can now use the DataStream API in BATCH execution mode. |
| org.apache.flink.table.sinks.BatchTableSink | Use OutputFormatTableSink instead. |
| org.apache.flink.table.factories.BatchTableSinkFactory | This interface has been replaced by DynamicTableSinkFactory. The new interface creates instances of DynamicTableSink and only works with the Blink planner. See FLIP-95 for more information. |
| org.apache.flink.table.sources.BatchTableSource | Use InputFormatTableSource instead. |
| org.apache.flink.table.factories.BatchTableSourceFactory | This interface has been replaced by DynamicTableSourceFactory. The new interface creates instances of DynamicTableSource and only works with the Blink planner. See FLIP-95 for more information. |
| org.apache.flink.table.sinks.RetractStreamTableSink | This interface has been replaced by DynamicTableSink. The new interface consumes internal data structures and only works with the Blink planner. See FLIP-95 for more information. |
| org.apache.flink.table.sinks.StreamTableSink | This interface has been replaced by DynamicTableSink. The new interface consumes internal data structures and only works with the Blink planner. See FLIP-95 for more information. |
| org.apache.flink.table.factories.StreamTableSinkFactory | This interface has been replaced by DynamicTableSinkFactory. The new interface creates instances of DynamicTableSink and only works with the Blink planner. See FLIP-95 for more information. |
| org.apache.flink.table.sources.StreamTableSource | This interface has been replaced by DynamicTableSource. The new interface produces internal data structures and only works with the Blink planner. See FLIP-95 for more information. |
| org.apache.flink.table.factories.StreamTableSourceFactory | This interface has been replaced by DynamicTableSourceFactory. The new interface creates instances of DynamicTableSource and only works with the Blink planner. See FLIP-95 for more information. |
| org.apache.flink.table.sinks.UpsertStreamTableSink | This interface has been replaced by DynamicTableSink. The new interface consumes internal data structures and only works with the Blink planner. See FLIP-95 for more information. |
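Since BatchTableEnvironment is slated for removal, batch programs migrate to the unified TableEnvironment created in BATCH execution mode. A minimal sketch of that setup is shown below; it assumes the Flink table dependencies are on the classpath, and the commented alternatives are only illustrative:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UnifiedTableEnvExample {
    public static void main(String[] args) {
        // Replaces BatchTableEnvironment: the unified TableEnvironment
        // with the Blink planner in batch execution mode.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .inBatchMode()
                .build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // The same entry point covers streaming jobs:
        //   EnvironmentSettings.newInstance().inStreamingMode().build()
        // Tables are then registered via DDL, e.g. tEnv.executeSql("CREATE TABLE ...").
    }
}
```

Workloads that previously needed the DataSet API directly can instead run the DataStream API with its runtime mode set to BATCH.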
| Class | Description |
|---|---|
| org.apache.flink.table.sources.InputFormatTableSource | This interface has been replaced by DynamicTableSource. The new interface produces internal data structures and only works with the Blink planner. See FLIP-95 for more information. |
| org.apache.flink.table.descriptors.OldCsv | Use the RFC-compliant Csv format in the dedicated flink-formats/flink-csv module instead when writing to Kafka. |
| org.apache.flink.table.descriptors.OldCsvValidator | Use the RFC-compliant Csv format in the dedicated flink-formats/flink-csv module instead. |
| org.apache.flink.table.sinks.OutputFormatTableSink | This interface has been replaced by DynamicTableSink. The new interface consumes internal data structures and only works with the Blink planner. See FLIP-95 for more information. |
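Instead of the OldCsv descriptor, the RFC-compliant csv format from flink-formats/flink-csv is declared directly in a CREATE TABLE DDL. A sketch under assumed names (the table name, columns, connector, and path below are placeholders, not from the source):

```sql
CREATE TABLE csv_sink (
  id   BIGINT,
  name STRING
) WITH (
  'connector' = 'filesystem',        -- placeholder connector for illustration
  'path'      = 'file:///tmp/csv-out',
  'format'    = 'csv'                -- RFC-compliant format from flink-formats/flink-csv
);
```

Note that the format schema no longer needs to be declared separately: it is derived from the table schema by default.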
| Method | Description |
|---|---|
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.connect(ConnectorDescriptor) | The SQL CREATE TABLE DDL is richer than this part of the API. This method might be refactored in future versions. Use executeSql(ddl) to register a table instead. |
| org.apache.flink.table.api.bridge.java.BatchTableEnvironment.connect(ConnectorDescriptor) | The SQL CREATE TABLE DDL is richer than this part of the API. This method might be refactored in future versions. Use executeSql(ddl) to register a table instead. |
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.create(StreamExecutionEnvironment, TableConfig) | Use StreamTableEnvironment.create(StreamExecutionEnvironment) and TableEnvironment.getConfig() for manipulating TableConfig. |
| org.apache.flink.table.factories.StreamTableSinkFactory.createStreamTableSink(Map<String, String>) | Context contains more information and already includes the table schema. Use TableSinkFactory.createTableSink(Context) instead. |
| org.apache.flink.table.factories.StreamTableSourceFactory.createStreamTableSource(Map<String, String>) | Context contains more information and already includes the table schema. Use TableSourceFactory.createTableSource(Context) instead. |
| org.apache.flink.table.api.bridge.java.BatchTableEnvironment.createTemporaryView(String, DataSet<T>, String) | |
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.createTemporaryView(String, DataStream<T>, String) | |
| org.apache.flink.table.descriptors.OldCsv.deriveSchema() | Deriving the format schema from the table's schema is now the default behavior, so there is no need to declare schema derivation explicitly. |
| org.apache.flink.table.descriptors.SchemaValidator.deriveTableSinkSchema(DescriptorProperties) | This method combines two separate concepts: table schema and field mapping. It should be split into two methods once the corresponding interfaces are supported (see FLINK-9870). |
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.execute(String) | Use StreamExecutionEnvironment.execute(String) instead, or directly call the execute methods of the Table API such as TableEnvironment.executeSql(String). |
| org.apache.flink.table.descriptors.OldCsv.field(String, DataType) | OldCsv derives the schema from the table schema by default, so it is no longer necessary to declare the format schema explicitly. This method will be removed in the future. |
| org.apache.flink.table.descriptors.OldCsv.field(String, String) | OldCsv derives the schema from the table schema by default, so it is no longer necessary to declare the format schema explicitly. This method will be removed in the future. |
| org.apache.flink.table.sources.CsvTableSource.Builder.field(String, TypeInformation<?>) | This method will be removed in future versions because it uses the old type system. Use CsvTableSource.Builder.field(String, DataType) instead, which uses the new type system based on DataTypes. Make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| org.apache.flink.table.descriptors.OldCsv.field(String, TypeInformation<?>) | OldCsv derives the schema from the table schema by default, so it is no longer necessary to declare the format schema explicitly. This method will be removed in the future. |
| org.apache.flink.table.api.bridge.java.BatchTableEnvironment.fromDataSet(DataSet<T>, String) | |
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.fromDataStream(DataStream<T>, String) | |
| org.apache.flink.table.api.bridge.java.BatchTableEnvironment.registerDataSet(String, DataSet<T>) | |
| org.apache.flink.table.api.bridge.java.BatchTableEnvironment.registerDataSet(String, DataSet<T>, String) | |
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.registerDataStream(String, DataStream<T>) | |
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.registerDataStream(String, DataStream<T>, String) | |
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.registerFunction(String, AggregateFunction<T, ACC>) | Use TableEnvironment.createTemporarySystemFunction(String, UserDefinedFunction) instead. Note that the new method also uses the new type system and reflective extraction logic, so it might be necessary to update the function implementation as well. See the documentation of AggregateFunction for more information on the new function design. |
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.registerFunction(String, TableAggregateFunction<T, ACC>) | Use TableEnvironment.createTemporarySystemFunction(String, UserDefinedFunction) instead. Note that the new method also uses the new type system and reflective extraction logic, so it might be necessary to update the function implementation as well. See the documentation of TableAggregateFunction for more information on the new function design. |
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.registerFunction(String, TableFunction<T>) | Use TableEnvironment.createTemporarySystemFunction(String, UserDefinedFunction) instead. Note that the new method also uses the new type system and reflective extraction logic, so it might be necessary to update the function implementation as well. See the documentation of TableFunction for more information on the new function design. |
| org.apache.flink.table.descriptors.OldCsv.schema(TableSchema) | OldCsv derives the schema from the table schema by default, so it is no longer necessary to declare the format schema explicitly. This method will be removed in the future. |
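The registerFunction migration above can be sketched as follows. This is a minimal illustration, not the authoritative migration path: the HashCode function is a hypothetical UDF invented for the example, and the snippet assumes the Flink table dependencies are on the classpath.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;

public class RegisterFunctionMigration {

    // Hypothetical UDF, used for illustration only.
    public static class HashCode extends ScalarFunction {
        public int eval(String s) {
            return s.hashCode();
        }
    }

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Old style (deprecated): tEnv.registerFunction("HashCode", new HashCode());
        // New style: the argument and result types are extracted reflectively
        // from eval(), using the new type system.
        tEnv.createTemporarySystemFunction("HashCode", HashCode.class);
    }
}
```

Because the new method relies on reflective type extraction, existing AggregateFunction, TableAggregateFunction, and TableFunction implementations may need their signatures or type hints updated before they register cleanly.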
Copyright © 2014–2021 The Apache Software Foundation. All rights reserved.