@Generated(value="software.amazon.awssdk:codegen") public final class KafkaSettings extends Object implements SdkPojo, Serializable, ToCopyableBuilder<KafkaSettings.Builder,KafkaSettings>
Provides information that describes an Apache Kafka endpoint. This information includes the output format of records applied to the endpoint and details of transaction and control table data.
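As with other SDK for Java 2.x model classes, instances are immutable and created through the fluent builder. A minimal sketch of configuring a Kafka target endpoint this way (the broker hosts and topic name are illustrative, and the snippet assumes the AWS SDK for Java v2 DMS module is on the classpath; shown untested):

```java
import software.amazon.awssdk.services.databasemigration.model.KafkaSettings;
import software.amazon.awssdk.services.databasemigration.model.MessageFormatValue;

// Illustrative values only; substitute your own broker list and topic.
KafkaSettings settings = KafkaSettings.builder()
        .broker("b-1.example.amazonaws.com:9092,b-2.example.amazonaws.com:9092")
        .topic("dms-migration-topic")
        .messageFormat(MessageFormatValue.JSON)
        .includeTransactionDetails(true)
        .build();
```

Because the class implements `ToCopyableBuilder`, an existing instance can be modified with `settings.toBuilder()...build()` rather than rebuilding from scratch.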
| Modifier and Type | Class and Description |
|---|---|
| `static interface` | `KafkaSettings.Builder` |
| Modifier and Type | Method and Description |
|---|---|
| `String` | `broker()`: A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. |
| `static KafkaSettings.Builder` | `builder()` |
| `boolean` | `equals(Object obj)` |
| `boolean` | `equalsBySdkFields(Object obj)` |
| `<T> Optional<T>` | `getValueForField(String fieldName, Class<T> clazz)` |
| `int` | `hashCode()` |
| `Boolean` | `includeControlDetails()`: Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. |
| `Boolean` | `includeNullAndEmpty()`: Include NULL and empty columns for records migrated to the endpoint. |
| `Boolean` | `includePartitionValue()`: Shows the partition value within the Kafka message output unless the partition type is `schema-table-type`. |
| `Boolean` | `includeTableAlterOperations()`: Includes any data definition language (DDL) operations that change the table in the control data, such as `rename-table`, `drop-table`, `add-column`, `drop-column`, and `rename-column`. |
| `Boolean` | `includeTransactionDetails()`: Provides detailed transaction information from the source database. |
| `MessageFormatValue` | `messageFormat()`: The output format for the records created on the endpoint. |
| `String` | `messageFormatAsString()`: The output format for the records created on the endpoint. |
| `Integer` | `messageMaxBytes()`: The maximum size in bytes for records created on the endpoint. The default is 1,000,000. |
| `Boolean` | `noHexPrefix()`: Set this optional parameter to `true` to avoid adding a '0x' prefix to raw data in hexadecimal format. |
| `Boolean` | `partitionIncludeSchemaTable()`: Prefixes schema and table names to partition values, when the partition type is `primary-key-type`. |
| `KafkaSaslMechanism` | `saslMechanism()`: For SASL/SSL authentication, DMS supports the `SCRAM-SHA-512` mechanism by default. |
| `String` | `saslMechanismAsString()`: For SASL/SSL authentication, DMS supports the `SCRAM-SHA-512` mechanism by default. |
| `String` | `saslPassword()`: The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication. |
| `String` | `saslUsername()`: The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication. |
| `List<SdkField<?>>` | `sdkFields()` |
| `KafkaSecurityProtocol` | `securityProtocol()`: Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). |
| `String` | `securityProtocolAsString()`: Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). |
| `static Class<? extends KafkaSettings.Builder>` | `serializableBuilderClass()` |
| `String` | `sslCaCertificateArn()`: The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint. |
| `String` | `sslClientCertificateArn()`: The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint. |
| `String` | `sslClientKeyArn()`: The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint. |
| `String` | `sslClientKeyPassword()`: The password for the client private key used to securely connect to a Kafka target endpoint. |
| `KafkaSslEndpointIdentificationAlgorithm` | `sslEndpointIdentificationAlgorithm()`: Sets hostname verification for the certificate. |
| `String` | `sslEndpointIdentificationAlgorithmAsString()`: Sets hostname verification for the certificate. |
| `KafkaSettings.Builder` | `toBuilder()` |
| `String` | `topic()`: The topic to which you migrate the data. |
| `String` | `toString()`: Returns a string representation of this object. |
Methods inherited from class java.lang.Object: clone, finalize, getClass, notify, notifyAll, wait, wait, wait

public final String broker()
A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance.
Specify each broker location in the form broker-hostname-or-ip:port. For example,
"ec2-12-345-678-901.compute-1.amazonaws.com:2345". For more information and examples of specifying a
list of broker locations, see Using Apache Kafka as a target for
Database Migration Service in the Database Migration Service User Guide.
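Each entry in the broker string is a host:port pair. A stdlib-only sketch of splitting such a value into individual endpoints (the second hostname is illustrative, not a real broker):

```java
import java.util.ArrayList;
import java.util.List;

public class BrokerList {
    // Split a comma-separated broker string ("host:port,host:port,...")
    // into its individual "host:port" entries, trimming stray whitespace.
    public static List<String> parse(String brokers) {
        List<String> entries = new ArrayList<>();
        for (String entry : brokers.split(",")) {
            entries.add(entry.trim());
        }
        return entries;
    }

    public static void main(String[] args) {
        String brokers = "ec2-12-345-678-901.compute-1.amazonaws.com:2345, second-host.example.com:9092";
        for (String entry : parse(brokers)) {
            int sep = entry.lastIndexOf(':');
            System.out.println("host=" + entry.substring(0, sep) + " port=" + entry.substring(sep + 1));
        }
    }
}
```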
public final String topic()
The topic to which you migrate the data. If you don't specify a topic, DMS specifies
"kafka-default-topic" as the migration topic.
"kafka-default-topic" as the migration topic.public final MessageFormatValue messageFormat()
The output format for the records created on the endpoint. The message format is JSON (default) or
JSON_UNFORMATTED (a single line with no tab).
If the service returns an enum value that is not available in the current SDK version, messageFormat
will return MessageFormatValue.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available
from messageFormatAsString().
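The UNKNOWN_TO_SDK_VERSION behavior described above can be mimicked with a toy enum. This is a sketch of the pattern, not the SDK's actual code; the raw strings "json" and "json-unformatted" are assumed to mirror the documented formats, and the fromValue helper is a hypothetical stand-in for the SDK's own:

```java
import java.util.Arrays;

public class EnumDemo {
    // Toy version of the SDK's enum pattern: unrecognized raw values map to
    // UNKNOWN_TO_SDK_VERSION instead of throwing, so values added on the
    // service side don't break older SDK clients.
    enum MessageFormat {
        JSON("json"), JSON_UNFORMATTED("json-unformatted"), UNKNOWN_TO_SDK_VERSION(null);

        final String raw;
        MessageFormat(String raw) { this.raw = raw; }

        static MessageFormat fromValue(String value) {
            return Arrays.stream(values())
                    .filter(v -> v.raw != null && v.raw.equals(value))
                    .findFirst()
                    .orElse(UNKNOWN_TO_SDK_VERSION);
        }
    }

    public static void main(String[] args) {
        System.out.println(MessageFormat.fromValue("json"));  // JSON
        System.out.println(MessageFormat.fromValue("avro"));  // UNKNOWN_TO_SDK_VERSION
    }
}
```

This is why the SDK pairs each enum getter with an AsString variant: the raw string survives even when the enum constant is unknown to the client's SDK version.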
public final String messageFormatAsString()
The output format for the records created on the endpoint. The message format is JSON (default) or
JSON_UNFORMATTED (a single line with no tab).
If the service returns an enum value that is not available in the current SDK version, messageFormat
will return MessageFormatValue.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available
from messageFormatAsString().
public final Boolean includeTransactionDetails()
Provides detailed transaction information from the source database. This information includes a commit timestamp,
a log position, and values for transaction_id, previous transaction_id, and
transaction_record_id (the record offset within a transaction). The default is false.
public final Boolean includePartitionValue()
Shows the partition value within the Kafka message output unless the partition type is
schema-table-type. The default is false.
public final Boolean partitionIncludeSchemaTable()
Prefixes schema and table names to partition values, when the partition type is primary-key-type.
Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has
thousands of tables and each table has only limited range for a primary key. In this case, the same primary key
is sent from thousands of tables to the same partition, which causes throttling. The default is
false.
public final Boolean includeTableAlterOperations()
Includes any data definition language (DDL) operations that change the table in the control data, such as
rename-table, drop-table, add-column, drop-column, and
rename-column. The default is false.
public final Boolean includeControlDetails()
Shows detailed control information for table definition, column definition, and table and column changes in the
Kafka message output. The default is false.
public final Integer messageMaxBytes()
The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
public final Boolean includeNullAndEmpty()
Include NULL and empty columns for records migrated to the endpoint. The default is false.
public final KafkaSecurityProtocol securityProtocol()
Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include
ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl
requires SaslUsername and SaslPassword.
If the service returns an enum value that is not available in the current SDK version, securityProtocol
will return KafkaSecurityProtocol.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is
available from securityProtocolAsString().
public final String securityProtocolAsString()
Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include
ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl
requires SaslUsername and SaslPassword.
If the service returns an enum value that is not available in the current SDK version, securityProtocol
will return KafkaSecurityProtocol.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is
available from securityProtocolAsString().
public final String sslClientCertificateArn()
The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
public final String sslClientKeyArn()
The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
public final String sslClientKeyPassword()
The password for the client private key used to securely connect to a Kafka target endpoint.
public final String sslCaCertificateArn()
The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
public final String saslUsername()
The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
public final String saslPassword()
The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
public final Boolean noHexPrefix()
Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format.
For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an
Oracle source to a Kafka target. Use the NoHexPrefix endpoint setting to enable migration of RAW
data type columns without adding the '0x' prefix.
public final KafkaSaslMechanism saslMechanism()
For SASL/SSL authentication, DMS supports the SCRAM-SHA-512 mechanism by default. DMS versions 3.5.0
and later also support the PLAIN mechanism. To use the PLAIN mechanism, set this
parameter to PLAIN.
If the service returns an enum value that is not available in the current SDK version, saslMechanism
will return KafkaSaslMechanism.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available
from saslMechanismAsString().
public final String saslMechanismAsString()
For SASL/SSL authentication, DMS supports the SCRAM-SHA-512 mechanism by default. DMS versions 3.5.0
and later also support the PLAIN mechanism. To use the PLAIN mechanism, set this
parameter to PLAIN.
If the service returns an enum value that is not available in the current SDK version, saslMechanism
will return KafkaSaslMechanism.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available
from saslMechanismAsString().
public final KafkaSslEndpointIdentificationAlgorithm sslEndpointIdentificationAlgorithm()
Sets hostname verification for the certificate. This setting is supported in DMS version 3.5.1 and later.
If the service returns an enum value that is not available in the current SDK version,
sslEndpointIdentificationAlgorithm will return
KafkaSslEndpointIdentificationAlgorithm.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is
available from sslEndpointIdentificationAlgorithmAsString().
public final String sslEndpointIdentificationAlgorithmAsString()
Sets hostname verification for the certificate. This setting is supported in DMS version 3.5.1 and later.
If the service returns an enum value that is not available in the current SDK version,
sslEndpointIdentificationAlgorithm will return
KafkaSslEndpointIdentificationAlgorithm.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is
available from sslEndpointIdentificationAlgorithmAsString().
public KafkaSettings.Builder toBuilder()
Specified by: toBuilder in interface ToCopyableBuilder<KafkaSettings.Builder,KafkaSettings>

public static KafkaSettings.Builder builder()
public static Class<? extends KafkaSettings.Builder> serializableBuilderClass()
public final boolean equalsBySdkFields(Object obj)
Specified by: equalsBySdkFields in interface SdkPojo

public final String toString()
Copyright © 2023. All rights reserved.