@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class JDBCConnectorOptions extends Object implements Serializable, Cloneable, StructuredPojo
Additional connection options for the connector.
| Constructor and Description |
|---|
| `JDBCConnectorOptions()` |
| Modifier and Type | Method and Description |
|---|---|
| `JDBCConnectorOptions` | `addDataTypeMappingEntry(String key, String value)` Add a single DataTypeMapping entry. |
| `JDBCConnectorOptions` | `clearDataTypeMappingEntries()` Removes all the entries added into DataTypeMapping. |
| `JDBCConnectorOptions` | `clone()` |
| `boolean` | `equals(Object obj)` |
| `Map<String,String>` | `getDataTypeMapping()` Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. |
| `String` | `getFilterPredicate()` Extra condition clause to filter data from the source. |
| `List<String>` | `getJobBookmarkKeys()` The names of the job bookmark keys on which to sort. |
| `String` | `getJobBookmarkKeysSortOrder()` Specifies an ascending or descending sort order. |
| `Long` | `getLowerBound()` The minimum value of `partitionColumn` that is used to decide partition stride. |
| `Long` | `getNumPartitions()` The number of partitions. |
| `String` | `getPartitionColumn()` The name of an integer column that is used for partitioning. |
| `Long` | `getUpperBound()` The maximum value of `partitionColumn` that is used to decide partition stride. |
| `int` | `hashCode()` |
| `void` | `marshall(ProtocolMarshaller protocolMarshaller)` Marshalls this structured data using the given `ProtocolMarshaller`. |
| `void` | `setDataTypeMapping(Map<String,String> dataTypeMapping)` Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. |
| `void` | `setFilterPredicate(String filterPredicate)` Extra condition clause to filter data from the source. |
| `void` | `setJobBookmarkKeys(Collection<String> jobBookmarkKeys)` The names of the job bookmark keys on which to sort. |
| `void` | `setJobBookmarkKeysSortOrder(String jobBookmarkKeysSortOrder)` Specifies an ascending or descending sort order. |
| `void` | `setLowerBound(Long lowerBound)` The minimum value of `partitionColumn` that is used to decide partition stride. |
| `void` | `setNumPartitions(Long numPartitions)` The number of partitions. |
| `void` | `setPartitionColumn(String partitionColumn)` The name of an integer column that is used for partitioning. |
| `void` | `setUpperBound(Long upperBound)` The maximum value of `partitionColumn` that is used to decide partition stride. |
| `String` | `toString()` Returns a string representation of this object. |
| `JDBCConnectorOptions` | `withDataTypeMapping(Map<String,String> dataTypeMapping)` Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. |
| `JDBCConnectorOptions` | `withFilterPredicate(String filterPredicate)` Extra condition clause to filter data from the source. |
| `JDBCConnectorOptions` | `withJobBookmarkKeys(Collection<String> jobBookmarkKeys)` The names of the job bookmark keys on which to sort. |
| `JDBCConnectorOptions` | `withJobBookmarkKeys(String... jobBookmarkKeys)` The names of the job bookmark keys on which to sort. |
| `JDBCConnectorOptions` | `withJobBookmarkKeysSortOrder(String jobBookmarkKeysSortOrder)` Specifies an ascending or descending sort order. |
| `JDBCConnectorOptions` | `withLowerBound(Long lowerBound)` The minimum value of `partitionColumn` that is used to decide partition stride. |
| `JDBCConnectorOptions` | `withNumPartitions(Long numPartitions)` The number of partitions. |
| `JDBCConnectorOptions` | `withPartitionColumn(String partitionColumn)` The name of an integer column that is used for partitioning. |
| `JDBCConnectorOptions` | `withUpperBound(Long upperBound)` The maximum value of `partitionColumn` that is used to decide partition stride. |
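Every property has both a `setX` mutator and a fluent `withX` counterpart that returns the object itself, so options can be configured in one chained expression. The sketch below is a simplified stand-in for the class (hypothetical names, not the SDK class itself, which requires the AWS SDK on the classpath) illustrating how this wither pattern behaves:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in illustrating the fluent "wither" pattern used by
// JDBCConnectorOptions: each withX method sets a field and returns this,
// so calls can be chained. NOT the real SDK class.
class OptionsSketch {
    private String filterPredicate;
    private Long numPartitions;
    private final Map<String, String> dataTypeMapping = new HashMap<>();

    public OptionsSketch withFilterPredicate(String filterPredicate) {
        this.filterPredicate = filterPredicate;
        return this; // returning this enables method chaining
    }

    public OptionsSketch withNumPartitions(Long numPartitions) {
        this.numPartitions = numPartitions;
        return this;
    }

    // Mirrors addDataTypeMappingEntry: adds one key/value pair, returns this.
    public OptionsSketch addDataTypeMappingEntry(String key, String value) {
        dataTypeMapping.put(key, value);
        return this;
    }

    public String getFilterPredicate() { return filterPredicate; }
    public Long getNumPartitions() { return numPartitions; }
    public Map<String, String> getDataTypeMapping() { return dataTypeMapping; }

    public static void main(String[] args) {
        OptionsSketch opts = new OptionsSketch()
                .withFilterPredicate("BillingCity='Mountain View'")
                .withNumPartitions(4L)
                .addDataTypeMappingEntry("FLOAT", "STRING");
        System.out.println(opts.getFilterPredicate());
    }
}
```

The real class follows the same shape: `withX` delegates to `setX` and returns `this`, which is why the `with*` methods in the table above all return `JDBCConnectorOptions`.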
public void setFilterPredicate(String filterPredicate)

Extra condition clause to filter data from the source. For example: `BillingCity='Mountain View'`. When using a query instead of a table name, validate that the query works with the specified `filterPredicate`.

`filterPredicate` - Extra condition clause to filter data from the source. For example: `BillingCity='Mountain View'`. When using a query instead of a table name, validate that the query works with the specified `filterPredicate`.
public String getFilterPredicate()

Extra condition clause to filter data from the source. For example: `BillingCity='Mountain View'`. When using a query instead of a table name, validate that the query works with the specified `filterPredicate`.

Returns: Extra condition clause to filter data from the source. For example: `BillingCity='Mountain View'`. When using a query instead of a table name, validate that the query works with the specified `filterPredicate`.
public JDBCConnectorOptions withFilterPredicate(String filterPredicate)

Extra condition clause to filter data from the source. For example: `BillingCity='Mountain View'`. When using a query instead of a table name, validate that the query works with the specified `filterPredicate`.

`filterPredicate` - Extra condition clause to filter data from the source. For example: `BillingCity='Mountain View'`. When using a query instead of a table name, validate that the query works with the specified `filterPredicate`.
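Conceptually, the filter predicate is an extra condition appended to the query that reads from the source. The sketch below (illustrative only, not how the Glue connector builds its queries internally) shows the semantics:

```java
// Sketch of how a filter predicate is typically combined with a generated
// query: appended as a WHERE clause, or AND-ed onto an existing one.
// Illustrative only -- not the Glue connector's internal query builder.
class FilterPredicateSketch {
    static String applyPredicate(String baseQuery, String filterPredicate) {
        if (filterPredicate == null || filterPredicate.isEmpty()) {
            return baseQuery; // no predicate: query is unchanged
        }
        // If the base query already filters rows, AND the predicate on;
        // otherwise start a new WHERE clause.
        String joiner = baseQuery.toUpperCase().contains(" WHERE ") ? " AND " : " WHERE ";
        return baseQuery + joiner + filterPredicate;
    }

    public static void main(String[] args) {
        // -> SELECT * FROM billing WHERE BillingCity='Mountain View'
        System.out.println(applyPredicate("SELECT * FROM billing", "BillingCity='Mountain View'"));
    }
}
```

This is also why the docs above recommend validating a custom query against the predicate: the combined statement must still be valid SQL for your source.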
public void setPartitionColumn(String partitionColumn)

The name of an integer column that is used for partitioning. This option works only when it's included with `lowerBound`, `upperBound`, and `numPartitions`. This option works the same way as in the Spark SQL JDBC reader.

`partitionColumn` - The name of an integer column that is used for partitioning. This option works only when it's included with `lowerBound`, `upperBound`, and `numPartitions`. This option works the same way as in the Spark SQL JDBC reader.

public String getPartitionColumn()
The name of an integer column that is used for partitioning. This option works only when it's included with `lowerBound`, `upperBound`, and `numPartitions`. This option works the same way as in the Spark SQL JDBC reader.

Returns: The name of an integer column that is used for partitioning. This option works only when it's included with `lowerBound`, `upperBound`, and `numPartitions`. This option works the same way as in the Spark SQL JDBC reader.

public JDBCConnectorOptions withPartitionColumn(String partitionColumn)
The name of an integer column that is used for partitioning. This option works only when it's included with `lowerBound`, `upperBound`, and `numPartitions`. This option works the same way as in the Spark SQL JDBC reader.

`partitionColumn` - The name of an integer column that is used for partitioning. This option works only when it's included with `lowerBound`, `upperBound`, and `numPartitions`. This option works the same way as in the Spark SQL JDBC reader.

public void setLowerBound(Long lowerBound)
The minimum value of `partitionColumn` that is used to decide partition stride.

`lowerBound` - The minimum value of `partitionColumn` that is used to decide partition stride.

public Long getLowerBound()
The minimum value of `partitionColumn` that is used to decide partition stride.

Returns: The minimum value of `partitionColumn` that is used to decide partition stride.

public JDBCConnectorOptions withLowerBound(Long lowerBound)
The minimum value of `partitionColumn` that is used to decide partition stride.

`lowerBound` - The minimum value of `partitionColumn` that is used to decide partition stride.

public void setUpperBound(Long upperBound)
The maximum value of `partitionColumn` that is used to decide partition stride.

`upperBound` - The maximum value of `partitionColumn` that is used to decide partition stride.

public Long getUpperBound()
The maximum value of `partitionColumn` that is used to decide partition stride.

Returns: The maximum value of `partitionColumn` that is used to decide partition stride.

public JDBCConnectorOptions withUpperBound(Long upperBound)
The maximum value of `partitionColumn` that is used to decide partition stride.

`upperBound` - The maximum value of `partitionColumn` that is used to decide partition stride.

public void setNumPartitions(Long numPartitions)
The number of partitions. This value, along with `lowerBound` (inclusive) and `upperBound` (exclusive), forms partition strides for generated `WHERE` clause expressions that are used to split the `partitionColumn`.

`numPartitions` - The number of partitions. This value, along with `lowerBound` (inclusive) and `upperBound` (exclusive), forms partition strides for generated `WHERE` clause expressions that are used to split the `partitionColumn`.

public Long getNumPartitions()
The number of partitions. This value, along with `lowerBound` (inclusive) and `upperBound` (exclusive), forms partition strides for generated `WHERE` clause expressions that are used to split the `partitionColumn`.

Returns: The number of partitions. This value, along with `lowerBound` (inclusive) and `upperBound` (exclusive), forms partition strides for generated `WHERE` clause expressions that are used to split the `partitionColumn`.

public JDBCConnectorOptions withNumPartitions(Long numPartitions)
The number of partitions. This value, along with `lowerBound` (inclusive) and `upperBound` (exclusive), forms partition strides for generated `WHERE` clause expressions that are used to split the `partitionColumn`.

`numPartitions` - The number of partitions. This value, along with `lowerBound` (inclusive) and `upperBound` (exclusive), forms partition strides for generated `WHERE` clause expressions that are used to split the `partitionColumn`.
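The four partitioning options work together: the range from `lowerBound` (inclusive) to `upperBound` (exclusive) is divided into `numPartitions` strides, each read with its own `WHERE` expression on `partitionColumn`. The sketch below shows this in the spirit of the Spark SQL JDBC reader; it is a simplification (the real reader also handles nulls and boundary cases differently), not the actual Glue implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of turning lowerBound (inclusive), upperBound (exclusive) and
// numPartitions into per-partition WHERE expressions on partitionColumn,
// in the spirit of the Spark SQL JDBC reader. Simplified, not Glue's code.
class PartitionStrides {
    static List<String> predicates(String column, long lower, long upper, int numPartitions) {
        long stride = (upper - lower) / numPartitions; // size of each partition range
        List<String> out = new ArrayList<>();
        long start = lower;
        for (int i = 0; i < numPartitions; i++) {
            // Last partition absorbs any remainder so the full range is covered.
            long end = (i == numPartitions - 1) ? upper : start + stride;
            out.add(column + " >= " + start + " AND " + column + " < " + end);
            start = end;
        }
        return out;
    }

    public static void main(String[] args) {
        // 4 partitions over [0, 100): strides of 25 each.
        for (String p : predicates("id", 0L, 100L, 4)) {
            System.out.println(p);
        }
    }
}
```

Each generated expression is appended to a copy of the source query, so the partitions can be read in parallel without overlapping rows.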
public List<String> getJobBookmarkKeys()
The names of the job bookmark keys on which to sort.
public void setJobBookmarkKeys(Collection<String> jobBookmarkKeys)

The names of the job bookmark keys on which to sort.

`jobBookmarkKeys` - The names of the job bookmark keys on which to sort.

public JDBCConnectorOptions withJobBookmarkKeys(String... jobBookmarkKeys)
The names of the job bookmark keys on which to sort.

NOTE: This method appends the values to the existing list (if any). Use `setJobBookmarkKeys(java.util.Collection)` or `withJobBookmarkKeys(java.util.Collection)` if you want to override the existing values.

`jobBookmarkKeys` - The names of the job bookmark keys on which to sort.

public JDBCConnectorOptions withJobBookmarkKeys(Collection<String> jobBookmarkKeys)
The names of the job bookmark keys on which to sort.

`jobBookmarkKeys` - The names of the job bookmark keys on which to sort.

public void setJobBookmarkKeysSortOrder(String jobBookmarkKeysSortOrder)
Specifies an ascending or descending sort order.

`jobBookmarkKeysSortOrder` - Specifies an ascending or descending sort order.

public String getJobBookmarkKeysSortOrder()
Specifies an ascending or descending sort order.
public JDBCConnectorOptions withJobBookmarkKeysSortOrder(String jobBookmarkKeysSortOrder)
Specifies an ascending or descending sort order.

`jobBookmarkKeysSortOrder` - Specifies an ascending or descending sort order.

public Map<String,String> getDataTypeMapping()
Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. For example, the option `"dataTypeMapping":{"FLOAT":"STRING"}` maps data fields of JDBC type `FLOAT` into the Java `String` type by calling the `ResultSet.getString()` method of the driver, and uses it to build the Glue record. The `ResultSet` object is implemented by each driver, so the behavior is specific to the driver you use. Refer to the documentation for your JDBC driver to understand how the driver performs the conversions.

Returns: Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. For example, the option `"dataTypeMapping":{"FLOAT":"STRING"}` maps data fields of JDBC type `FLOAT` into the Java `String` type by calling the `ResultSet.getString()` method of the driver, and uses it to build the Glue record. The `ResultSet` object is implemented by each driver, so the behavior is specific to the driver you use. Refer to the documentation for your JDBC driver to understand how the driver performs the conversions.

public void setDataTypeMapping(Map<String,String> dataTypeMapping)
Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. For example, the option `"dataTypeMapping":{"FLOAT":"STRING"}` maps data fields of JDBC type `FLOAT` into the Java `String` type by calling the `ResultSet.getString()` method of the driver, and uses it to build the Glue record. The `ResultSet` object is implemented by each driver, so the behavior is specific to the driver you use. Refer to the documentation for your JDBC driver to understand how the driver performs the conversions.

`dataTypeMapping` - Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. For example, the option `"dataTypeMapping":{"FLOAT":"STRING"}` maps data fields of JDBC type `FLOAT` into the Java `String` type by calling the `ResultSet.getString()` method of the driver, and uses it to build the Glue record. The `ResultSet` object is implemented by each driver, so the behavior is specific to the driver you use. Refer to the documentation for your JDBC driver to understand how the driver performs the conversions.

public JDBCConnectorOptions withDataTypeMapping(Map<String,String> dataTypeMapping)
Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. For example, the option `"dataTypeMapping":{"FLOAT":"STRING"}` maps data fields of JDBC type `FLOAT` into the Java `String` type by calling the `ResultSet.getString()` method of the driver, and uses it to build the Glue record. The `ResultSet` object is implemented by each driver, so the behavior is specific to the driver you use. Refer to the documentation for your JDBC driver to understand how the driver performs the conversions.

`dataTypeMapping` - Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. For example, the option `"dataTypeMapping":{"FLOAT":"STRING"}` maps data fields of JDBC type `FLOAT` into the Java `String` type by calling the `ResultSet.getString()` method of the driver, and uses it to build the Glue record. The `ResultSet` object is implemented by each driver, so the behavior is specific to the driver you use. Refer to the documentation for your JDBC driver to understand how the driver performs the conversions.

public JDBCConnectorOptions addDataTypeMappingEntry(String key, String value)
public JDBCConnectorOptions clearDataTypeMappingEntries()
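The effect of a `dataTypeMapping` entry can be sketched as a type-name lookup: for each source column, the connector consults the mapping by JDBC type and falls back to its default conversion when no override exists. The code below is a conceptual illustration only (the real conversion happens inside the connector against a live `ResultSet`):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of what "dataTypeMapping":{"FLOAT":"STRING"} means: when a column
// reports JDBC type FLOAT, read it as a Java String (conceptually via
// ResultSet.getString()) instead of the default type. Illustrative only.
class DataTypeMappingSketch {
    static String targetType(Map<String, String> mapping, String jdbcType, String defaultType) {
        // Fall back to the connector's default conversion when no override exists.
        return mapping.getOrDefault(jdbcType, defaultType);
    }

    public static void main(String[] args) {
        Map<String, String> mapping = new HashMap<>();
        mapping.put("FLOAT", "STRING"); // the example mapping from the docs above
        System.out.println(targetType(mapping, "FLOAT", "DOUBLE"));  // overridden -> STRING
        System.out.println(targetType(mapping, "INTEGER", "INT"));   // no override -> INT
    }
}
```

Because the actual value conversion is delegated to the driver's `ResultSet` implementation, identical mappings can still behave differently across drivers, which is why the docs defer to your driver's documentation.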
public String toString()

Returns a string representation of this object.

Overrides: `toString` in class `Object`

See Also: `Object.toString()`
public JDBCConnectorOptions clone()
public void marshall(ProtocolMarshaller protocolMarshaller)

Marshalls this structured data using the given `ProtocolMarshaller`.

Specified by: `marshall` in interface `StructuredPojo`

`protocolMarshaller` - Implementation of `ProtocolMarshaller` used to marshall this object's data.