Amazon Glue connection properties
This topic includes information about properties for Amazon Glue connections.
Required connection properties
When you define a connection on the Amazon Glue console, you must provide values for the following properties:
- Connection name
Enter a unique name for your connection.
- Connection type
Choose JDBC or one of the specific connection types.
For details about the JDBC connection type, see Amazon Glue JDBC connection properties.
Choose Network to connect to a data source within an Amazon Virtual Private Cloud (Amazon VPC) environment.
Depending on the type that you choose, the Amazon Glue console displays other required fields. For example, if you choose Amazon RDS, you must then choose the database engine.
- Require SSL connection
When you select this option, Amazon Glue verifies that it connects to the data store over a trusted Secure Sockets Layer (SSL) connection.
For more information, including additional options that are available when you select this option, see Amazon Glue SSL connection properties.
- Select MSK cluster (Amazon Managed Streaming for Apache Kafka (MSK) only)
Specifies an MSK cluster from another Amazon account.
- Kafka bootstrap server URLs (Kafka only)
Specifies a comma-separated list of bootstrap server URLs. Include the port number. For example: b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094, b-2.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094, b-3.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094
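As a minimal sketch of preparing such a list programmatically, the helper below (a hypothetical name, not part of any Amazon Glue API) strips stray whitespace from a comma-separated bootstrap list and appends a default port to any entry that omits one; 9094 is assumed here only because it is the TLS port in the example above.

```python
def normalize_bootstrap_servers(raw: str, default_port: int = 9094) -> str:
    """Normalize a comma-separated list of Kafka bootstrap server URLs.

    Strips whitespace around entries and appends default_port to any
    entry missing a port, since host:port pairs are expected.
    """
    servers = []
    for entry in raw.split(","):
        entry = entry.strip()
        if not entry:
            continue  # skip empty fragments from trailing commas
        if ":" not in entry:
            entry = f"{entry}:{default_port}"
        servers.append(entry)
    return ",".join(servers)
```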
Amazon Glue JDBC connection properties
Amazon Glue can connect to the following data stores through a JDBC connection:
- Amazon Redshift
- Amazon Aurora
- Microsoft SQL Server
- MySQL
- Oracle
- PostgreSQL
- Snowflake, when using Amazon Glue crawlers
- Aurora (supported if the native JDBC driver is being used; not all driver features can be leveraged)
- Amazon RDS for MariaDB
Important
Currently, an ETL job can use JDBC connections within only one subnet. If you have multiple data stores in a job, they must be on the same subnet, or accessible from the subnet.
If you choose to bring in your own JDBC driver versions for Amazon Glue crawlers, your crawlers will consume resources in Amazon Glue jobs and Amazon S3 to ensure your provided drivers are run in your environment. The additional usage of resources will be reflected in your account. Additionally, providing your own JDBC driver does not mean that the crawler is able to leverage all of the driver’s features. Drivers are limited to the properties described in Defining connections in the Data Catalog.
The following are additional properties for the JDBC connection type.
- JDBC URL
Enter the URL for your JDBC data store. For most database engines, this field is in the following format. In this format, replace protocol, host, port, and db_name with your own information.

jdbc:protocol://host:port/db_name

Depending on the database engine, a different JDBC URL format might be required. This format can have slightly different use of the colon (:) and slash (/) or different keywords to specify databases.
For JDBC to connect to the data store, a db_name in the data store is required. The db_name is used to establish a network connection with the supplied username and password. When connected, Amazon Glue can access other databases in the data store to run a crawler or run an ETL job.
The following JDBC URL examples show the syntax for several database engines.
- To connect to an Amazon Redshift cluster data store with a dev database:
jdbc:redshift://xxx.us-east-1.redshift.amazonaws.com:8192/dev
- To connect to an Amazon RDS for MySQL data store with an employee database:
jdbc:mysql://xxx-cluster.cluster-xxx.us-east-1.rds.amazonaws.com:3306/employee
- To connect to an Amazon RDS for PostgreSQL data store with an employee database:
jdbc:postgresql://xxx-cluster.cluster-xxx.us-east-1.rds.amazonaws.com:5432/employee
- To connect to an Amazon RDS for Oracle data store with an employee service name:
jdbc:oracle:thin://@xxx-cluster.cluster-xxx.us-east-1.rds.amazonaws.com:1521/employee
The syntax for Amazon RDS for Oracle can follow the following patterns. In these patterns, replace host, port, service_name, and SID with your own information.
  - jdbc:oracle:thin://@host:port/service_name
  - jdbc:oracle:thin://@host:port:SID
- To connect to an Amazon RDS for Microsoft SQL Server data store with an employee database:
jdbc:sqlserver://xxx-cluster.cluster-xxx.us-east-1.rds.amazonaws.com:1433;databaseName=employee
The syntax for Amazon RDS for SQL Server can follow the following patterns. In these patterns, replace server_name, port, and db_name with your own information.
  - jdbc:sqlserver://server_name:port;database=db_name
  - jdbc:sqlserver://server_name:port;databaseName=db_name
- To connect to an Amazon Aurora PostgreSQL instance of the employee database, specify the endpoint for the database instance, the port, and the database name:
jdbc:postgresql://employee_instance_1.xxxxxxxxxxxx.us-east-2.rds.amazonaws.com:5432/employee
To connect to an Amazon RDS for MariaDB data store with an
employee
database, specify the endpoint for the database instance, the port, and the database name:jdbc:mysql://
xxx
-cluster.cluster-xxx
.aws-region
.rds.amazonaws.com:3306/employee -
- Warning
Snowflake JDBC connections are supported only by Amazon Glue crawlers. When using the Snowflake connector in Amazon Glue jobs, use the Snowflake connection type.
To connect to a Snowflake instance of the sample database, specify the endpoint for the Snowflake instance, the user, the database name, and the role name. You can optionally add the warehouse parameter.
jdbc:snowflake://account_name.snowflakecomputing.com/?user=user_name&db=sample&role=role_name&warehouse=warehouse_name
Important
For Snowflake connections over JDBC, the order of parameters in the URL is enforced and must be ordered as user, db, role_name, and warehouse.
- To connect to a Snowflake instance of the sample database with Amazon PrivateLink, specify the Snowflake JDBC URL as follows:
jdbc:snowflake://account_name.region.privatelink.snowflakecomputing.com/?user=user_name&db=sample&role=role_name&warehouse=warehouse_name
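The common URL shapes above can be sketched as small helper functions. These helper names are illustrative, not part of any Amazon Glue API; the SQL Server variant uses the semicolon-delimited databaseName form shown in the example.

```python
def build_jdbc_url(protocol: str, host: str, port: int, db_name: str) -> str:
    """Assemble a JDBC URL in the common jdbc:protocol://host:port/db_name
    form. Engine-specific variants (Oracle SID, SQL Server databaseName)
    need their own formats."""
    return f"jdbc:{protocol}://{host}:{port}/{db_name}"


def build_sqlserver_url(server_name: str, port: int, db_name: str) -> str:
    """SQL Server appends the database as a semicolon-delimited property."""
    return f"jdbc:sqlserver://{server_name}:{port};databaseName={db_name}"
```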
- Username
Note
We recommend that you use an Amazon secret to store connection credentials instead of supplying your user name and password directly. For more information, see Storing connection credentials in Amazon Secrets Manager.
Provide a user name that has permission to access the JDBC data store.
- Password
Enter the password for the user name that has access permission to the JDBC data store.
- Port
Enter the port used in the JDBC URL to connect to an Amazon RDS Oracle instance. This field is only shown when Require SSL connection is selected for an Amazon RDS Oracle instance.
- VPC
Choose the name of the virtual private cloud (VPC) that contains your data store. The Amazon Glue console lists all VPCs for the current Region.
Important
When working with a JDBC connection to a data store hosted outside of Amazon, such as Snowflake, your VPC should have a NAT gateway that splits traffic into public and private subnets. The public subnet is used for the connection to the external source, and the private subnet is used for processing by Amazon Glue. For information on configuring your Amazon VPC for external connections, read Connect to the internet or other networks using NAT devices and Setting up a VPC to connect to Amazon RDS data stores over JDBC for Amazon Glue.
- Subnet
Choose the subnet within the VPC that contains your data store. The Amazon Glue console lists all subnets for the data store in your VPC.
- Security groups
Choose the security groups that are associated with your data store. Amazon Glue requires one or more security groups with an inbound source rule that allows Amazon Glue to connect. The Amazon Glue console lists all security groups that are granted inbound access to your VPC. Amazon Glue associates these security groups with the elastic network interface that is attached to your VPC subnet.
- JDBC Driver Class name - optional
Provide the custom JDBC driver class name:
  - Postgres – org.postgresql.Driver
  - MySQL – com.mysql.jdbc.Driver, com.mysql.cj.jdbc.Driver
  - Redshift – com.amazon.redshift.jdbc.Driver, com.amazon.redshift.jdbc42.Driver
  - Oracle – oracle.jdbc.driver.OracleDriver
  - SQL Server – com.microsoft.sqlserver.jdbc.SQLServerDriver
- JDBC Driver S3 Path - optional
Provide the Amazon S3 location of the custom JDBC driver. This is an absolute path to a .jar file. If you want to provide your own JDBC drivers to connect to your data sources for your crawler-supported databases, you can specify values for the customJdbcDriverS3Path and customJdbcDriverClassName parameters. Using a JDBC driver supplied by a customer is limited to the Required connection properties.
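As a sketch of how these console fields map to an API call, the helper below (a hypothetical name) builds a ConnectionInput dict of the kind passed to the Glue CreateConnection API, assuming the connection property keys JDBC_DRIVER_JAR_URI and JDBC_DRIVER_CLASS_NAME correspond to the JDBC Driver S3 Path and JDBC Driver Class name fields. In practice the result would be passed as boto3.client("glue").create_connection(ConnectionInput=...); only the dict is built here.

```python
def jdbc_connection_input(name, url, user, password,
                          driver_jar_s3=None, driver_class=None):
    """Build a ConnectionInput-style dict for a JDBC connection.

    JDBC_DRIVER_JAR_URI / JDBC_DRIVER_CLASS_NAME are assumed property
    keys for the optional custom-driver fields described above.
    """
    props = {
        "JDBC_CONNECTION_URL": url,
        "USERNAME": user,       # prefer a Secrets Manager secret in practice
        "PASSWORD": password,
    }
    if driver_jar_s3:
        props["JDBC_DRIVER_JAR_URI"] = driver_jar_s3   # absolute path to a .jar
    if driver_class:
        props["JDBC_DRIVER_CLASS_NAME"] = driver_class
    return {
        "Name": name,
        "ConnectionType": "JDBC",
        "ConnectionProperties": props,
    }
```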
Amazon Glue MongoDB and MongoDB Atlas connection properties
The following are additional properties for the MongoDB or MongoDB Atlas connection type.
- MongoDB URL
Enter the URL for your MongoDB or MongoDB Atlas data store:
For MongoDB: mongodb://host:port/database. The host can be a hostname, IP address, or UNIX domain socket. If the connection string doesn't specify a port, it uses the default MongoDB port, 27017.
For MongoDB Atlas: mongodb+srv://server.example.com/database. The host must be a hostname that corresponds to a DNS SRV record. The SRV format does not require a port and uses the default MongoDB port, 27017.
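The two URL shapes can be sketched with a small helper (an illustrative function, not part of Amazon Glue): plain mongodb:// URLs default to port 27017 when none is given, while mongodb+srv:// Atlas URLs take no port at all.

```python
def mongodb_url(host, database, port=None, atlas=False):
    """Build a MongoDB or MongoDB Atlas connection URL.

    Atlas (mongodb+srv://) URLs omit the port; plain mongodb:// URLs
    fall back to MongoDB's default port 27017 when none is supplied.
    """
    if atlas:
        return f"mongodb+srv://{host}/{database}"
    return f"mongodb://{host}:{port or 27017}/{database}"
```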
- Username
Note
We recommend that you use an Amazon secret to store connection credentials instead of supplying your user name and password directly. For more information, see Storing connection credentials in Amazon Secrets Manager.
Provide a user name that has permission to access the MongoDB or MongoDB Atlas data store.
- Password
Enter the password for the user name that has access permission to the MongoDB or MongoDB Atlas data store.
- VPC
Choose the name of the virtual private cloud (VPC) that contains your data store. The Amazon Glue console lists all VPCs for the current Region.
- Subnet
Choose the subnet within the VPC that contains your data store. The Amazon Glue console lists all subnets for the data store in your VPC.
- Security groups
Choose the security groups that are associated with your data store. Amazon Glue requires one or more security groups with an inbound source rule that allows Amazon Glue to connect. The Amazon Glue console lists all security groups that are granted inbound access to your VPC. Amazon Glue associates these security groups with the elastic network interface that is attached to your VPC subnet.
Snowflake connection
The following properties are used to set up a Snowflake connection used in Amazon Glue ETL jobs. When crawling Snowflake, use a JDBC connection.
- Snowflake URL
The URL of your Snowflake endpoint. For more information about Snowflake endpoint URLs, see Connecting to Your Accounts in the Snowflake documentation.
- Amazon Secret
The secret name of a secret in Amazon Secrets Manager. Amazon Glue will connect to Snowflake using the sfUser and sfPassword keys of your secret.
- Snowflake role (optional)
A Snowflake security role that Amazon Glue will use when connecting.
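A minimal sketch of the secret format Amazon Glue expects: the secret's JSON payload must carry sfUser and sfPassword keys. In practice the secret string would come from boto3.client("secretsmanager").get_secret_value(SecretId=...)["SecretString"]; the hypothetical helper below only parses it.

```python
import json


def snowflake_credentials(secret_string: str) -> tuple:
    """Extract the sfUser and sfPassword keys that Amazon Glue reads
    from the Secrets Manager secret backing a Snowflake connection."""
    secret = json.loads(secret_string)
    return secret["sfUser"], secret["sfPassword"]
```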
Use the following properties when configuring a connection to a Snowflake endpoint hosted in Amazon VPC using Amazon PrivateLink.
- VPC
Choose the name of the virtual private cloud (VPC) that contains your data store. The Amazon Glue console lists all VPCs for the current Region.
- Subnet
Choose the subnet within the VPC that contains your data store. The Amazon Glue console lists all subnets for the data store in your VPC.
- Security groups
Choose the security groups that are associated with your data store. Amazon Glue requires one or more security groups with an inbound source rule that allows Amazon Glue to connect. The Amazon Glue console lists all security groups that are granted inbound access to your VPC. Amazon Glue associates these security groups with the elastic network interface that is attached to your VPC subnet.
Amazon Glue SSL connection properties
The following are details about the Require SSL connection property.
If you do not require SSL connection, Amazon Glue ignores failures when it uses SSL to encrypt a connection to the data store. See the documentation for your data store for configuration instructions. When you select this option, the job run, crawler, or ETL statements in a development endpoint fail when Amazon Glue cannot connect.
Note
Snowflake supports an SSL connection by default, so this property is not applicable for Snowflake.
This option is validated on the Amazon Glue client side. For JDBC connections, Amazon Glue only connects over SSL with certificate and host name validation. SSL connection support is available for:
- Oracle Database
- Microsoft SQL Server
- PostgreSQL
- Amazon Redshift
- MySQL (Amazon RDS instances only)
- Amazon Aurora MySQL (Amazon RDS instances only)
- Amazon Aurora PostgreSQL (Amazon RDS instances only)
- Kafka, which includes Amazon Managed Streaming for Apache Kafka
- MongoDB
Note
To enable an Amazon RDS Oracle data store to use Require SSL connection, you must create and attach an option group to the Oracle instance.
1. Sign in to the Amazon Web Services Management Console and open the Amazon RDS console at https://console.amazonaws.cn/rds/.
2. Add an option group to the Amazon RDS Oracle instance. For more information about how to add an option group on the Amazon RDS console, see Creating an Option Group.
3. Add an option to the option group for SSL. The port you specify for SSL is later used when you create an Amazon Glue JDBC connection URL for the Amazon RDS Oracle instance. For more information about how to add an option on the Amazon RDS console, see Adding an Option to an Option Group in the Amazon RDS User Guide. For more information about the Oracle SSL option, see Oracle SSL in the Amazon RDS User Guide.
4. On the Amazon Glue console, create a connection to the Amazon RDS Oracle instance. In the connection definition, select Require SSL connection. When requested, enter the port that you used in the Amazon RDS Oracle SSL option.
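Step 3 can also be done through the RDS API. As a sketch under stated assumptions, the helper below builds the kwargs for boto3.client("rds").modify_option_group to add the SSL option; 2484 is used as an illustrative default port (Oracle's conventional TCPS port), and the option group is assumed to already exist and be attached to the instance. Only the parameter dict is built here, no API call is made.

```python
def ssl_option_kwargs(option_group_name: str, ssl_port: int = 2484):
    """Build kwargs for an RDS modify_option_group call that adds the
    Oracle SSL option on the given port (illustrative default 2484).
    Usage sketch: boto3.client("rds").modify_option_group(**kwargs)."""
    return {
        "OptionGroupName": option_group_name,
        "OptionsToInclude": [{"OptionName": "SSL", "Port": ssl_port}],
        "ApplyImmediately": True,
    }
```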
The following additional optional properties are available when Require SSL connection is selected for a connection:
- Custom JDBC certificate in S3
If you have a certificate that you are currently using for SSL communication with your on-premises or cloud databases, you can use that certificate for SSL connections to Amazon Glue data sources or targets. Enter an Amazon Simple Storage Service (Amazon S3) location that contains a custom root certificate. Amazon Glue uses this certificate to establish an SSL connection to the database. Amazon Glue handles only X.509 certificates. The certificate must be DER-encoded and supplied in Base64-encoded PEM format.
If this field is left blank, the default certificate is used.
- Custom JDBC certificate string
Enter certificate information specific to your JDBC database. This string is used for domain matching or distinguished name (DN) matching. For Oracle Database, this string maps to the SSL_SERVER_CERT_DN parameter in the security section of the tnsnames.ora file. For Microsoft SQL Server, this string is used as hostNameInCertificate.
The following is an example for the Oracle Database SSL_SERVER_CERT_DN parameter.
cn=sales,cn=OracleContext,dc=us,dc=example,dc=com
- Kafka private CA certificate location
If you have a certificate that you are currently using for SSL communication with your Kafka data store, you can use that certificate with your Amazon Glue connection. This option is required for Kafka data stores, and optional for Amazon Managed Streaming for Apache Kafka data stores. Enter an Amazon Simple Storage Service (Amazon S3) location that contains a custom root certificate. Amazon Glue uses this certificate to establish an SSL connection to the Kafka data store. Amazon Glue handles only X.509 certificates. The certificate must be DER-encoded and supplied in Base64-encoded PEM format.
- Skip certificate validation
Select the Skip certificate validation check box to skip validation of the custom certificate by Amazon Glue. If you choose to validate, Amazon Glue validates the signature algorithm and subject public key algorithm for the certificate. If the certificate fails validation, any ETL job or crawler that uses the connection fails.
The only permitted signature algorithms are SHA256withRSA, SHA384withRSA, or SHA512withRSA. For the subject public key algorithm, the key length must be at least 2048.
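The two validation rules above can be expressed as a simple check. This is an illustrative sketch, not Amazon Glue's actual validation code; in practice the algorithm name and key size would be read from the parsed X.509 certificate (for example, with the cryptography package).

```python
# The only signature algorithms Amazon Glue permits, per the rule above.
ALLOWED_SIG_ALGS = {"SHA256withRSA", "SHA384withRSA", "SHA512withRSA"}
MIN_KEY_BITS = 2048  # minimum subject public key length


def certificate_passes_validation(signature_algorithm: str, key_bits: int) -> bool:
    """Return True when a certificate satisfies both stated rules:
    an allowed RSA signature algorithm and a key of at least 2048 bits."""
    return signature_algorithm in ALLOWED_SIG_ALGS and key_bits >= MIN_KEY_BITS
```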
- Kafka client keystore location
The Amazon S3 location of the client keystore file for Kafka client side authentication. Path must be in the form s3://bucket/prefix/filename.jks. It must end with the file name and .jks extension.
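The required path shape can be checked up front with a small validator (a hypothetical helper, not part of Amazon Glue), accepting only paths of the form s3://bucket/prefix/filename.jks.

```python
import re

# s3:// scheme, a bucket name, then at least one key segment ending in .jks
_KEYSTORE_RE = re.compile(r"^s3://[^/]+/.+\.jks$")


def is_valid_keystore_path(path: str) -> bool:
    """Validate that a Kafka client keystore location follows the
    s3://bucket/prefix/filename.jks form required above."""
    return bool(_KEYSTORE_RE.match(path))
```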
- Kafka client keystore password (optional)
The password to access the provided keystore.
- Kafka client key password (optional)
A keystore can contain multiple keys; this is the password for accessing the client key to be used with the Kafka server-side key.
Apache Kafka connection properties for client authentication
Amazon Glue supports the Simple Authentication and Security Layer (SASL) framework for authentication when you create an Apache Kafka connection. The SASL framework supports various mechanisms of authentication, and Amazon Glue offers both the SCRAM protocol (user name and password) and GSSAPI (Kerberos protocol).
Use Amazon Glue Studio to configure one of the following client authentication methods. For more information, see Creating connections for connectors in the Amazon Glue Studio user guide.
- None - No authentication. This is useful if creating a connection for testing purposes.
- SASL/SCRAM-SHA-512 - Choosing this authentication method will allow you to specify authentication credentials. There are two options available:
  - Use Amazon Secrets Manager (recommended) - If you select this option, you can store your user name and password in Amazon Secrets Manager and let Amazon Glue access them when needed. Specify the secret that stores the SSL or SASL authentication credentials. For more information, see Storing connection credentials in Amazon Secrets Manager.
  - Provide a user name and password directly.
- SASL/GSSAPI (Kerberos) - If you select this option, you can select the location of the keytab file and krb5.conf file, and enter the Kerberos principal name and Kerberos service name. The keytab file and krb5.conf file must be in an Amazon S3 location. Because MSK does not yet support SASL/GSSAPI, this option is only available for customer-managed Apache Kafka clusters. For more information, see MIT Kerberos Documentation: Keytab.
- SSL Client Authentication - If you select this option, you can select the location of the Kafka client keystore by browsing Amazon S3. Optionally, you can enter the Kafka client keystore password and Kafka client key password.