Developing custom connectors
You can write the code that reads data from or writes data to your data store and formats the data for use with Amazon Glue Studio jobs. You can create connectors for Spark, Athena, and JDBC data stores. Sample code posted on GitHub provides an overview of the basic interfaces you need to implement.
You will need a local development environment for creating your connector code. You can use any IDE or even just a command line editor to write your connector. Examples of development environments include:
- A local Scala environment with a local Amazon Glue ETL Maven library, as described in Developing Locally with Scala in the Amazon Glue Developer Guide.
- IntelliJ IDE, which you can download from https://www.jetbrains.com/idea/.
Topics
- Developing Spark connectors
- Developing Athena connectors
- Developing JDBC connectors
- Examples of using custom connectors with Amazon Glue Studio
- Developing Amazon Glue connectors for Amazon Web Services Marketplace
Developing Spark connectors
You can create a Spark connector with Spark DataSource API V2 (Spark 2.4) to read data.
To create a custom Spark connector
Follow the steps in the Amazon Glue GitHub sample library for developing Spark connectors, which is located at aws-glue-samples/tree/master/GlueCustomConnectors/development/Spark/README.md.
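For orientation, the following is a minimal sketch of the read path that the Spark 2.4 DataSource API V2 expects, which is roughly the shape of connector the GitHub sample builds out. All class names here are hypothetical, and a real connector would read from your data store instead of generating rows in memory.

```scala
import java.util

import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.sources.v2.reader.{DataSourceReader, InputPartition, InputPartitionReader}
import org.apache.spark.sql.sources.v2.{DataSourceOptions, DataSourceV2, ReadSupport}
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

// Hypothetical read-only source built on the Spark 2.4 DataSource API V2.
class SimpleCustomSource extends DataSourceV2 with ReadSupport {
  override def createReader(options: DataSourceOptions): DataSourceReader = new SimpleReader
}

class SimpleReader extends DataSourceReader {
  // Schema exposed to Spark; a real connector would derive it from the data
  // store or from the options passed by Amazon Glue Studio.
  override def readSchema(): StructType =
    StructType(Seq(StructField("id", IntegerType)))

  // Each InputPartition becomes one parallel read task.
  override def planInputPartitions(): util.List[InputPartition[InternalRow]] =
    util.Arrays.asList[InputPartition[InternalRow]](
      new SimplePartition(0, 5), new SimplePartition(5, 10))
}

class SimplePartition(start: Int, end: Int) extends InputPartition[InternalRow] {
  override def createPartitionReader(): InputPartitionReader[InternalRow] =
    new SimplePartitionReader(start, end)
}

class SimplePartitionReader(start: Int, end: Int) extends InputPartitionReader[InternalRow] {
  private var current = start - 1

  override def next(): Boolean = { current += 1; current < end }
  override def get(): InternalRow = InternalRow(current)   // one row per value in [start, end)
  override def close(): Unit = ()
}
```

After you package a class like this into a JAR, the fully qualified name of the DataSourceV2 implementation is the class name you provide when you register the connector in Amazon Glue Studio.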
Developing Athena connectors
You can create an Athena connector to be used by Amazon Glue and Amazon Glue Studio to query a custom data source.
To create a custom Athena connector
Follow the steps in the Amazon Glue GitHub sample library for developing Athena connectors, which is located at aws-glue-samples/tree/master/GlueCustomConnectors/development/Athena.
Developing JDBC connectors
You can create a connector that uses JDBC to access your data stores.
To create a custom JDBC connector
- Install the Amazon Glue Spark runtime libraries in your local development environment. Refer to the instructions in the Amazon Glue GitHub sample library at aws-glue-samples/tree/master/GlueCustomConnectors/development/GlueSparkRuntime/README.md.
- Implement the JDBC driver that is responsible for retrieving the data from the data source. Refer to the Java Documentation for Java SE 8. Create an entry point within your code that Amazon Glue Studio uses to locate your connector. The Class name field should be the full path of your JDBC driver. A skeleton driver class is sketched after this procedure.
- Use the GlueContext API to read data with the connector. Users can add more input options in the Amazon Glue Studio console to configure the connection to the data source, if necessary. For a code example that shows how to read from and write to a JDBC database with a custom JDBC connector, see Custom and Amazon Web Services Marketplace connectionType values. A minimal read example also follows this procedure.
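To make the driver step concrete, here is a skeleton of a JDBC driver entry point written against the standard java.sql.Driver interface from Java SE 8. The class name, URL prefix, and connection logic are placeholders.

```scala
import java.sql.{Connection, Driver, DriverPropertyInfo, SQLException, SQLFeatureNotSupportedException}
import java.util.Properties
import java.util.logging.Logger

// Hypothetical driver class. Its fully qualified name (for example,
// com.example.jdbc.MyDriver) is the value you enter in the Class name field
// when you create the connector in Amazon Glue Studio.
class MyDriver extends Driver {

  // The JDBC URL prefix this driver claims; other URLs are left to other drivers.
  override def acceptsURL(url: String): Boolean =
    url != null && url.startsWith("jdbc:mydatastore:")

  // Entry point used to open connections. Per the JDBC contract, return null
  // for URLs this driver does not accept.
  override def connect(url: String, info: Properties): Connection = {
    if (!acceptsURL(url)) return null
    // Store-specific connection logic goes here.
    throw new SQLException("Connection logic not implemented in this sketch")
  }

  override def getPropertyInfo(url: String, info: Properties): Array[DriverPropertyInfo] =
    Array.empty

  override def getMajorVersion: Int = 1
  override def getMinorVersion: Int = 0
  override def jdbcCompliant(): Boolean = false
  override def getParentLogger: Logger =
    throw new SQLFeatureNotSupportedException("java.util.logging is not used")
}
```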
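And for the GlueContext step, the following sketch shows how a job script might read through a registered custom JDBC connector. The connection name, driver class, URL, secret, and query values are placeholders; the option keys your connector actually honors are listed in Custom and Amazon Web Services Marketplace connectionType values.

```scala
import com.amazonaws.services.glue.GlueContext
import com.amazonaws.services.glue.util.JsonOptions
import org.apache.spark.SparkContext

object GlueApp {
  def main(sysArgs: Array[String]): Unit = {
    val glueContext = new GlueContext(SparkContext.getOrCreate())

    // Placeholder connection options for a hypothetical custom JDBC connector.
    val connectionOptions = JsonOptions(Map(
      "connectionName" -> "my-custom-jdbc-connection",   // connection created in Amazon Glue Studio
      "className"      -> "com.example.jdbc.MyDriver",   // full path of the JDBC driver class
      "url"            -> "jdbc:mydatastore://host:3306/sampledb",
      "secretId"       -> "my-jdbc-credentials",          // Secrets Manager secret with the credentials
      "query"          -> "SELECT id, name FROM employees"
    ))

    // Read through the custom connector into a DynamicFrame.
    val dynamicFrame = glueContext
      .getSource("custom.jdbc", connectionOptions)
      .getDynamicFrame()

    dynamicFrame.printSchema()
  }
}
```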
Examples of using custom connectors with Amazon Glue Studio
You can refer to the following blogs for examples of using custom connectors:
- Developing, testing, and deploying custom connectors for your data stores with Amazon Glue
- Apache Hudi: Writing to Apache Hudi tables using Amazon Glue Custom Connector
- Google BigQuery: Migrating data from Google BigQuery to Amazon S3 using Amazon Glue custom connectors
- Snowflake (JDBC): Performing data transformations using Snowflake and Amazon Glue
- SingleStore: Building fast ETL using SingleStore and Amazon Glue
- Salesforce: Ingest Salesforce data into Amazon S3 using the CData JDBC custom connector with Amazon Glue
- MongoDB: Building Amazon Glue Spark ETL jobs using Amazon DocumentDB (with MongoDB compatibility) and MongoDB
- Amazon Relational Database Service (Amazon RDS): Building Amazon Glue Spark ETL jobs by bringing your own JDBC drivers for Amazon RDS
- MySQL (JDBC): https://github.com/aws-samples/aws-glue-samples/blob/master/GlueCustomConnectors/development/Spark/SparkConnectorMySQL.scala
Developing Amazon Glue connectors for Amazon Web Services Marketplace
As an Amazon partner, you can create custom connectors and upload them to Amazon Web Services Marketplace to sell to Amazon Glue customers.
The process for developing the connector code is the same as for custom connectors, but the process of uploading and verifying the connector code is more detailed. Refer to the instructions in Creating Connectors for Amazon Web Services Marketplace on the GitHub website.