Getting started with Aurora zero-ETL integrations
Before you create a zero-ETL integration, configure your Aurora DB cluster and your data warehouse with the required parameters and permissions. During setup, you'll complete the following steps:
- Step 1: Create a custom DB cluster parameter group
- Step 2: Select or create a source DB cluster
- Step 3a: Create a target data warehouse, or Step 3b: Create an Amazon Glue catalog for Amazon SageMaker zero-ETL integration
After you complete these tasks, continue to Creating Aurora zero-ETL integrations with Amazon Redshift or Creating Aurora zero-ETL integrations with an Amazon SageMaker lakehouse.
You can use the Amazon SDKs to automate the setup process for you. For more information, see Set up an integration using the Amazon SDKs.
Tip
You can have RDS complete these setup steps for you while you're creating the integration, rather than performing them manually. To immediately start creating an integration, see Creating Aurora zero-ETL integrations with Amazon Redshift.
For Step 3, you can choose to create either a target data warehouse (Step 3a) or a target lakehouse (Step 3b) depending on your needs:
- Choose a data warehouse if you need traditional data warehousing capabilities with SQL-based analytics.
- Choose an Amazon SageMaker lakehouse if you need machine learning capabilities and want to use lakehouse features for data science and ML workflows.
Step 1: Create a custom DB cluster parameter group
Aurora zero-ETL integrations require specific values for the DB cluster parameters that control replication. Specifically, Aurora MySQL requires enhanced binlog (aurora_enhanced_binlog), and Aurora PostgreSQL requires enhanced logical replication (aurora.enhanced_logical_replication).
To configure binary logging or logical replication, you must first create a custom DB cluster parameter group, and then associate it with the source DB cluster.
Aurora MySQL (aurora-mysql8.0 family):
- aurora_enhanced_binlog=1
- binlog_backup=0
- binlog_format=ROW
- binlog_replication_globaldb=0
- binlog_row_image=full
- binlog_row_metadata=full
In addition, make sure that the binlog_transaction_compression parameter is not set to ON, and that the binlog_row_value_options parameter is not set to PARTIAL_JSON.
For more information about Aurora MySQL enhanced binlog, see Setting up enhanced binlog for Aurora MySQL.
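If you prefer to create the parameter group programmatically, the following Boto3 sketch creates a custom aurora-mysql8.0 DB cluster parameter group and applies the values listed above. The group name is illustrative, and the snippet assumes your credentials and Region are already configured.
import boto3

rds = boto3.client("rds")

# Create a custom DB cluster parameter group for the aurora-mysql8.0 family.
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="my-mysql-zero-etl-params",  # illustrative name
    DBParameterGroupFamily="aurora-mysql8.0",
    Description="Parameter group for Aurora MySQL zero-ETL integrations",
)

# Apply the parameter values required for zero-ETL integrations.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="my-mysql-zero-etl-params",
    Parameters=[
        {"ParameterName": name, "ParameterValue": value, "ApplyMethod": "pending-reboot"}
        for name, value in [
            ("aurora_enhanced_binlog", "1"),
            ("binlog_backup", "0"),
            ("binlog_format", "ROW"),
            ("binlog_replication_globaldb", "0"),
            ("binlog_row_image", "full"),
            ("binlog_row_metadata", "full"),
        ]
    ],
)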
Aurora PostgreSQL (aurora-postgresql16 family):
- rds.logical_replication=1
- aurora.enhanced_logical_replication=1
- aurora.logical_replication_backup=0
- aurora.logical_replication_globaldb=0
When enhanced logical replication (aurora.enhanced_logical_replication) is enabled, Aurora always writes all column values to the write-ahead log (WAL), even if REPLICA IDENTITY FULL isn't enabled. This might increase the IOPS for your source DB cluster.
Important
If you enable or disable the aurora.enhanced_logical_replication DB cluster parameter, the primary DB instance invalidates all logical replication slots. This stops replication from the source to the target, and you must recreate the replication slots on the primary DB instance. To prevent interruptions, keep the parameter state consistent during replication.
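For an Aurora PostgreSQL source, a similar Boto3 sketch creates a custom aurora-postgresql16 parameter group and applies the values listed above. The group name is again illustrative.
import boto3

rds = boto3.client("rds")

# Create a custom DB cluster parameter group for the aurora-postgresql16 family.
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="my-postgres-zero-etl-params",  # illustrative name
    DBParameterGroupFamily="aurora-postgresql16",
    Description="Parameter group for Aurora PostgreSQL zero-ETL integrations",
)

# Apply the logical replication parameters required for zero-ETL integrations.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="my-postgres-zero-etl-params",
    Parameters=[
        {"ParameterName": "rds.logical_replication", "ParameterValue": "1", "ApplyMethod": "pending-reboot"},
        {"ParameterName": "aurora.enhanced_logical_replication", "ParameterValue": "1", "ApplyMethod": "pending-reboot"},
        {"ParameterName": "aurora.logical_replication_backup", "ParameterValue": "0", "ApplyMethod": "pending-reboot"},
        {"ParameterName": "aurora.logical_replication_globaldb", "ParameterValue": "0", "ApplyMethod": "pending-reboot"},
    ],
)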
Step 2: Select or create a source DB cluster
After you create a custom DB cluster parameter group, choose or create an Aurora DB cluster. This cluster will be the source of data replication to the target data warehouse. You can specify a DB cluster that uses provisioned DB instances or Aurora Serverless v2 DB instances as the source. For instructions to create a DB cluster, see Creating an Amazon Aurora DB cluster or Creating a DB cluster that uses Aurora Serverless v2.
The database must be running a supported DB engine version. For a list of supported versions, see Supported Regions and Aurora DB engines for zero-ETL integrations.
When you create the database, under Additional configuration, change the default DB cluster parameter group to the custom parameter group that you created in the previous step.
Note
If you associate the parameter group with the DB cluster after the cluster is already created, you must reboot the primary DB instance in the cluster to apply the changes before you can create a zero-ETL integration. For instructions, see Rebooting an Amazon Aurora DB cluster or Amazon Aurora DB instance.
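As an illustration of that note, the following Boto3 sketch associates a custom parameter group with an existing cluster and then reboots the writer instance so the static parameters take effect. The cluster and parameter group names are illustrative.
import boto3

rds = boto3.client("rds")

# Associate the custom parameter group with an existing Aurora DB cluster.
rds.modify_db_cluster(
    DBClusterIdentifier="my-source-cluster",                    # illustrative identifier
    DBClusterParameterGroupName="my-mysql-zero-etl-params",
    ApplyImmediately=True,
)

# Reboot the primary (writer) DB instance to apply the static parameters.
members = rds.describe_db_clusters(
    DBClusterIdentifier="my-source-cluster"
)["DBClusters"][0]["DBClusterMembers"]
writer = next(m for m in members if m["IsClusterWriter"])
rds.reboot_db_instance(DBInstanceIdentifier=writer["DBInstanceIdentifier"])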
Step 3a: Create a target data warehouse
After you create your source DB cluster, you must create and configure a target data warehouse. The data warehouse must meet the following requirements:
- It must use an RA3 node type with at least two nodes, or Redshift Serverless.
- It must be encrypted (if it's a provisioned cluster). For more information, see Amazon Redshift database encryption.
For instructions to create a data warehouse, see Creating a cluster for provisioned clusters, or Creating a workgroup with a namespace for Redshift Serverless.
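As one programmatic option, the following Boto3 sketch creates a Redshift Serverless namespace and workgroup to use as the target, with case sensitivity already enabled at creation time (the next section describes enabling it on an existing warehouse). The namespace and workgroup names are illustrative.
import boto3

redshift_serverless = boto3.client("redshift-serverless")

# Create a Redshift Serverless namespace and workgroup as the target data
# warehouse. Redshift Serverless is encrypted by default.
redshift_serverless.create_namespace(namespaceName="zero-etl-namespace")   # illustrative
redshift_serverless.create_workgroup(
    workgroupName="zero-etl-workgroup",                                    # illustrative
    namespaceName="zero-etl-namespace",
    configParameters=[
        # Case sensitivity must be enabled for zero-ETL integrations.
        {"parameterKey": "enable_case_sensitive_identifier", "parameterValue": "true"},
    ],
)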
Enable case sensitivity on the data warehouse
For the integration to be successful, the case sensitivity parameter (enable_case_sensitive_identifier) must be enabled for the data warehouse. By default, case sensitivity is disabled on all provisioned clusters and Redshift Serverless workgroups.
To enable case sensitivity, perform the following steps depending on your data warehouse type:
- Provisioned cluster – To enable case sensitivity on a provisioned cluster, create a custom parameter group with the enable_case_sensitive_identifier parameter enabled. Then, associate the parameter group with the cluster. For instructions, see Managing parameter groups using the console or Configuring parameter values using the Amazon CLI.
Note
Remember to reboot the cluster after you associate the custom parameter group with it.
- Serverless workgroup – To enable case sensitivity on a Redshift Serverless workgroup, you must use the Amazon CLI. The Amazon Redshift console doesn't currently support modifying Redshift Serverless parameter values. Send the following update-workgroup request:
aws redshift-serverless update-workgroup \
    --workgroup-name target-workgroup \
    --config-parameters parameterKey=enable_case_sensitive_identifier,parameterValue=true
You don't need to reboot a workgroup after you modify its parameter values.
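If you prefer the SDK over the console and CLI, the following Boto3 sketch shows both paths. The parameter group, cluster, and workgroup names are illustrative.
import boto3

# Provisioned cluster: create a custom parameter group with case sensitivity
# enabled, associate it with the cluster, and reboot the cluster.
redshift = boto3.client("redshift")
redshift.create_cluster_parameter_group(
    ParameterGroupName="zero-etl-params",        # illustrative name
    ParameterGroupFamily="redshift-1.0",
    Description="Enables case sensitivity for zero-ETL integrations",
)
redshift.modify_cluster_parameter_group(
    ParameterGroupName="zero-etl-params",
    Parameters=[{"ParameterName": "enable_case_sensitive_identifier", "ParameterValue": "true"}],
)
redshift.modify_cluster(ClusterIdentifier="target-cluster", ClusterParameterGroupName="zero-etl-params")
redshift.reboot_cluster(ClusterIdentifier="target-cluster")

# Redshift Serverless workgroup: update the workgroup parameter directly
# (no reboot needed).
boto3.client("redshift-serverless").update_workgroup(
    workgroupName="target-workgroup",
    configParameters=[{"parameterKey": "enable_case_sensitive_identifier", "parameterValue": "true"}],
)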
Configure authorization for the data warehouse
After you create a data warehouse, you must configure the source Aurora DB cluster as an authorized integration source. For instructions, see Configure authorization for your Amazon Redshift data warehouse.
Set up an integration using the Amazon SDKs
Rather than setting up each resource manually, you can run the following Python script to automatically set up the required resources for you. The code example uses the Amazon SDK for Python (Boto3).
To install the required dependency, run the following command:
pip install boto3
The script also uses the time module, which is part of the Python standard library and doesn't need to be installed separately.
Within the script, optionally modify the names of the source, target, and parameter groups. The final function creates an integration named my-integration after the resources are set up.
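A minimal sketch of such a script follows, assuming an Aurora MySQL source and a Redshift Serverless target. The resource names, instance class, engine version, and credentials are illustrative placeholders, and you still need to configure authorization for the data warehouse (described earlier) before the integration can be created.
import time
import boto3

rds = boto3.client("rds")
redshift_serverless = boto3.client("redshift-serverless")

def create_source_cluster():
    """Create the custom parameter group and the source Aurora MySQL DB cluster."""
    rds.create_db_cluster_parameter_group(
        DBClusterParameterGroupName="my-db-cluster-parameter-group",
        DBParameterGroupFamily="aurora-mysql8.0",
        Description="Parameter group for zero-ETL integrations",
    )
    rds.modify_db_cluster_parameter_group(
        DBClusterParameterGroupName="my-db-cluster-parameter-group",
        Parameters=[
            {"ParameterName": n, "ParameterValue": v, "ApplyMethod": "pending-reboot"}
            for n, v in [
                ("aurora_enhanced_binlog", "1"), ("binlog_backup", "0"),
                ("binlog_format", "ROW"), ("binlog_replication_globaldb", "0"),
                ("binlog_row_image", "full"), ("binlog_row_metadata", "full"),
            ]
        ],
    )
    cluster = rds.create_db_cluster(
        DBClusterIdentifier="my-source-cluster",
        DBClusterParameterGroupName="my-db-cluster-parameter-group",
        Engine="aurora-mysql",
        EngineVersion="8.0.mysql_aurora.3.05.2",       # use a supported version
        MasterUsername="admin",
        MasterUserPassword="replace-with-a-secure-password",
    )["DBCluster"]
    rds.create_db_instance(
        DBInstanceIdentifier="my-source-instance",
        DBClusterIdentifier="my-source-cluster",
        DBInstanceClass="db.r6g.large",
        Engine="aurora-mysql",
    )
    return cluster["DBClusterArn"]

def create_target_warehouse():
    """Create a Redshift Serverless namespace and workgroup with case sensitivity enabled."""
    namespace = redshift_serverless.create_namespace(namespaceName="my-namespace")["namespace"]
    redshift_serverless.create_workgroup(
        workgroupName="my-workgroup",
        namespaceName="my-namespace",
        configParameters=[
            {"parameterKey": "enable_case_sensitive_identifier", "parameterValue": "true"},
        ],
    )
    return namespace["namespaceArn"]

def wait_for_cluster_available(cluster_id):
    """Poll until the source DB cluster is available."""
    while True:
        status = rds.describe_db_clusters(DBClusterIdentifier=cluster_id)["DBClusters"][0]["Status"]
        if status == "available":
            return
        time.sleep(30)

def create_integration(source_arn, target_arn):
    """Create the zero-ETL integration once the source and target exist."""
    rds.create_integration(
        SourceArn=source_arn,
        TargetArn=target_arn,
        IntegrationName="my-integration",
    )

def main():
    source_arn = create_source_cluster()
    target_arn = create_target_warehouse()
    wait_for_cluster_available("my-source-cluster")
    create_integration(source_arn, target_arn)

if __name__ == "__main__":
    main()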
Step 3b: Create an Amazon Glue catalog for Amazon SageMaker zero-ETL integration
When creating a zero-ETL integration with an Amazon SageMaker lakehouse, you must create an Amazon Glue managed catalog in Amazon Lake Formation. The target catalog must be an Amazon Redshift managed catalog. To create an Amazon Redshift managed catalog, first create the AWSServiceRoleForRedshift service-linked role. In the Lake Formation console, add AWSServiceRoleForRedshift as a read-only administrator.
For more information about the previous tasks, see the following topics.
- For information about creating an Amazon Redshift managed catalog, see Creating an Amazon Redshift managed catalog in the Amazon Glue Data Catalog in the Amazon Lake Formation Developer Guide.
- For information about the service-linked role for Amazon Redshift, see Using service-linked roles for Amazon Redshift in the Amazon Redshift Management Guide.
- For information about read-only administrator permissions for Lake Formation, see Lake Formation personas and IAM permissions reference in the Amazon Lake Formation Developer Guide.
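If you want to script these two tasks instead of using the console, the following Boto3 sketch creates the service-linked role and registers it as a Lake Formation read-only administrator. It assumes the caller is allowed to update the data lake settings, and the exception handling around the service-linked role is a simplification.
import boto3

iam = boto3.client("iam")
lakeformation = boto3.client("lakeformation")
account_id = boto3.client("sts").get_caller_identity()["Account"]

# Create the service-linked role for Amazon Redshift (the call fails if the
# role already exists, which this sketch simply ignores).
try:
    iam.create_service_linked_role(AWSServiceName="redshift.amazonaws.com")
except iam.exceptions.InvalidInputException:
    pass

role_arn = (
    f"arn:aws:iam::{account_id}:role/aws-service-role/"
    "redshift.amazonaws.com/AWSServiceRoleForRedshift"
)

# Add the role as a Lake Formation read-only administrator. PutDataLakeSettings
# replaces the existing settings, so read them first and append to them.
settings = lakeformation.get_data_lake_settings()["DataLakeSettings"]
admins = settings.setdefault("ReadOnlyAdmins", [])
if {"DataLakePrincipalIdentifier": role_arn} not in admins:
    admins.append({"DataLakePrincipalIdentifier": role_arn})
lakeformation.put_data_lake_settings(DataLakeSettings=settings)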
Configure permissions for the target Amazon Glue catalog
Before creating a target catalog for zero-ETL integration, you must create the Lake Formation target creation role and the Amazon Glue data transfer role. Use the Lake Formation target creation role to create the target catalog. When creating the target catalog, enter the Glue data transfer role in the IAM role field in the Access from engines section.
The target creation role must be a Lake Formation administrator and requires the following permissions.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "lakeformation:RegisterResource", "Resource": "*" }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": [ "s3:PutEncryptionConfiguration", "iam:PassRole", "glue:CreateCatalog", "glue:GetCatalog", "s3:PutBucketTagging", "s3:PutLifecycleConfiguration", "s3:PutBucketPolicy", "s3:CreateBucket", "redshift-serverless:CreateNamespace", "s3:DeleteBucket", "s3:PutBucketVersioning", "redshift-serverless:CreateWorkgroup" ], "Resource": [ "arn:aws:glue:*:
account-id
:catalog", "arn:aws:glue:*:account-id
:catalog/*", "arn:aws:s3:::*", "arn:aws:redshift-serverless:*:account-id
:workgroup/*", "arn:aws:redshift-serverless:*:account-id
:namespace/*", "arn:aws:iam::account-id
:role/GlueDataCatalogDataTransferRole" ] } ] }
The target creation role must have the following trust relationship.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "glue.amazonaws.com" }, "Action": "sts:AssumeRole" }, { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::
account-id
:user/Username" }, "Action": "sts:AssumeRole" } ] }
The Glue data transfer role is required for MySQL catalog operations and must have the following permissions.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "DataTransferRolePolicy", "Effect": "Allow", "Action": [ "kms:GenerateDataKey", "kms:Decrypt", "glue:GetCatalog", "glue:GetDatabase" ], "Resource": [ "*" ] } ] }
The Glue data transfer role must have the following trust relationship.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "glue.amazonaws.com", "redshift.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] }
Next steps
With a source Aurora DB cluster and either an Amazon Redshift target data warehouse or an Amazon SageMaker lakehouse, you can create a zero-ETL integration and replicate data. For instructions, see Creating Aurora zero-ETL integrations with Amazon Redshift or Creating Aurora zero-ETL integrations with an Amazon SageMaker lakehouse.