Troubleshooting Aurora zero-ETL integrations
You can check the state of a zero-ETL integration by querying the SVV_INTEGRATION system table in the analytics destination. If the state column has a value of ErrorState, it means something's wrong. For more information, see Monitoring integrations using system tables for Amazon Redshift.
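For example, you can run a query like the following in the analytics destination. This is a minimal sketch; the integration ID is a placeholder, and the column names follow the SVV_INTEGRATION documentation.

```sql
-- Check the state of a single integration in the analytics destination.
-- Replace the integration ID (a placeholder) with your own.
SELECT integration_id, state
FROM svv_integration
WHERE integration_id = 'a1b2c3d4-5678-90ab-cdef-EXAMPLE11111';
```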
Use the following information to troubleshoot common issues with Aurora zero-ETL integrations.
Important
Resync and refresh operations are not available for zero-ETL integrations with an Amazon SageMaker AI lakehouse. If there are issues with an integration, you must delete the integration and create a new integration. You can't refresh or resync an existing integration.
I can't create a zero-ETL integration
If you can't create a zero-ETL integration, make sure that the following are correct for your source database:
- Your source database must be running a supported DB engine version. For a list of supported versions, see Supported Regions and Aurora DB engines for zero-ETL integrations.
- You correctly configured DB parameters. If the required parameters are set incorrectly or not associated with the database, creation fails. See Step 1: Create a custom DB cluster parameter group. A quick spot-check for both items is sketched after this list.
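On an Aurora MySQL source, you can spot-check both requirements from a SQL session. This is a minimal sketch; the variable names shown are an illustrative subset of the required settings, not the full list from Step 1.

```sql
-- On the Aurora MySQL source: report the running engine version.
SELECT AURORA_VERSION();

-- Illustrative subset of the binlog settings that zero-ETL requires;
-- confirm the full list and required values in Step 1.
SHOW VARIABLES WHERE Variable_name IN ('binlog_format', 'binlog_row_image');
```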
In addition, make sure the following are correct for your target data warehouse:
- Case sensitivity is enabled. See Turn on case sensitivity for your data warehouse. You can verify this with the query shown after this list.
- You added the correct authorized principal and integration source. See Configure authorization for your Amazon Redshift data warehouse.
- The data warehouse is encrypted (if it's a provisioned cluster). See Amazon Redshift database encryption.
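As a quick check of the case-sensitivity requirement, you can read the parameter back in the Amazon Redshift destination. A minimal sketch, assuming the standard enable_case_sensitive_identifier parameter:

```sql
-- Run in the Amazon Redshift destination. The value should be on
-- before you create the integration.
SHOW enable_case_sensitive_identifier;
```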
My integration is stuck in a state of Syncing
Your integration might consistently show a status of Syncing if you change the value of one of the required DB parameters.
To fix this issue, check the values of the parameters in the parameter group associated with the source DB cluster, and make sure that they match the required values. For more information, see Step 1: Create a custom DB cluster parameter group.
If you modify any parameters, make sure to reboot the DB cluster to apply the changes.
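After the reboot, you can confirm on the source that the changes took effect. A minimal sketch for an Aurora MySQL source; the variables shown are an illustrative subset:

```sql
-- On the Aurora MySQL source, after the reboot: these should match
-- the required values from Step 1.
SELECT @@binlog_format, @@binlog_row_image;
```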
My tables aren't replicating to Amazon Redshift
If you don't see one or more tables reflected in Amazon Redshift, you can run the following command to resynchronize them:
```sql
ALTER DATABASE dbname INTEGRATION REFRESH TABLES table1, table2;
```
For more information, see ALTER DATABASE in the Amazon Redshift SQL reference.
Your data might not be replicating because one or more of your source tables doesn't have a primary key. The monitoring dashboard in Amazon Redshift displays the status of these tables as Failed, and the status of the overall zero-ETL integration changes to Needs attention. To resolve this issue, you can identify an existing key in your table that can become a primary key, or you can add a synthetic primary key, as sketched below. For detailed solutions, see the following resources:
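As a rough illustration of the synthetic-key approach on an Aurora MySQL source (the table and column names are hypothetical):

```sql
-- Add an auto-increment synthetic primary key to a table that has none.
-- Table and column names are illustrative.
ALTER TABLE orders
    ADD COLUMN synthetic_id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY;
```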
One or more of my Amazon Redshift tables requires a resync
Running certain commands on your source database might require your tables to be resynchronized. In these cases, the SVV_INTEGRATION_TABLE_STATE system view shows a table_state of ResyncRequired, which means that the integration must completely reload data for that specific table from MySQL to Amazon Redshift.
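To list the affected tables, you can filter that view in the analytics destination. A minimal sketch; the schema_name and table_name columns are assumed from the SVV_INTEGRATION_TABLE_STATE documentation:

```sql
-- Run in the analytics destination: list tables that require a resync.
SELECT schema_name, table_name, table_state
FROM svv_integration_table_state
WHERE table_state = 'ResyncRequired';
```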
When the table starts to resynchronize, it enters a state of Syncing. You don't need to take any manual action to resynchronize a table. While table data is resynchronizing, you can't access it in Amazon Redshift.
The following are some example operations that can put a table into a ResyncRequired state, and possible alternatives to consider.
| Operation | Example | Alternative |
|---|---|---|
| Adding a column into a specific position | `ALTER TABLE ... ADD COLUMN ... FIRST` or `ALTER TABLE ... ADD COLUMN ... AFTER column` | Amazon Redshift doesn't support adding columns into specific positions using the FIRST or AFTER keywords. If the order of columns in the target table isn't critical, add the column to the end of the table using a simpler `ALTER TABLE ... ADD COLUMN ...` command. |
| Adding a timestamp column with the default CURRENT_TIMESTAMP | `ALTER TABLE ... ADD COLUMN ... TIMESTAMP DEFAULT CURRENT_TIMESTAMP` | The CURRENT_TIMESTAMP value for existing table rows is calculated by Aurora MySQL and can't be simulated in Amazon Redshift without a full table data resynchronization. If possible, switch the default value to a literal constant. |
| Performing multiple column operations within a single command | `ALTER TABLE ... ADD COLUMN ..., RENAME COLUMN ... TO ...` | Consider splitting the command into two separate operations, ADD and RENAME, which won't require resynchronization. |
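For instance, the last row's alternative might look like the following on an Aurora MySQL source (table and column names are illustrative):

```sql
-- Two separate statements instead of one combined ALTER TABLE, which
-- avoids putting the table into a ResyncRequired state.
ALTER TABLE orders ADD COLUMN note VARCHAR(255);
ALTER TABLE orders RENAME COLUMN old_note TO legacy_note;
```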
Integration failed issues for Amazon SageMaker AI lakehouse zero-ETL integrations
If you encounter issues with an existing zero-ETL integration with an Amazon SageMaker AI lakehouse, the only resolution is to delete the integration and create a new one. Unlike zero-ETL integrations with other targets, integrations with a SageMaker AI lakehouse don't support refresh or resync operations.
To resolve integration issues:
- Delete the problematic zero-ETL integration using the console, CLI, or API.
- Verify that the source database and target data warehouse configurations are correct.
- Create a new zero-ETL integration with the same or updated configuration.
This process will result in a complete re-initialization of the data pipeline, which may take time depending on the size of your source database.
DDL changes appear in Amazon Redshift before the DDL transaction completes for Aurora PostgreSQL
DDL changes can appear in Amazon Redshift before a DDL operation finishes in Aurora PostgreSQL zero-ETL integrations. For more information, see DDL operations for Aurora PostgreSQL.