

# Connectivity issues when using Amazon Athena with Amazon Quick Sight

Following, you can find information about troubleshooting issues that you might encounter when using Amazon Athena with Amazon Quick Sight.

Before you try troubleshooting anything else for Athena, make sure that you can connect to Athena. For information about troubleshooting Athena connection issues, see [I can't connect to Amazon Athena](troubleshoot-connect-athena.md). 

If you can connect but have other issues, it can be useful to run your query in the Athena console ([https://console.amazonaws.cn/athena/](https://console.amazonaws.cn/athena/home)) before adding your query to Amazon Quick Sight. For additional troubleshooting information, see [Troubleshooting](https://docs.amazonaws.cn/athena/latest/ug/troubleshooting.html) in the *Athena User Guide*.

**Topics**
+ [Column not found when using Athena with Amazon Quick Sight](troubleshoot-athena-column-not-found.md)
+ [Invalid data when using Athena with Amazon Quick Sight](troubleshoot-athena-invalid-data.md)
+ [Query timeout when using Athena with Amazon Quick Sight](troubleshoot-athena-query-timeout.md)
+ [Staging bucket no longer exists when using Athena with Amazon Quick Sight](troubleshoot-athena-missing-bucket.md)
+ [Table incompatible when using Amazon Glue with Athena in Amazon Quick Sight](troubleshoot-athena-glue-table-not-upgraded.md)
+ [Table not found when using Athena with Amazon Quick Sight](troubleshoot-athena-table-not-found.md)
+ [Workgroup and output errors when using Athena with Quick Sight](troubleshoot-athena-workgroup.md)

# Column not found when using Athena with Amazon Quick Sight

You can receive a "`column not found`" error if the columns in an analysis are missing from the Athena data source. 

In Amazon Quick Sight, open your analysis. On the **Visualize** tab, choose **Choose dataset**, **Edit analysis data sets**. 

On the **Data sets in this analysis** screen, choose **Edit** near your dataset to refresh it. Amazon Quick Sight caches the schema for two minutes, so it can take up to two minutes for the latest changes to display. 

To investigate how the column was lost in the first place, you can go to the Athena console ([https://console.amazonaws.cn/athena/](https://console.amazonaws.cn/athena/home)) and check the query history to find queries that edited the table.

If this error happened when you were editing a custom SQL query in preview, verify the column name in the query and check for any other syntax errors. For example, check that the column name isn't enclosed in single quotation marks, which are reserved for strings.

If you still have the issue, verify that your tables, columns, and queries comply with Athena requirements. For more information, see [Names for Tables, Databases, and Columns](https://docs.amazonaws.cn/athena/latest/ug/tables-databases-columns-names.html) and [Troubleshooting](https://docs.amazonaws.cn/athena/latest/ug/troubleshooting.html) in the *Athena User Guide*.

# Invalid data when using Athena with Amazon Quick Sight

An invalid data error can occur when you use any operator or function in a calculated field. To address this, verify that the data in the table is consistent with the format that you supplied to the function.

For example, suppose that you are using the function `parseDate(expression, ['format'], ['time_zone'])` as `parseDate(date_column, 'MM/dd/yyyy')`. In this case, all values in `date_column` must conform to the `'MM/dd/yyyy'` format (for example, `'05/12/2016'`). Any value that isn't in this format (for example, `'2016/12/05'`) can cause an error.
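To illustrate the format check, the following is a minimal Python sketch (not Quick Sight's implementation): the `strptime` pattern `%m/%d/%Y` is the Python equivalent of `MM/dd/yyyy`, so any value that fails to parse under it would also fail in `parseDate`.

```python
from datetime import datetime

def matches_format(value: str, fmt: str = "%m/%d/%Y") -> bool:
    """Return True if value parses under the given strptime format."""
    try:
        datetime.strptime(value, fmt)
        return True
    except ValueError:
        return False

print(matches_format("05/12/2016"))  # True: conforms to MM/dd/yyyy
print(matches_format("2016/12/05"))  # False: this value would cause an error
```

Running a check like this over a sample of the column data can help you find nonconforming values before you build the calculated field.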

# Query timeout when using Athena with Amazon Quick Sight

If your query times out, you can try these options to resolve your problem.

If the failure occurred while you were working on an analysis, remember that Amazon Quick Sight's timeout for generating any visual is two minutes. If you're using a custom SQL query, you can simplify the query to reduce its running time. 

If you are in direct query mode (not using SPICE), you can try importing your data to SPICE. However, if your query exceeds the Athena 30-minute timeout, you might get another timeout while importing data into SPICE. For the most current information on Athena limits, see [Amazon Athena Limits](https://docs.amazonaws.cn/general/latest/gr/aws_service_limits.html#amazon-athena-limits) in the *Amazon Web Services General Reference*.

# Staging bucket no longer exists when using Athena with Amazon Quick Sight

Use this section to help solve this error: "**The staging bucket for this query result no longer exists in the underlying data source.**"

When you create a dataset using Athena, Amazon Quick Sight creates an Amazon S3 bucket. By default, this bucket has a name similar to `aws-athena-query-results-<REGION>-<ACCOUNTID>`. If you remove this bucket, your next Athena query might fail with an error saying that the staging bucket no longer exists.

To fix this error, create a new bucket with the same name in the correct Amazon Web Services Region.

# Table incompatible when using Amazon Glue with Athena in Amazon Quick Sight

If you are getting errors when using Amazon Glue tables in Athena with Amazon Quick Sight, it might be because you're missing some metadata. Follow these steps to find out whether your tables are missing the `TableType` attribute that Amazon Quick Sight needs for the Athena connector to work. Usually, the metadata for these tables wasn't migrated to the Amazon Glue Data Catalog. For more information, see [Upgrading to the Amazon Glue Data Catalog Step-by-Step](https://docs.amazonaws.cn/athena/latest/ug/glue-upgrade.html) in the *Amazon Glue Developer Guide*.

If you don't want to migrate to the Amazon Glue Data Catalog at this time, you have two options. You can recreate each Amazon Glue table through the Amazon Glue Management Console. Or you can use the Amazon CLI scripts listed in the following procedure to identify and update tables with missing `TableType` attributes.

If you prefer to use the CLI to do this, use the following procedure to help you design your scripts.

**To use the CLI to design scripts**

1. Use the CLI to learn which Amazon Glue tables have no `TableType` attributes.

   ```
   aws glue get-tables --database-name <your_database_name>
   ```

   For example, you can run the following command in the CLI.

   ```
   aws glue get-tables --database-name "test_database"
   ```

   Following is a sample of what the output looks like. You can see that the table `"table_missing_table_type"` doesn't have the `TableType` attribute declared.

   ```
   {
   		"TableList": [
   			{
   				"Retention": 0,
   				"UpdateTime": 1522368588.0,
   				"PartitionKeys": [
   					{
   						"Name": "year",
   						"Type": "string"
   					},
   					{
   						"Name": "month",
   						"Type": "string"
   					},
   					{
   						"Name": "day",
   						"Type": "string"
   					}
   				],
   				"LastAccessTime": 1513804142.0,
   				"Owner": "owner",
   				"Name": "table_missing_table_type",
   				"Parameters": {
   					"delimiter": ",",
   					"compressionType": "none",
   					"skip.header.line.count": "1",
   					"sizeKey": "75",
   					"averageRecordSize": "7",
   					"classification": "csv",
   					"objectCount": "1",
   					"typeOfData": "file",
   					"CrawlerSchemaDeserializerVersion": "1.0",
   					"CrawlerSchemaSerializerVersion": "1.0",
   					"UPDATED_BY_CRAWLER": "crawl_date_table",
   					"recordCount": "9",
   					"columnsOrdered": "true"
   				},
   				"StorageDescriptor": {
   					"OutputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
   					"SortColumns": [],
   					"StoredAsSubDirectories": false,
   					"Columns": [
   						{
   							"Name": "col1",
   							"Type": "string"
   						},
   						{
   							"Name": "col2",
   							"Type": "bigint"
   						}
   					],
   					"Location": "s3://myAthenatest/test_dataset/",
   					"NumberOfBuckets": -1,
   					"Parameters": {
   						"delimiter": ",",
   						"compressionType": "none",
   						"skip.header.line.count": "1",
   						"columnsOrdered": "true",
   						"sizeKey": "75",
   						"averageRecordSize": "7",
   						"classification": "csv",
   						"objectCount": "1",
   						"typeOfData": "file",
   						"CrawlerSchemaDeserializerVersion": "1.0",
   						"CrawlerSchemaSerializerVersion": "1.0",
   						"UPDATED_BY_CRAWLER": "crawl_date_table",
   						"recordCount": "9"
   					},
   					"Compressed": false,
   					"BucketColumns": [],
   					"InputFormat": "org.apache.hadoop.mapred.TextInputFormat",
   					"SerdeInfo": {
   						"Parameters": {
   						"field.delim": ","
   						},
   						"SerializationLibrary": "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"
   					}
   				}
   			}
   		]
   	}
   ```

1. Edit the table definition in your editor to add `"TableType": "EXTERNAL_TABLE"` to the table definition, as shown in the following example.

   ```
   {
   	"Table": {
   		"Retention": 0,
   		"TableType": "EXTERNAL_TABLE",
   		"PartitionKeys": [
   			{
   				"Name": "year",
   				"Type": "string"
   			},
   			{
   				"Name": "month",
   				"Type": "string"
   			},
   			{
   				"Name": "day",
   				"Type": "string"
   			}
   		],
   		"UpdateTime": 1522368588.0,
   		"Name": "table_missing_table_type",
   		"StorageDescriptor": {
   			"BucketColumns": [],
   			"SortColumns": [],
   			"StoredAsSubDirectories": false,
   			"OutputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
   			"SerdeInfo": {
   				"SerializationLibrary": "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe",
   				"Parameters": {
   					"field.delim": ","
   				}
   			},
   			"Parameters": {
   				"classification": "csv",
   				"CrawlerSchemaSerializerVersion": "1.0",
   				"UPDATED_BY_CRAWLER": "crawl_date_table",
   				"columnsOrdered": "true",
   				"averageRecordSize": "7",
   				"objectCount": "1",
   				"sizeKey": "75",
   				"delimiter": ",",
   				"compressionType": "none",
   				"recordCount": "9",
   				"CrawlerSchemaDeserializerVersion": "1.0",
   				"typeOfData": "file",
   				"skip.header.line.count": "1"
   			},
   			"Columns": [
   				{
   					"Name": "col1",
   					"Type": "string"
   				},
   				{
   					"Name": "col2",
   					"Type": "bigint"
   				}
   			],
   			"Compressed": false,
   			"InputFormat": "org.apache.hadoop.mapred.TextInputFormat",
   			"NumberOfBuckets": -1,
   			"Location": "s3://myAthenatest/test_date_part/"
   		},
   		"Owner": "owner",
   		"Parameters": {
   			"classification": "csv",
   			"CrawlerSchemaSerializerVersion": "1.0",
   			"UPDATED_BY_CRAWLER": "crawl_date_table",
   			"columnsOrdered": "true",
   			"averageRecordSize": "7",
   			"objectCount": "1",
   			"sizeKey": "75",
   			"delimiter": ",",
   			"compressionType": "none",
   			"recordCount": "9",
   			"CrawlerSchemaDeserializerVersion": "1.0",
   			"typeOfData": "file",
   			"skip.header.line.count": "1"
   		},
   		"LastAccessTime": 1513804142.0
   	}
   }
   ```

1. You can adapt the following script to update the table input, so that it includes the `TableType` attribute.

   ```
   aws glue update-table --database-name <your_database_name> --table-input <updated_table_input>
   ```

   The following shows an example. 

   ```
   aws glue update-table --database-name test_database --table-input '
   	{
   			"Retention": 0,
   			"TableType": "EXTERNAL_TABLE",
   			"PartitionKeys": [
   				{
   					"Name": "year",
   					"Type": "string"
   				},
   				{
   					"Name": "month",
   					"Type": "string"
   				},
   				{
   					"Name": "day",
   					"Type": "string"
   				}
   			],
   			"Name": "table_missing_table_type",
   			"StorageDescriptor": {
   				"BucketColumns": [],
   				"SortColumns": [],
   				"StoredAsSubDirectories": false,
   				"OutputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
   				"SerdeInfo": {
   					"SerializationLibrary": "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe",
   					"Parameters": {
   						"field.delim": ","
   					}
   				},
   				"Parameters": {
   					"classification": "csv",
   					"CrawlerSchemaSerializerVersion": "1.0",
   					"UPDATED_BY_CRAWLER": "crawl_date_table",
   					"columnsOrdered": "true",
   					"averageRecordSize": "7",
   					"objectCount": "1",
   					"sizeKey": "75",
   					"delimiter": ",",
   					"compressionType": "none",
   					"recordCount": "9",
   					"CrawlerSchemaDeserializerVersion": "1.0",
   					"typeOfData": "file",
   					"skip.header.line.count": "1"
   				},
   				"Columns": [
   					{
   						"Name": "col1",
   						"Type": "string"
   					},
   					{
   						"Name": "col2",
   						"Type": "bigint"
   					}
   				],
   				"Compressed": false,
   				"InputFormat": "org.apache.hadoop.mapred.TextInputFormat",
   				"NumberOfBuckets": -1,
   				"Location": "s3://myAthenatest/test_date_part/"
   			},
   			"Owner": "owner",
   			"Parameters": {
   				"classification": "csv",
   				"CrawlerSchemaSerializerVersion": "1.0",
   				"UPDATED_BY_CRAWLER": "crawl_date_table",
   				"columnsOrdered": "true",
   				"averageRecordSize": "7",
   				"objectCount": "1",
   				"sizeKey": "75",
   				"delimiter": ",",
   				"compressionType": "none",
   				"recordCount": "9",
   				"CrawlerSchemaDeserializerVersion": "1.0",
   				"typeOfData": "file",
   				"skip.header.line.count": "1"
   			},
   			"LastAccessTime": 1513804142.0
   		}'
   ```
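
If you have many tables to fix, you can generate the table input programmatically instead of editing JSON by hand. The following Python sketch (an assumption for illustration, not part of the Quick Sight tooling; the helper name `to_table_input` is hypothetical) keeps only the keys that `update-table` accepts in `--table-input`, drops read-only fields such as `UpdateTime`, and adds the missing `TableType`:

```python
import json

# Keys that Glue accepts in a table input; read-only fields returned by
# get-table (for example, UpdateTime, CreateTime, DatabaseName) must be dropped.
TABLE_INPUT_KEYS = {
    "Name", "Description", "Owner", "LastAccessTime", "LastAnalyzedTime",
    "Retention", "StorageDescriptor", "PartitionKeys", "ViewOriginalText",
    "ViewExpandedText", "TableType", "Parameters",
}

def to_table_input(table):
    """Build a table input from get-table output, defaulting TableType."""
    table_input = {k: v for k, v in table.items() if k in TABLE_INPUT_KEYS}
    table_input.setdefault("TableType", "EXTERNAL_TABLE")
    return table_input

# Trimmed example of a get-table result that is missing TableType.
table = {"Name": "table_missing_table_type", "Retention": 0,
         "UpdateTime": 1522368588.0, "Owner": "owner"}
print(json.dumps(to_table_input(table)))
```

The printed JSON can then be passed to `aws glue update-table --database-name <your_database_name> --table-input '<json>'` as in the preceding example.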

# Table not found when using Athena with Amazon Quick Sight

You can receive a "`table not found`" error if the tables in an analysis are missing from the Athena data source. 

In the Athena console ([https://console.amazonaws.cn/athena/](https://console.amazonaws.cn/athena/home)), check for your table under the corresponding schema. You can recreate the table in Athena and then create a new dataset in Amazon Quick Sight on that table. To investigate how the table was lost in the first place, you can use the Athena console to check the query history. Doing this helps you find the queries that dropped the table.

If this error happened when you were editing a custom SQL query in preview, verify the table name in the query and check for any other syntax errors. Amazon Quick Sight can't infer the schema from the query; the schema must be specified in the query. 

For example, the following statement works.

```
select * from my_schema.my_table
```

The following statement fails because it's missing the schema.

```
select * from my_table
```

If you still have the issue, verify that your tables, columns, and queries comply with Athena requirements. For more information, see [Names for Tables, Databases, and Columns](https://docs.amazonaws.cn/athena/latest/ug/tables-databases-columns-names.html) and [Troubleshooting](https://docs.amazonaws.cn/athena/latest/ug/troubleshooting.html) in the *Athena User Guide*.

# Workgroup and output errors when using Athena with Quick Sight


To verify that workgroups are set up properly, check the following settings:
+ **The Athena workgroup that's associated with the data source must exist.**

  To fix this, you can return to the Athena data source settings and choose a different workgroup. For more information, see [Setting Up Workgroups](https://docs.amazonaws.cn/athena/latest/ug/workgroups-procedure.html) in the *Athena User Guide*.

  Another solution is to have the Amazon Web Services account administrator recreate the workgroup in the Athena console. 
+ **The Athena workgroup that's associated with the data source must be enabled.**

  An Amazon Web Services account administrator needs to enable the workgroup in the Athena console. Open the Athena console by using this direct link: [https://console.amazonaws.cn/athena/](https://console.amazonaws.cn/athena/home). Then choose the appropriate workgroup in the **Workgroup** panel and view its settings. Choose **Enable workgroup**. 
+ **Make sure that you have access to the Amazon S3 output location that's associated with the Athena workgroup.**

  To grant Amazon Quick Sight permissions to access the S3 output location, the Amazon Quick Sight administrator can edit **Security & Permissions** in the **Manage QuickSight** screen. 
+ **The Athena workgroup must have an associated S3 output location.**

  An Amazon Web Services account administrator needs to associate an S3 bucket with the workgroup in the Athena console. Open the Athena console by using this direct link: [https://console.amazonaws.cn/athena/](https://console.amazonaws.cn/athena/home). Then choose the appropriate workgroup in the **Workgroup** panel and view its settings. Set **Query result location**. 