

# Data Preparation Experience (New)


Data preparation transforms raw data into a format optimized for analysis and visualization. In business intelligence, this crucial process involves cleaning, structuring, and enriching data to enable meaningful business insights.

Amazon Quick Sight's data preparation interface revolutionizes this process with an intuitive, visual experience that enables users to create analysis-ready datasets without SQL expertise. Through its modern, streamlined approach, users can efficiently create and manage business intelligence datasets. The visual interface presents a clear, sequential view of data transformations, allowing authors to track changes from the initial state to the final output with precision.

The platform emphasizes collaboration and reusability, enabling teams to share and repurpose workflows across the organization. This collaborative design promotes consistency in data transformation practices while eliminating redundant work, ultimately fostering standardized processes across teams and enhancing overall efficiency.

**Topics**
+ [Components within the data preparation experience](data-prep-components.md)
+ [Data preparation steps](data-prep-steps.md)
+ [Advanced workflow capabilities](advanced-workflow-capabilities.md)
+ [SPICE-only features](spice-only-features.md)
+ [Switching between data preparation experiences](switching-between-data-prep-experiences.md)
+ [Features not supported in the new data preparation experience](unsupported-features.md)
+ [Data preparation limits](data-preparation-limits.md)
+ [Ingestion behavior changes](ingestion-behavior-changes.md)
+ [Frequently asked questions](new-data-prep-faqs.md)

# Components within the data preparation experience

Amazon Quick Sight's data preparation experience has the following core components.

## Workflow


A workflow in Quick Sight's data preparation experience represents a sequential series of data transformation steps that guide your dataset from its raw state to an analysis-ready form. These workflows are designed for reusability, enabling analysts to leverage and build upon existing work while maintaining consistent data transformation standards throughout the organization.

While workflows can accommodate multiple paths through various Inputs or through Divergence (detailed in subsequent sections), they must ultimately converge into a single output table. This unified structure ensures data consistency and streamlined analysis capabilities.

## Transformation


A transformation is a specific data manipulation operation that changes the structure, format, or content of your data. Quick Sight's data preparation experience offers various transformation types including Join, Filter, Aggregate, Pivot, Unpivot, Append, and Calculated Columns. Each transformation type serves a distinct purpose in reshaping your data to meet analytical requirements. These transformations are implemented as individual steps within your workflow.

## Step


A step is a collection of homogeneous transformations of the same type applied within your workflow. Each step contains one or more related operations of the same transformation category. For example, a Rename step can include multiple column renaming operations, and a Filter step can contain multiple filtering conditions, all managed as a single unit in your workflow.

Most steps can include multiple operations, with two notable exceptions: Join and Append steps are limited to two input tables per step. To join or append more than two tables, you can create additional Join or Append steps in sequence.

Steps are displayed in order, with each step building upon the results of previous steps, allowing you to track the progressive transformation of your data. To rename or delete a step, select it and choose the three-dot menu.

## Connector


The connector links two steps with an arrow indicating the workflow direction. You can delete a connector by selecting it and pressing the delete key. To add a step between two existing steps, simply delete the connector, add the new step, and reconnect the steps by dragging your mouse between them.

## Configure pane


The **Configuration pane** is the interactive area where you define parameters and settings for a selected step. When you select a step in your workflow, this pane displays relevant options for that specific transformation type. For example, when configuring a Join step, you can select the join type, matching columns, and other join-specific settings. The **Configuration pane**'s point-and-click interface eliminates the need for SQL knowledge.

## Preview pane


The **Preview pane** displays a real-time sample of your data as it appears after applying the current transformation step. This immediate visual feedback helps you verify that each transformation produces the expected results before proceeding to the next step. The **Preview pane** updates dynamically as you modify step configurations, enabling iterative refinement of data transformations with confidence.

These components work together to create an intuitive, visual data preparation experience that makes complex data transformations accessible to business users without requiring technical expertise.

# Data preparation steps


Amazon Quick Sight's data preparation experience offers eleven powerful step types that enable you to transform your data systematically. Each step serves a specific purpose in the data preparation workflow.

Steps can be configured through an intuitive interface in the **Configuration** pane, with immediate feedback visible in the **Preview** pane. Steps can be combined sequentially to create sophisticated data transformations without requiring SQL expertise.

Each step can receive input from either a physical table or the output of a previous step. Most steps accept a single input; the exceptions are Append and Join steps, which require exactly two inputs.

## Input


The Input step initiates your data preparation workflow in Quick Sight by allowing you to select and import data from multiple sources for transformation in subsequent steps.

**Input options**
+ **Add Dataset**

  Leverage existing Quick Sight datasets as input sources, building upon data that has already been prepared and optimized by your team.
+ **Add Data Source**

  Connect directly to databases such as Amazon Redshift, Athena, RDS, or other supported sources by selecting specific database objects and providing connection parameters.
+ **Add File Upload**

  Import data directly from local files in formats such as CSV, TSV, Excel, or JSON.

**Configuration**

The Input step requires no configuration. The **Preview** pane displays your imported data along with source information, including connection details, table name, and column metadata.

**Usage notes**
+ Multiple Input steps can exist within a single workflow.
+ You can add Input steps at any point in your workflow.

## Add Calculated Columns


The Add Calculated Columns step enables you to create new columns using row-level expressions that perform calculations on existing columns. You can create new columns using scalar (row-level) functions and operators, and apply row-level calculations that reference existing columns.

**Configuration**

To configure the Add Calculated Columns step, in the **Configuration** pane:

1. Name your new calculated column.

1. Build expressions using the calculation editor, which supports row-level functions and operators (such as [ifelse](ifelse-function.md) and [round](round-function.md)).

1. Save your calculation.

1. Preview the expression results.

1. Add more calculated columns as needed.

**Usage notes**
+ Only scalar (row-level) calculations are supported in this step.
+ In SPICE, calculated columns are materialized and function as standard columns in subsequent steps.
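The row-level behavior described above can be sketched with pandas. This is an analogy only, not Quick Sight's implementation, and the column names are hypothetical; `np.where` and `round` stand in for the `ifelse` and `round` functions mentioned in step 2.

```python
import pandas as pd
import numpy as np

# Hypothetical sales data; column names are illustrative only.
df = pd.DataFrame({"revenue": [1234.567, 89.1], "cost": [1000.0, 100.0]})

# Row-level calculation comparable to ifelse(revenue > cost, 'profit', 'loss').
df["outcome"] = np.where(df["revenue"] > df["cost"], "profit", "loss")

# Row-level calculation comparable to round(revenue, 2).
df["revenue_rounded"] = df["revenue"].round(2)

print(df)
```

Because the calculation is scalar, each output value depends only on its own row; no grouping or aggregation is involved.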

## Change Data Type


Quick Sight simplifies data type management by supporting four abstract data types: `date`, `decimal`, `integer`, and `string`. These abstract types eliminate complexity by automatically mapping various source data types to their Quick Sight equivalents. For instance, `tinyint`, `smallint`, `integer`, and `bigint` are all mapped to `integer`, while `date`, `datetime`, and `timestamp` are mapped to `date`.

This abstraction means you only need to understand Quick Sight's four data types, as Quick Sight handles all underlying data type conversions and calculations automatically when interacting with different data sources.

**Configuration**

To configure the Change Data Type step, in the **Configuration** pane:

1. Select a column to convert.

1. Choose the target data type (`string`, `integer`, `decimal`, or `date`).

1. For date conversions, specify format settings and preview results based on input formats. See the [supported date formats](supported-data-types-and-values.md) in Quick Sight.

1. Add additional columns to convert as needed.

**Usage notes**
+ Convert multiple columns' data types in a single step for efficiency.
+ When using SPICE, all data type changes are materialized in the imported data.
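The conversions in steps 1 through 3 can be sketched with pandas as an analogy (Quick Sight performs these conversions internally; the column names and the `%Y-%m-%d` input format here are hypothetical):

```python
import pandas as pd

# Illustrative raw data with string-typed columns, as it might arrive from a CSV.
df = pd.DataFrame({
    "order_id": ["1001", "1002"],
    "amount": ["19.99", "5.00"],
    "order_date": ["2025-01-15", "2025-02-01"],
})

# string -> integer
df["order_id"] = df["order_id"].astype("int64")
# string -> decimal
df["amount"] = df["amount"].astype("float64")
# string -> date, with an explicit input format (analogous to step 3's format setting)
df["order_date"] = pd.to_datetime(df["order_date"], format="%Y-%m-%d")

print(df.dtypes)
```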

## Rename Columns


The Rename Columns step enables you to modify column names to be more descriptive, user-friendly, and consistent with your organization's naming conventions.

**Configuration**

To configure the Rename Columns step, in the **Configuration** pane:

1. Select a column to rename.

1. Enter a new name for the selected column.

1. Add more columns to rename as needed.

**Usage notes**
+ All column names must be unique within your dataset.

## Select Columns


The Select Columns step enables you to streamline your dataset by including, excluding, and reordering columns. This helps optimize your data structure by removing unnecessary columns and organizing the remaining ones in a logical sequence for analysis.

**Configuration**

To configure the Select Columns step, in the **Configuration** pane:

1. Choose specific columns to include in your output.

1. Select columns in your preferred order to establish sequence.

1. Use **Select All** to include remaining columns in their original order.

1. Exclude unwanted columns by leaving them unselected.

**Key Features**
+ Output columns appear in the order of selection.
+ **Select All** preserves the original column sequence.

**Usage notes**
+ Unselected columns are removed from subsequent steps.
+ Optimize dataset size by removing unnecessary columns.

## Append


The Append step vertically combines two tables, similar to a SQL UNION ALL operation. Quick Sight automatically matches columns by name rather than sequence, enabling efficient data consolidation even when tables have different column orders or varying numbers of columns.

**Configuration**

To configure the Append step, in the **Configuration** pane:

1. Select two input tables to append.

1. Review the output column sequence.

1. Examine which columns are present in both tables versus single tables.

**Key features**
+ Matches columns by name instead of sequence.
+ Retains all rows from both tables, including duplicates.
+ Supports tables with different numbers of columns.
+ Follows Table 1's column sequence for matching columns, then adds unique columns from Table 2.
+ Shows clear source indicators for all columns.

**Usage notes**
+ Use a Rename step first when appending columns with different names.
+ Each Append step combines exactly two tables; use additional Append steps for more tables.
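The match-by-name behavior above can be sketched with pandas `concat`, which also matches columns by name rather than position. This is an analogy only; the table and column names are hypothetical.

```python
import pandas as pd

# Two illustrative tables with different column orders and an extra column in t2.
t1 = pd.DataFrame({"region": ["US"], "sales": [100]})
t2 = pd.DataFrame({"sales": [200], "region": ["EU"], "channel": ["web"]})

# Like the Append step (and SQL UNION ALL), columns are matched by name,
# all rows are kept, and columns missing from one table become null.
appended = pd.concat([t1, t2], ignore_index=True)

print(appended)
```

Note that the output follows `t1`'s column sequence (`region`, `sales`) and then adds the column unique to `t2` (`channel`), mirroring the key feature described above.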

## Join


The Join step horizontally combines data from two tables based on matching values in specified columns. Quick Sight supports Left Outer, Right Outer, Full Outer, and Inner Join types, providing flexible options for your analytical needs. The step includes intelligent column conflict resolution that automatically handles duplicate column names. While self-joins aren't available as a specific join type, you can achieve similar results using workflow divergence.

**Configuration**

To configure the Join step, in the **Configuration** pane:

1. Select two input tables to join.

1. Choose your join type (Left Outer, Right Outer, Full Outer, or Inner).

1. Specify join keys from each table.

1. Review auto-resolved column name conflicts.

**Key features**
+ Supports multiple join types for different analytical needs.
+ Automatically resolves duplicate column names.
+ Accepts calculated columns as join keys.

**Usage notes**
+ Join keys must have compatible data types; use the Change Data Type step if needed.
+ Each Join step combines exactly two tables; use additional Join steps for more tables.
+ Create a Rename step after the Join to customize auto-resolved column headers.
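The join semantics above can be sketched with pandas `merge`. This is an analogy, not Quick Sight's implementation; the table names are hypothetical, and the `suffixes` argument stands in for the step's automatic resolution of duplicate column names.

```python
import pandas as pd

orders = pd.DataFrame({"cust_id": [1, 2, 3], "total": [50, 75, 20]})
customers = pd.DataFrame({"cust_id": [1, 2], "name": ["Ana", "Bo"], "total": [2, 5]})

# Left outer join on the cust_id key; rows in orders without a matching
# customer are kept, with nulls in the columns coming from customers.
joined = orders.merge(customers, on="cust_id", how="left", suffixes=("_t1", "_t2"))

print(joined)
```

Changing `how` to `"right"`, `"outer"`, or `"inner"` corresponds to the other three join types listed in step 2.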

## Aggregate


The Aggregate step enables you to summarize data by grouping columns and applying aggregation operations. This powerful transformation condenses detailed data into meaningful summaries based on your specified dimensions. Quick Sight simplifies complex SQL operations through an intuitive interface, offering comprehensive aggregation functions including advanced string operations like `ListAgg` and `ListAgg distinct`.

**Configuration**

To configure the Aggregate step, in the **Configuration** pane:

1. Select columns to group by.

1. Choose aggregation functions for measure columns.

1. Customize output column names.

1. For `ListAgg` and `ListAgg distinct`:

   1. Select the column to aggregate.

   1. Choose a separator (comma, dash, semicolon, or vertical line).

1. Preview the summarized data.

**Supported functions per data type**


| Data Type | Supported Functions | 
| --- | --- | 
|  Numeric  |  `Average`, `Sum`, `Count`, `Count Distinct`, `Max`, `Min`  | 
|  Date  |  `Count`, `Count Distinct`, `Max`, `Min`, `ListAgg`, `ListAgg distinct` (for date only)  | 
|  String  |  `ListAgg`, `ListAgg distinct`, `Count`, `Count Distinct`, `Max`, `Min`  | 

**Key features**
+ Applies different aggregation functions to columns within the same step.
+ **Group by** without aggregation functions acts as SQL SELECT DISTINCT.
+ `ListAgg` concatenates all values; `ListAgg distinct` includes only unique values.
+ `ListAgg` functions maintain ascending sort order by default.

**Usage notes**
+ Aggregation significantly reduces row count in your dataset.
+ `ListAgg` and `ListAgg distinct` support `date` values but not `datetime`.
+ Use separators to customize string concatenation output.
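The grouping, `Sum`, and `ListAgg` behavior described above can be sketched with pandas `groupby`. This is an analogy only; the column names are hypothetical, and the lambda imitates `ListAgg` with a comma separator and the default ascending sort.

```python
import pandas as pd

df = pd.DataFrame({
    "segment": ["Consumer", "Consumer", "Corporate"],
    "product": ["Pen", "Desk", "Pen"],
    "sales": [10, 40, 25],
})

agg = (
    df.groupby("segment", as_index=False)
      .agg(
          total_sales=("sales", "sum"),
          # ListAgg-style concatenation: comma separator, ascending sort
          products=("product", lambda s: ",".join(sorted(s))),
      )
)

print(agg)
```

Using `set(s)` inside the lambda before sorting would imitate `ListAgg distinct`, which keeps only unique values.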

## Filter


The Filter step enables you to narrow down your dataset by including only rows that meet specific criteria. You can apply multiple filter conditions within a single step, all combining through `AND` logic to help focus your analysis on relevant data.

**Configuration**

To configure the Filter step, in the **Configuration** pane:

1. Select a column to filter.

1. Choose a comparison operator.

1. Specify filter values based on the column's data type.

1. Add additional filter conditions across different columns if needed.

**Note**  
String filters with "is in" or "is not in": Enter multiple values (one per line).
Numeric and date filters: Enter single values (except "between" which requires two values).

**Supported operators per data type**


| Data Type | Supported Operators | 
| --- | --- | 
|  Integer and Decimal  |  Equals, Does not equal, Greater than, Less than, Is greater than or equal to, Is less than or equal to, Is between  | 
|  Date  |  After, Before, Is between, Is after or equal to, Is before or equal to  | 
|  String  |  Equals, Does not equal, Starts with, Ends with, Contains, Does not contain, Is in, Is not in  | 

**Usage notes**
+ Apply multiple filter conditions in a single step.
+ Mix conditions across different data types.
+ Preview filtered results in real-time.
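The `AND` combination of filter conditions can be sketched with pandas boolean masks. This is an analogy only; the column names and thresholds are hypothetical.

```python
import pandas as pd

df = pd.DataFrame({
    "state": ["CA", "NY", "CA", "TX"],
    "sales": [120, 80, 40, 300],
})

# Two conditions combined with AND, mirroring how multiple filter
# conditions in one Filter step are applied together.
is_in_states = df["state"].isin(["CA", "TX"])   # "is in" on a string column
above_threshold = df["sales"] > 100             # "greater than" on a numeric column
filtered = df[is_in_states & above_threshold]

print(filtered)
```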

## Pivot


The Pivot step transforms row values into unique columns, converting data from a long format to a wide format for easier comparison and analysis. This transformation requires specifications for value filtering, aggregation, and grouping to manage the output columns effectively.

**Configuration**

To configure the Pivot step, use the following in the **Configuration** pane:

1. **Pivot column**: Select the column whose values will become column headers (e.g., Category).

1. **Pivot column row value**: Filter specific values to include (e.g., Technology, Office Supplies).

1. **Output column header**: Customize new column headers (defaults to pivot column values).

1. **Value column**: Select the column to aggregate (e.g., Sales).

1. **Aggregation function**: Choose the aggregation method (e.g., Sum).

1. **Group by**: Specify organizing columns (e.g., Segment).

![Pivot step configuration example](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/pivot.png)


**Supported aggregation functions per data type**


| Data Type | Supported Functions | 
| --- | --- | 
|  Integer and Decimal  |  `Average`, `Sum`, `Count`, `Count Distinct`, `Max`, `Min`  | 
|  Date  |  `Count`, `Count Distinct`, `Max`, `Min`, `ListAgg`, `ListAgg distinct` (date values only)  | 
|  String  |  `ListAgg`, `ListAgg distinct`, `Count`, `Count Distinct`, `Max`, `Min`  | 

**Usage notes**
+ Each pivoted column contains aggregated values from the value column.
+ Customize column headers for clarity.
+ Preview transformation results in real-time.
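The configuration above (pivot column, value column, aggregation, and group by) maps closely onto pandas `pivot_table`, shown here as an analogy with hypothetical data matching the Category/Sales/Segment example:

```python
import pandas as pd

df = pd.DataFrame({
    "segment": ["Consumer", "Consumer", "Corporate", "Corporate"],
    "category": ["Technology", "Office Supplies", "Technology", "Technology"],
    "sales": [100, 50, 70, 30],
})

wide = pd.pivot_table(
    df,
    index="segment",      # Group by
    columns="category",   # Pivot column (values become column headers)
    values="sales",       # Value column
    aggfunc="sum",        # Aggregation function
).reset_index()

print(wide)
```

Each category value becomes its own column, and cells with no matching rows (here, Corporate's Office Supplies) are null.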

## Unpivot


The Unpivot step transforms columns into rows, converting wide data into a longer, narrower format. This transformation helps organize data spread across multiple columns into a more structured format for easier analysis and visualization.

**Configuration**

To configure the Unpivot step, in the **Configuration** pane:

1. Select columns to unpivot into rows.

1. Define the output column row values. By default, these are the original column names (for example, Technology, Office Supplies, and Furniture).

1. Name the two new output columns.
   + **Unpivoted column header**: The name for the former column names (e.g., Category).
   + **Unpivoted column values**: The name for the unpivoted values (e.g., Sales).

![Unpivot step configuration example](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/unpivot.png)


**Key features**
+ Retains all non-unpivoted columns in the output.
+ Creates two new columns automatically: one for former column names and one for their corresponding values.
+ Transforms wide data into long format.

**Usage notes**
+ All unpivoted columns must have compatible data types.
+ Row count typically increases after unpivoting.
+ Preview changes in real-time before applying them.
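The wide-to-long behavior above corresponds to pandas `melt`, shown here as an analogy with hypothetical column names matching the Category/Sales example:

```python
import pandas as pd

# Wide data: one column per category, as in the Unpivot example.
wide = pd.DataFrame({
    "segment": ["Consumer", "Corporate"],
    "Technology": [100, 70],
    "Furniture": [30, 20],
})

long = wide.melt(
    id_vars="segment",                  # non-unpivoted columns are retained
    value_vars=["Technology", "Furniture"],
    var_name="Category",                # Unpivoted column header
    value_name="Sales",                 # Unpivoted column values
)

print(long)
```

The two-row, three-column wide table becomes four rows, illustrating why row count typically increases after unpivoting.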

# Advanced workflow capabilities


Amazon Quick Sight's data preparation experience offers sophisticated features that enhance your ability to create complex, reusable data transformations. This section covers two powerful capabilities that extend your workflow potential.

Divergence enables you to create multiple transformation paths from a single step, allowing parallel processing streams that can be recombined later. This capability is particularly valuable for complex scenarios like self-joins and parallel transformations.

Composite Datasets allow you to build hierarchical data structures by using existing datasets as building blocks. This feature promotes collaboration across teams and ensures consistent business logic through reusable, layered transformations.

These capabilities work together to provide flexible workflow designs, enhanced team collaboration, and reusable data transformations. They ensure clear data lineage and enable scalable data preparation solutions, empowering your organization to handle increasingly complex data scenarios with efficiency and clarity.

## Divergence


Divergence enables you to create multiple parallel transformation paths from a single step in your workflow. These paths can be transformed independently and later recombined, enabling complex data preparation scenarios such as self-joins.

**Creating divergent paths**

To initiate a Divergence, in your workflow:

1. Select the step where you want to create divergence.

1. Choose the icon that appears to create a new branch.

1. Configure the new branch that appears.

1. Apply your desired transformations to each path.

1. Use Join or Append steps to recombine paths into a single output.

![Divergence in a data preparation workflow](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/divergence.png)


**Key features**
+ Creates up to five divergent paths from a single step.
+ Applies different transformations to each path.
+ Recombines paths using Join or Append steps.
+ Previews changes in each path independently.

**Best practices**
+ Use divergence for implementing self-joins.
+ Create data copies for parallel transformations.
+ Plan your recombination strategy (Join or Append).
+ Maintain clear path naming for better workflow visibility.
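The self-join pattern recommended above can be sketched with pandas as an analogy (the table and column names are hypothetical): one branch keeps the raw rows, a second branch aggregates the same source, and a Join recombines them.

```python
import pandas as pd

sales = pd.DataFrame({
    "region": ["US", "US", "EU"],
    "amount": [100, 50, 70],
})

# Branch 1: the raw rows. Branch 2: regional totals derived from the same source.
regional_totals = (
    sales.groupby("region", as_index=False)["amount"].sum()
         .rename(columns={"amount": "region_total"})
)

# Recombine the branches with a join, giving each row its region's total;
# this is the self-join scenario that divergence enables.
result = sales.merge(regional_totals, on="region", how="left")
result["share"] = result["amount"] / result["region_total"]

print(result)
```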

## Composite Datasets


Composite Datasets enable you to build upon existing datasets, creating hierarchical data transformation structures that can be shared and reused across your organization. Quick Sight supports up to 10 levels of composite datasets in both SPICE and Direct Query modes.

**Creating a composite dataset**

To create a composite dataset, in your workflow:

1. Select the Input step when creating a new dataset.

1. Choose **Dataset** as your source under **Add Data**.

1. Select an existing dataset to build upon.

1. Apply additional transformations as needed.

1. Save as a new dataset.

**Key features**
+ Builds hierarchical data transformation structures.
+ Supports up to 10 levels of dataset nesting.
+ Compatible with both SPICE and Direct Query.
+ Maintains clear data lineage.
+ Enables team-specific transformations.

This feature enhances collaboration across different teams. For example:


| Role | Action | Output | 
| --- | --- | --- | 
|  Global Analyst  |  Creates dataset with global business logic  |  Dataset A  | 
|  Americas Analyst  |  Uses Dataset A, adds regional logic  |  Dataset B  | 
|  US-West Analyst  |  Uses Dataset B, adds local logic  |  Dataset C  | 

This hierarchical approach promotes consistent business logic across your organization by assigning clear ownership of transformation layers. It creates a traceable data lineage while supporting up to 10 levels of dataset nesting, enabling controlled and systematic data transformation management.

**Best practices**
+ Establish clear ownership for each transformation layer.
+ Document dataset relationships and dependencies.
+ Plan hierarchy depth based on business needs.
+ Maintain consistent naming conventions.
+ Review and update upstream datasets carefully.

# SPICE-only features


Amazon Quick Sight's SPICE (Super-fast, Parallel, In-memory Calculation Engine) enables certain computationally intensive data preparation features. These transformations are materialized in SPICE for optimal performance, rather than being executed at query time.

**SPICE-only features**

For the full list of steps and other capabilities that are available only in SPICE, see [SPICE-only features](http://docs.amazonaws.cn/en_us/quick/latest/userguide/spice-only-features.html) in the AWS documentation.

**Features available in both SPICE and DirectQuery**

For the full list of steps and other capabilities available in both query modes, see [SPICE-only features](http://docs.amazonaws.cn/en_us/quick/latest/userguide/spice-only-features.html) in the AWS documentation.

**Best practices**
+ Use SPICE for workflows requiring SPICE-only features.
+ Choose SPICE to optimize performance for complex transformations and large datasets.
+ Consider DirectQuery for real-time data needs when SPICE-only features are not required.

# Switching between data preparation experiences


Legacy data preparation experience refers to the previous data preparation interface in Amazon Quick Sight that existed before October 2025. The new data preparation experience is the enhanced visual interface that shows step-by-step transformation sequences. Legacy datasets are those created before the new data preparation experience, while new datasets are those created after October 2025.

When creating a new dataset, Quick Sight automatically directs you to the new data preparation experience. This visual interface offers enhanced capabilities and improved usability for data transformation tasks.

## Opt-out option


Before saving and publishing a dataset, you have the option to switch back to the legacy data preparation experience, if preferred. This flexibility allows teams to transition at their own pace while becoming familiar with the new interface.

**Important**  
If a dataset is saved and published in the new experience, there is no option to return to the legacy experience. This is by design: the new experience includes significant features that the legacy experience does not support, so datasets cannot be converted directly from one experience to the other. To work in the legacy experience, create a new dataset.

## Transition workflow


Once a dataset is saved in either the new or legacy experience, the transformations cannot be directly converted from one experience to another. However, if a published dataset version exists, you can use version control to go to the previous version which might be in the legacy experience.

Legacy datasets will continue to be accessible for viewing and editing exclusively through the legacy interface. This maintains compatibility with previously established workflows.

Before fully transitioning, take time to familiarize yourself with the new data preparation experience. When working with legacy datasets, consider creating a new version using the new experience for future modifications. Use version control to maintain access to legacy versions of datasets if needed. Document any changes in workflow when transitioning from legacy to new experience to ensure team alignment.

# Features not supported in the new data preparation experience

While the new data preparation experience offers enhanced capabilities, some features from the legacy experience are not yet supported. This section outlines these features and provides guidance for handling affected workflows.

When using unsupported data sources, Amazon Quick Sight automatically defaults to the legacy experience. For other unsupported features, select **Switch to legacy experience** in the top right corner of the data preparation page. Datasets created in the legacy experience remain compatible with both legacy and new experience datasets.

## Unsupported data sources


The following data sources are currently available only in the legacy experience.


| Data Source | Details | 
| --- | --- | 
|  Salesforce  |  Automatically defaults to legacy experience  | 
|  Google Sheets  |  Automatically defaults to legacy experience  | 
|  S3 Analytics  |  Automatically defaults to legacy experience; **S3 data sources are supported** in the new experience  | 

## Other unsupported features


The following features are currently available only in the legacy experience.


| Feature Category | Unsupported features | 
| --- | --- | 
|  Dataset Management  |  [Incremental refresh](refreshing-imported-data.md#refresh-spice-data-incremental), [Dataset parameters](dataset-parameters.md), [Column folders](organizing-fields-folder.md), [Column descriptions](describing-data.md)  | 
|  Data Types  |  [Geospatial](geospatial-data-prep.md), [ELF/CLF formats](supported-data-sources.md#file-data-sources), [Zip/GZip files in S3](supported-data-sources.md#file-data-sources)  | 
|  Configuration Options  |  ["Start from row" in file upload settings](choosing-file-upload-settings.md), JODA date format  | 
|  Parent dataset selection from legacy experience  |  Parent and child datasets must exist in the same experience environment. You cannot use a legacy experience dataset as a parent for a new experience dataset.  | 

## Future development


Amazon Quick Sight plans to implement these features in the new data preparation experience in the future. This approach ensures that the initial launch for the new data preparation experience prioritizes:

**Enhanced capabilities**
+ Visual transformation workflows
+ Improved process transparency
+ Advanced preparation techniques through Divergence
+ Powerful new features like Append, Aggregate, and Pivot

**Flexible adoption**

Users can choose between experiences before publishing datasets, ensuring uninterrupted workflows while teams transition at their own pace. This approach allows immediate access to new capabilities while maintaining support for specialized requirements through the legacy experience.

# Data preparation limits


Amazon Quick Sight's data preparation experience is designed to handle enterprise-scale datasets while maintaining optimal performance. The following limits ensure reliable functionality.

## Dataset size limits (SPICE)

+ **Output size**: Up to 2TB or 2 billion rows
+ **Total input size**: Combined input sources cannot exceed 2TB
+ **Secondary tables size**: Combined size is limited to 20GB

**Note**  
The primary table is the table with the maximum size in a workflow; all other tables are secondary.

## Workflow structure limits

+ **Maximum steps**: Up to 256 transformation steps per workflow
+ **Source tables**: Maximum 32 import steps per workflow
+ **Output columns**: Up to 2048 columns at any intermediate step; the final output table can have up to 2000 columns
+ **Divergent paths**: Maximum 5 paths from a single step (SPICE only, not applicable for DirectQuery)
+ **Dataset as a source**: Up to 10 levels for both SPICE and DirectQuery

These limits are designed to balance flexibility with performance, enabling complex data transformations while ensuring optimal analysis capabilities.

# Ingestion behavior changes


The new data preparation experience introduces an important change in how data quality issues are handled during SPICE ingestion. This change significantly impacts data completeness and transparency in your datasets.

In the legacy experience, when encountering data type inconsistencies (such as incorrect date formats or [similar issues](errors-spice-ingestion.md)), the entire row containing problematic cells is skipped during ingestion. This approach results in fewer rows in the final dataset, potentially obscuring data quality issues.

The new experience takes a more granular approach to data inconsistencies. When encountering problematic cells, only the inconsistent values are converted to null values while retaining the entire row. This preservation ensures that related data in other columns remains accessible for analysis.

**Impact on dataset quality**

Datasets created in the new experience will typically contain more rows than their legacy counterparts when the source data contains inconsistencies. This enhanced approach offers several benefits:
+ Improved data completeness by retaining all rows
+ Greater transparency in identifying data quality issues
+ Better visibility of problematic values for remediation
+ Preservation of related data in unaffected columns

This change enables analysts to identify and address data quality issues more effectively, rather than having problematic rows silently omitted from the dataset.
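The two ingestion behaviors can be contrasted with a small pandas sketch. This is an analogy only, not SPICE's implementation; the column names and the `%Y-%m-%d` format are hypothetical.

```python
import pandas as pd

raw = pd.DataFrame({
    "order_id": [1, 2, 3],
    "order_date": ["2025-01-15", "not-a-date", "2025-03-02"],
})

# Values that fail to parse become null (NaT) instead of raising an error.
parsed = pd.to_datetime(raw["order_date"], format="%Y-%m-%d", errors="coerce")

# Legacy-style behavior: drop every row whose date fails to parse.
legacy_like = raw[parsed.notna()]

# New-experience-style behavior: keep every row, null out only the bad cell.
new_like = raw.assign(order_date=parsed)

print(len(legacy_like), len(new_like))
```

With one bad cell out of three rows, the legacy-style result has two rows while the new-style result keeps all three, with a null where the unparseable date was.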

# Frequently asked questions


## 1. When do users need to switch from the new to legacy experience?


Users must return to the legacy experience when working with datasets that contain currently [unsupported features](unsupported-features.md). Quick Sight is actively working to incorporate these features into the new experience in upcoming releases.

## 2. Why are datasets grayed out when trying to add them in the new experience? Can datasets be combined between legacy and new experiences?


Currently, parent and child datasets must exist within the same experience environment. You cannot combine datasets across legacy and new experiences because the new experience includes additional features not available in legacy, such as Append functionalities, Pivot capabilities, and Divergence.

**Using parent datasets from the legacy experience**

To use parent datasets from the legacy experience, you can switch back to that environment. Simply navigate to the data preparation page and choose **Switch back to legacy experience** in the top right corner. Once there, you can create your child datasets as needed.

**Future development**

We are planning to implement functionality that will allow users to upgrade legacy datasets to the new experience. This upgraded pathway will enable the use of legacy parent datasets within the new experience.

## 3. Why is Quick Sight launching the new data preparation experience before achieving full feature parity with the legacy experience?


The new data preparation experience was developed through extensive customer collaboration to address real-world analytics challenges. The initial launch prioritizes:

**Enhanced capabilities**
+ Visual transformation workflows
+ Improved process transparency
+ Advanced preparation techniques through Divergence
+ Powerful new features like Append, Aggregate, and Pivot

**Flexible adoption**

Users can choose between experiences before publishing datasets, ensuring uninterrupted workflows while teams transition at their own pace. This approach allows immediate access to new capabilities while maintaining support for specialized requirements through the legacy experience.

## 4. Will features currently available only in the legacy experience be added to the new experience?


Yes. Quick Sight is actively working to incorporate legacy features into the new experience.

## 5. How do API changes affect existing dataset creation scripts?


Quick Sight maintains backwards compatibility while introducing new capabilities:
+ Existing Scripts: Legacy API scripts will continue to function, creating datasets in the legacy experience
+ API Naming: Current API names remain unchanged
+ New Functionality: Additional API formats support the new experience's enhanced capabilities
+ Documentation: Complete API specifications for the new experience are available in our API reference

## 6. Can datasets be converted between experiences after publication?

+ Future Migration Path: Quick Sight will add a feature in the future to easily migrate legacy datasets to the new experience.
+ One-Way Process: Converting datasets from the new experience to legacy format isn't supported due to advanced feature dependencies