

# Data Preparation Experience (Legacy)


**Topics**
+ [Adding calculations](working-with-calculated-fields.md)
+ [Joining data](joining-data.md)
+ [Preparing data fields for analysis in Amazon Quick Sight](preparing-data-fields.md)
+ [Filtering data in Amazon Quick Sight](adding-a-filter.md)
+ [Previewing tables in a dataset](previewing-tables-in-a-dataset.md)

# Adding calculations


Create calculated fields to transform your data by using one or more of the following: 
+ [Operators](arithmetic-and-comparison-operators.md)
+ [Functions](functions.md)
+ Fields that contain data
+ Other calculated fields

You can add calculated fields to a dataset during data preparation or from the analysis page. When you add a calculated field to a dataset during data preparation, it's available to all analyses that use that dataset. When you add a calculated field to a dataset in an analysis, it's available only in that analysis. For more information about adding calculated fields, see the following topics.

**Topics**
+ [Adding calculated fields](adding-a-calculated-field-analysis.md)
+ [Order of evaluation in Amazon Quick Sight](order-of-evaluation-quicksight.md)
+ [Using level-aware calculations in Quick Sight](level-aware-calculations.md)
+ [Calculated field function and operator reference for Amazon Quick Sight](calculated-field-reference.md)

# Adding calculated fields


Create calculated fields to transform your data by using one or more of the following: 
+ [Operators](arithmetic-and-comparison-operators.md)
+ [Functions](functions.md)
+ Aggregate functions (you can only add these to an analysis)
+ Fields that contain data
+ Other calculated fields

You can add calculated fields to a dataset during data preparation or from the analysis page. When you add a calculated field to a dataset during data preparation, it's available to all analyses that use that dataset. When you add a calculated field to a dataset in an analysis, it's available only in that analysis. 

Analyses support both single-row operations and aggregate operations. Single-row operations are those that supply a (potentially) different result for every row. Aggregate operations supply results that are always the same for entire sets of rows. For example, if you use a simple string function with no conditions, it changes every row. If you use an aggregate function, it applies to all the rows in a group. If you ask for the total sales amount for the US, the same number applies to the entire set. If you ask for data on a particular state, the total sales amount changes to reflect your new grouping. It still provides one result for the entire set.

By creating the aggregated calculated field within the analysis, you can then drill down into the data. The value of that aggregated field is recalculated appropriately for each level. This type of aggregation isn't possible during dataset preparation.

For example, let's say that you want to figure out the percentage of profit for each country, region, and state. You can add a calculated field to your analysis, `(sum(salesAmount - cost)) / sum(salesAmount)`. This field is then calculated for each country, region, and state when an analyst drills down into the geography.
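QuickSight expressions aren't directly runnable, but the drill-down behavior can be sketched in plain Python. The sample rows and field names below are hypothetical; the sketch only shows how the same profit formula yields a different value at each grouping level.

```python
from collections import defaultdict

# Hypothetical sales rows; field names mirror the example expression.
rows = [
    {"country": "US", "state": "WA", "salesAmount": 200.0, "cost": 120.0},
    {"country": "US", "state": "CA", "salesAmount": 300.0, "cost": 150.0},
    {"country": "CA", "state": "ON", "salesAmount": 100.0, "cost": 80.0},
]

def profit_pct_by(level):
    """Recompute (sum(salesAmount - cost)) / sum(salesAmount) per group."""
    sales = defaultdict(float)
    profit = defaultdict(float)
    for r in rows:
        key = r[level]
        sales[key] += r["salesAmount"]
        profit[key] += r["salesAmount"] - r["cost"]
    return {k: profit[k] / sales[k] for k in sales}

# The same formula, recomputed at the country level and after a drill-down:
by_country = profit_pct_by("country")  # {'US': 0.46, 'CA': 0.2}
by_state = profit_pct_by("state")      # {'WA': 0.4, 'CA': 0.5, 'ON': 0.2}
```

Because the aggregation runs per group, drilling from country to state reuses the same expression against finer partitions, which is what the analysis-level calculated field does for you automatically.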

**Topics**
+ [

## Adding calculated fields to an analysis
](#using-the-calculated-field-editor-analysis)
+ [

## Adding calculated fields to a dataset
](#using-the-calculated-field-editor)
+ [

## Handling decimal values in calculated fields
](#handling-decimal-fields)

## Adding calculated fields to an analysis


When you add a dataset to an analysis, every calculated field that exists in the dataset is added to the analysis. You can add additional calculated fields at the analysis level to create calculated fields that are available only in that analysis.

**To add a calculated field to an analysis**

1. Open the [Quick Sight console](https://quicksight.aws.amazon.com/).

1. Open the analysis that you want to change.

1. In the **Data** pane, choose **Add** at top left, and then choose **+ CALCULATED FIELD**.

1. In the calculation editor that opens, do the following:

   1. Enter a name for the calculated field.

   1. Enter a formula using fields from your dataset, functions, and operators.

1. When finished, choose **Save**.

For more information about how to create formulas using the available functions in Quick Sight, see [Calculated field function and operator reference for Amazon Quick Sight](calculated-field-reference.md).

## Adding calculated fields to a dataset


Amazon Quick Sight authors can generate calculated fields during the data preparation phase of a dataset's creation. When you create a calculated field for a dataset, the field becomes a new column in the dataset. All analyses that use the dataset inherit the dataset's calculated fields.

If the calculated field operates at the row level and the dataset is stored in SPICE, Quick Sight computes and materializes the result in SPICE. If the calculated field relies on an aggregation function, Quick Sight retains the formula and performs the calculation when the analysis is generated. This type of calculated field is called an unmaterialized calculated field.

**To add or edit a calculated field for a dataset**

1. Open the dataset that you want to work with. For more information, see [Editing datasets](edit-a-data-set.md).

1. On the data prep page, do one of the following:
   + To create a new field, choose **Add calculated field** at left.
   + To edit an existing calculated field, choose it from **Calculated fields** at left, then choose **Edit** from the context (right-click) menu.

1. In the calculation editor, choose **Add title** and enter a descriptive name for the new calculated field. This name appears in the field list in the dataset, so it should look similar to the other fields. For this example, we name the field `Total Sales This Year`.

1. (Optional) Add a comment, for example to explain what the expression does, by enclosing the text between `/*` and `*/`.

   ```
   /* Calculates sales per year for this year*/
   ```

1. Identify the metrics, functions, and other items to use. For this example, we need to identify the following:
   + The metric to use
   + Functions: `ifelse` and `datediff`

   We want to build a statement like "If the sale happened during this year, show the total sales, and otherwise show 0."

   To add the `ifelse` function, open the **Functions** list. Choose **All** to collapse the full list of functions. Now you should see the function groups: **Aggregate**, **Conditional**, **Date**, and so on. 

   Choose **Conditional**, and then double-click on `ifelse` to add it to the workspace. 

   ```
   ifelse()
   ```

1. Place your cursor inside the parentheses in the workspace, and add three blank lines.

   ```
   ifelse(
                                               
                                               
                                               
   )
   ```

1. With your cursor on the first blank line, find the `dateDiff` function. It's listed in **Functions** under **Dates**. You can also find it by entering **date** for **Search functions**. The search returns all functions that have *`date`* as part of their name. It doesn't return all functions listed under **Dates**; for example, the `now` function is missing from the search results.

   Double-click on `dateDiff` to add it to the first blank line of the `ifelse` statement. 

   ```
   ifelse(
   dateDiff()                                            
                                               
                                               
   )
   ```

   Add the parameters that `dateDiff` uses. Place your cursor inside the `dateDiff` parentheses to begin to add `date1`, `date2`, and `period`:

   1. For `date1`: The first parameter is the field that has the date in it. Find it under **Fields**, and add it to the workspace by double-clicking it or entering its name. 

   1. For `date2`, add a comma, then choose `truncDate()` from the **Functions** list. Inside its parentheses, add the period and date, like this: **truncDate( "YYYY", now() )**

   1. For `period`: Add a comma after `date2` and enter **YYYY**. This is the period for the year. To see a list of all the supported periods, find `dateDiff` in the **Functions** list, and open the documentation by choosing **Learn more**. If you're already viewing the documentation, as you are now, see [dateDiff](dateDiff-function.md).

   Add a few spaces for readability, if you like. Your expression should look like the following.

   ```
   ifelse(
      dateDiff( {Date}, truncDate( "YYYY", now() ) ,"YYYY" )                                       
                                               
                                               
   )
   ```

1. Specify the return value. For our example, the first parameter in `ifelse` needs to return a value of `TRUE` or `FALSE`. Because we want the current year, and we're comparing it to this year, we specify that the `dateDiff` statement should return `0`. The `if` part of the `ifelse` evaluates as true for rows where there is no difference between the year of the sale and the current year.

   ```
      dateDiff( {Date}, truncDate( "YYYY", now() ) ,"YYYY" ) = 0 
   ```

   To create a field for `TotalSales` for last year, you can change `0` to `1`.

   Another way to do the same thing is to use `addDateTime` instead of `truncDate`. Then for each previous year, you change the first parameter for `addDateTime` to represent each year. For this, you use `-1` for last year, `-2` for the year before that, and so on. If you use `addDateTime`, you leave the `dateDiff` function `= 0` for each year.

   ```
      dateDiff( {Discharge Date}, addDateTime(-1, "YYYY", now() ) ,"YYYY" ) = 0 /* Last year */
   ```

1. Move your cursor to the first blank line, just under `dateDiff`. Add a comma. 

   For the `then` part of the `ifelse` statement, we need to choose the measure (metric) that contains the sales amount, `TotalSales`.

   To choose a field, open the **Fields** list and double-click a field to add it to the workspace. Or you can enter the name. Add curly braces `{ }` around names that contain spaces. It's likely that your metric has a different name. You can tell which field is a metric by the number sign (#) in front of it.

   Your expression should look like the following now.

   ```
   ifelse(
      dateDiff( {Date}, truncDate( "YYYY", now() ) ,"YYYY" ) = 0
      ,{TotalSales}                            
                                              
   )
   ```

1. Add an `else` clause. The `ifelse` function doesn't require one, but we want to add it. For reporting purposes, you usually don't want to have any null values, because sometimes rows with nulls are omitted. 

   We set the `else` part of the `ifelse` to `0`. The result is that this field is `0` for rows that contain sales from previous years.

   To do this, on the blank line add a comma and then a `0`. If you added the comment at the beginning, your finished `ifelse` expression should look like the following.

   ```
   /* Calculates sales per year for this year*/
   ifelse(
      dateDiff( {Date}, truncDate( "YYYY", now() ) ,"YYYY" ) = 0
      ,{TotalSales}                            
      ,0                                         
   )
   ```

1. Save your work by choosing **Save** at upper right. 

   If there are errors in your expression, the editor displays an error message at the bottom. Check your expression for a red squiggly line, then hover your cursor over that line to see what the error message is. Common errors include missing punctuation, missing parameters, misspellings, and invalid data types.

   To avoid making any changes, choose **Cancel**.
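As a rough cross-check of the finished expression, here is a Python sketch of the same row-wise logic. The sample rows and the fixed "now" are hypothetical, and the sketch assumes that `dateDiff` at `"YYYY"` granularity compares calendar years.

```python
from datetime import datetime

def total_sales_this_year(row, now=None):
    """Mirrors the finished expression:
    ifelse(dateDiff({Date}, truncDate("YYYY", now()), "YYYY") = 0, {TotalSales}, 0)
    """
    now = now or datetime.now()
    # dateDiff at "YYYY" granularity: difference in calendar years.
    year_diff = now.year - row["Date"].year
    # The ifelse: the sale's total for this year's rows, otherwise 0.
    return row["TotalSales"] if year_diff == 0 else 0

# Hypothetical rows, evaluated against a fixed "now" for reproducibility.
row_this_year = {"Date": datetime(2024, 3, 1), "TotalSales": 150.0}
row_last_year = {"Date": datetime(2023, 9, 9), "TotalSales": 200.0}
fixed_now = datetime(2024, 6, 15)

print(total_sales_this_year(row_this_year, fixed_now))  # 150.0
print(total_sales_this_year(row_last_year, fixed_now))  # 0
```

Changing the comparison from `== 0` to `== 1` gives last year's sales, just as described for the `dateDiff` variant above.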

**To add a parameter value to a calculated field**

You can reference parameters in calculated fields. By adding the parameter to your expression, you add the current value of that parameter.

1. To add a parameter, open the **Parameters** list, and select the parameter whose value you want to include. 

1. (Optional) To manually add a parameter to the expression, type the name of the parameter. Then enclose it in curly braces `{}`, and prefix it with a `$`, for example `${parameterName}`.

You can change the data type of any field in your dataset, including the types of calculated fields. You can only choose data types that match the data that's in the field.

**To change the data type of a calculated field**
+ For **Calculated fields** (at left), choose the field that you want to change, then choose **Change data type** from the context (right-click) menu.

Unlike the other fields in the dataset, calculated fields can't be disabled. Instead, delete them. 

**To delete a calculated field**
+ For **Calculated fields** (at left), choose the field that you want to delete, then choose **Delete** from the context (right-click) menu.

## Handling decimal values in calculated fields


When your dataset uses Direct Query mode, the calculation of the decimal data type is determined by the behavior of the source engine that the dataset originates from. In some cases, Quick Sight applies special handling to determine the data type of the calculation output.

When your dataset uses SPICE query mode and a calculated field is materialized, the data type of the result depends on the specific functions and operators used and on the data types of the inputs. The tables below show the expected behavior for some numeric calculated fields.

**Unary operators**

The following table shows which data type is output based on the operator you use and the data type of the value that you input. For example, if you input an integer to an `abs` calculation, the output value's data type is integer.


[\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/adding-a-calculated-field-analysis.html)

**Binary operators**

The following tables show which data type is output based on the data types of the two values that you input. For example, for an arithmetic operator, if you provide two integer values, the result of the calculation is output as an integer.

For basic operators (+, -, *):


|  | **Integer** | **Decimal-fixed** | **Decimal-float** | 
| --- | --- | --- | --- | 
|  **Integer**  |  Integer  |  Decimal-fixed  |  Decimal-float  | 
|  **Decimal-fixed**  |  Decimal-fixed  |  Decimal-fixed  |  Decimal-float  | 
|  **Decimal-float**  |  Decimal-float  |  Decimal-float  |  Decimal-float  | 

For division operators (/):


|  | **Integer** | **Decimal-fixed** | **Decimal-float** | 
| --- | --- | --- | --- | 
|  **Integer**  |  Decimal-float  |  Decimal-float  |  Decimal-float  | 
|  **Decimal-fixed**  |  Decimal-float  |  Decimal-fixed  |  Decimal-float  | 
|  **Decimal-float**  |  Decimal-float  |  Decimal-float  |  Decimal-float  | 

For exponential and mod operators (^, %):


|  | **Integer** | **Decimal-fixed** | **Decimal-float** | 
| --- | --- | --- | --- | 
|  **Integer**  |  Decimal-float  |  Decimal-float  |  Decimal-float  | 
|  **Decimal-fixed**  |  Decimal-float  |  Decimal-float  |  Decimal-float  | 
|  **Decimal-float**  |  Decimal-float  |  Decimal-float  |  Decimal-float  | 
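Python's own numeric promotion loosely parallels these tables and can illustrate the idea, though it does not match SPICE exactly (the differences are noted in the comments):

```python
# Integer op Integer stays integer for +, -, and *, but division always
# widens to floating point, much as Integer / Integer yields Decimal-float.
a, b = 7, 2

print(type(a + b).__name__)   # int   (Integer + Integer -> Integer)
print(type(a * b).__name__)   # int   (Integer * Integer -> Integer)
print(type(a / b).__name__)   # float (division widens the type)
print(type(a % b).__name__)   # int   (unlike SPICE, where % yields Decimal-float)
print(type(a ** b).__name__)  # int   (unlike SPICE, where ^ yields Decimal-float)
```

The general principle in both systems is the same: the output type is the widest of the input types, except that division (and, in SPICE, `^` and `%`) forces a floating-point result.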

# Order of evaluation in Amazon Quick Sight

When you open or update an analysis, before displaying it Amazon Quick Sight evaluates everything that is configured in the analysis in a specific sequence. Amazon Quick Sight translates the configuration into a query that a database engine can run. The query returns the data in a similar way whether you connect to a database, a software as a service (SaaS) source, or the Amazon Quick Sight analytics engine ([SPICE](spice.md)). 

If you understand the order that the configuration is evaluated in, you know the sequence that dictates when a specific filter or calculation is applied to your data.

The following illustration shows the order of evaluation. The column on the left shows the order of evaluation when no level-aware calculation window (LAC-W) or aggregate (LAC-A) function is involved. The second column shows the order of evaluation for analyses that contain calculated fields that compute LAC-W expressions at the prefilter (`PRE_FILTER`) level. The third column shows the order of evaluation for analyses that contain calculated fields that compute LAC-W expressions at the preaggregate (`PRE_AGG`) level. The last column shows the order of evaluation for analyses that contain calculated fields that compute LAC-A expressions. Following the illustration, there is a more detailed explanation of the order of evaluation. For more information about level-aware calculations, see [Using level-aware calculations in Quick Sight](level-aware-calculations.md).

![Order of evaluation](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/order-of-evaluation2.png)


The following list shows the sequence in which Amazon Quick Sight applies the configuration in your analysis. Anything that's set up in your dataset happens outside your analysis, for example calculations at the dataset level, filters, and security settings. These all apply to the underlying data. The following list only covers what happens inside the analysis. 

1. **LAC-W Prefilter level**: Evaluates the data at the original table cardinality before analysis filters

   1. **Simple calculations**: Calculations at the scalar level, without any aggregations or window calculations. For example, `date_metric/60`, `parseDate(date, 'yyyy/MM/dd')`, `ifelse(metric > 0, metric, 0)`, `split(string_column, '|', 0)`.

   1. **LAC-W function PRE_FILTER**: If any LAC-W PRE_FILTER expression is involved in the visual, Amazon Quick Sight first computes the window function at the original table level, before any filters. If the LAC-W PRE_FILTER expression is used in filters, it is applied at this point. For example, `maxOver(Population, [State, County], PRE_FILTER) > 1000`.

1. **LAC-W PRE_AGG**: Evaluates the data at the original table cardinality, before aggregations

   1. **Filters added during analysis**: Filters created on unaggregated fields in the visuals are applied at this point. These are similar to WHERE clauses. For example, `year > 2020`.

   1. **LAC-W function PRE_AGG**: If any LAC-W PRE_AGG expression is involved in the visual, Amazon Quick Sight computes the window function before any aggregation is applied. If the LAC-W PRE_AGG expression is used in filters, it is applied at this point. For example, `maxOver(Population, [State, County], PRE_AGG) > 1000`.

   1. **Top/bottom N filters**: Filters that are configured on dimensions to display top/bottom N items.

1. **LAC-A level**: Evaluates aggregations at a customized level, before visual aggregations

   1. **Custom-level aggregations**: If any LAC-A expression is involved in the visual, it is calculated at this point. Based on the table that results from the filters mentioned above, Amazon Quick Sight computes the aggregation, grouped by the dimensions that are specified in the calculated fields. For example, `max(Sales, [Region])`.

1. **Visual level**: Evaluates aggregations at visual level, and post-aggregation table calculations, with the remaining configurations applied in the visuals

   1. **Visual-level aggregations**: Visual aggregations are always applied, except for tabular tables (where the dimension is empty). With this setting, aggregations based on the fields in the field wells are calculated, grouped by the dimensions that are put into the visual. If any filter is built on top of aggregations, it is applied at this point, similar to HAVING clauses. For example, `min(distance) > 100`.

   1. **Table calculations**: If any post-aggregation table calculation (one that takes an aggregated expression as its operand) is referenced in the visual, it is calculated at this point. Amazon Quick Sight performs window calculations after visual aggregations. Similarly, filters built on such calculations are applied.

   1. **Other category calculations**: This type of calculation only exists in line/bar/pie/donut charts. For more information, see [Display limits](working-with-visual-types.md#display-limits).

   1. **Totals and subtotals**: Totals and subtotals are calculated in donut charts (totals only), tables (totals only), and pivot tables, if requested.
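The analysis-level stages above can be sketched in plain Python. The trips table is hypothetical; the point is only the ordering: a WHERE-like filter on unaggregated fields runs first, then the visual aggregation, then a HAVING-like filter on the aggregate.

```python
from collections import defaultdict

# Hypothetical trips table.
rows = [
    {"year": 2021, "city": "Seattle", "distance": 120},
    {"year": 2019, "city": "Seattle", "distance": 80},
    {"year": 2022, "city": "Boston",  "distance": 40},
    {"year": 2021, "city": "Boston",  "distance": 70},
]

# 1. Analysis filter on an unaggregated field (WHERE-like): year > 2020.
filtered = [r for r in rows if r["year"] > 2020]

# 2. Visual-level aggregation, grouped by the visual's dimension (city).
min_distance = defaultdict(lambda: float("inf"))
for r in filtered:
    min_distance[r["city"]] = min(min_distance[r["city"]], r["distance"])

# 3. Filter built on top of the aggregation (HAVING-like): min(distance) > 100.
result = {c: d for c, d in min_distance.items() if d > 100}
print(result)  # {'Seattle': 120}
```

Note that Seattle's 80-mile trip from 2019 never reaches the aggregation step, which is why stage ordering matters: moving the `year > 2020` filter after the aggregation would change the result.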

# Using level-aware calculations in Quick Sight


|  | 
| --- |
|    Applies to: Enterprise Edition and Standard Edition  | 

With *level-aware calculations* (LAC), you can specify the level of granularity at which to compute window functions or aggregate functions. There are two types of LAC functions: level-aware calculation - aggregate (LAC-A) functions, and level-aware calculation - window (LAC-W) functions.

**Topics**
+ [LAC-A functions](#level-aware-calculations-aggregate)
+ [LAC-W functions](#level-aware-calculations-window)

## Level-aware calculation - aggregate (LAC-A) functions

With LAC-A functions, you can specify at what level to group the computation. By adding one argument to an existing aggregate function, such as `sum()`, `max()`, or `count()`, you can define any group-by level that you want for the aggregation. The added level can be any dimension, independent of the dimensions added to the visual. For example:

```
sum(measure,[group_field_A])
```

To use LAC-A functions, type them directly in the calculation editor by adding the intended aggregation levels as the second argument between brackets. Following is an example of an aggregate function and a LAC-A function, for comparison.
+ Aggregate function: `sum({sales})`
+ LAC-A function: `sum({sales}, [{Country},{Product}])`

The LAC-A results are computed at the level specified in the brackets `[ ]` and can be used as the operand of an aggregate function. The group-by level of that outer aggregate function is the visual level, determined by the **Group by** fields added to the field well of the visual. 

In addition to creating a static LAC group key in the brackets `[ ]`, you can make it dynamically adapt to the visual's group-by fields by putting the parameter `${visualDimensions}` in the brackets. This is a system-provided parameter, in contrast to a user-defined parameter. The `${visualDimensions}` parameter represents the fields added to the **Group by** field well in the current visual. The following examples show how to dynamically add group keys to, or remove group keys from, the visual dimensions.
+ LAC-A with a dynamically added group key: `sum({sales}, [${visualDimensions},{Country},{Products}])`

  This calculates, before the visual-level aggregation, the sum of sales, grouping by `Country`, `Products`, and any other fields in the **Group by** field well.
+ LAC-A with a dynamically removed group key: `sum({sales}, [${visualDimensions},!{Country},!{Products}])`

  This calculates, before the visual-level aggregation, the sum of sales, grouping by the fields in the visual's **Group by** field well, except `Country` and `Products`.

You can specify either added group keys or removed group keys in one LAC expression, but not both.

LAC-A functions are supported for the following aggregate functions:
+ [avg](avg-function.md)
+ [count](count-function.md)
+ [distinct_count](distinct_count-function.md)
+ [max](max-function.md)
+ [median](median-function.md)
+ [min](min-function.md)
+ [percentile](percentile-function.md)
+ [percentileCont](percentileCont-function.md)
+ [percentileDisc (percentile)](percentileDisc-function.md)
+ [stdev](stdev-function.md)
+ [stdevp](stdevp-function.md)
+ [sum](sum-function.md)
+ [var](var-function.md)
+ [varp](varp-function.md)

### LAC-A examples


You can do the following with LAC-A functions:
+ Run calculations that are independent of the levels in the visual. For example, if you have the following calculation, the sales numbers are aggregated only at the country level, but not across other dimensions (Region or Product) in the visual.

  ```
  sum({Sales},[{Country}])
  ```
+ Run calculations for the dimensions that are not in the visual. For example, if you have the following function, you can calculate the average total country sales by region.

  ```
  sum({Sales},[{Country}])
  ```

  Though Country is not included in the visual, the LAC-A function first aggregates the sales at the Country level, and then the visual-level calculation generates the average number for each region. If the LAC-A function isn't used to specify the level, the average sales are calculated at the lowest granular level (the base level of the dataset) for each region (shown in the sales column).
+ Use LAC-A combined with other aggregate functions and LAC-W functions. There are two ways you can nest LAC-A functions with other functions.
  + You can write a nested syntax when you create a calculation. For example, the LAC-A function can be nested with a LAC-W function to calculate the total sales by country of each product's average price:

    ```
    sum(avgOver({Sales},[{Product}],PRE_AGG),[{Country}])
    ```
  + When adding a LAC-A function into a visual, the calculation can be further nested with visual-level aggregate functions that you selected in the fields well. For more information about changing the aggregation of fields in the visual, see [Changing or adding aggregation to a field by using a field well](changing-field-aggregation.md#change-field-aggregation-field-wells).
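To make the two-stage computation concrete, here is a Python sketch of the `sum({Sales},[{Country}])` example with a visual-level average per Region. The rows are hypothetical; the LAC-A sum runs first, at the Country level, and the visual-level step then averages those country totals.

```python
from collections import defaultdict

# Hypothetical sales rows, with Region above Country in the hierarchy.
rows = [
    {"Region": "NA", "Country": "US", "Sales": 10},
    {"Region": "NA", "Country": "US", "Sales": 30},
    {"Region": "NA", "Country": "CA", "Sales": 20},
    {"Region": "EU", "Country": "DE", "Sales": 50},
]

# LAC-A step: sum({Sales}, [{Country}]) aggregates at the Country level first.
country_sales = defaultdict(int)
for r in rows:
    country_sales[r["Country"]] += r["Sales"]

# Visual-level step: average the country totals per Region
# (Region being the visual's Group by dimension).
region_countries = defaultdict(set)
for r in rows:
    region_countries[r["Region"]].add(r["Country"])

avg_country_sales = {
    region: sum(country_sales[c] for c in countries) / len(countries)
    for region, countries in region_countries.items()
}
print(avg_country_sales)  # {'NA': 30.0, 'EU': 50.0}
```

Without the LAC-A level, the average for NA would be taken over the three base rows (20.0) instead of over the two country totals (30.0), which is exactly the difference the level argument controls.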

### LAC-A limitations


The following limitations apply to LAC-A functions:
+ LAC-A functions are supported for all additive and non-additive aggregate functions, such as `sum()`, `count()`, and `percentile()`. LAC-A functions are not supported for conditional aggregate functions that end with "if", such as `sumif()` and `countif()`, nor for period aggregate functions that start with "periodToDate", such as `periodToDateSum()` and `periodToDateMax()`.
+ Row-level and column-level totals are not currently supported for LAC-A functions in tables and pivot tables. When you add row-level or column-level totals to the chart, the total number shows as blank. Other non-LAC dimensions are not impacted.
+ Nested LAC-A functions are not currently supported. A limited capability for nesting LAC-A functions with regular aggregate functions and LAC-W functions is supported.

  For example, the following functions are valid:
  + `Aggregation(LAC-A())`. For example: `max(sum({sales}, [{country}]))`
  + `LAC-A(LAC-W())`. For example: `sum(sumOver({Sales},[{Product}],PRE_AGG), [{Country}])`

  The following functions are not valid:
  + `LAC-A(Aggregation())`. For example: `sum(max({sales}), [{country}])`
  + `LAC-A(LAC-A())`. For example: `sum(max({sales}, [{country}]),[category])`
  + `LAC-W(LAC-A())`. For example: `sumOver(sum({Sales},[{Product}]),[{Country}],PRE_AGG)`

## Level-aware calculation - window (LAC-W) functions

With LAC-W functions, you can specify the window or partition over which to compute the calculation. LAC-W functions are a group of window functions, such as `sumOver()`, `maxOver()`, and `denseRank()`, that you can run at the prefilter or preaggregate level. For example: `sumOver(measure,[partition_field_A],pre_agg)`.

LAC-W functions used to be called level aware aggregations (LAA).

LAC-W functions help you to answer the following types of questions:
+ How many of my customers made only 1 purchase order? Or 10? Or 50? We want the visual to use the count as a dimension rather than a metric.
+ What are the total sales per market segment for customers whose lifetime spend is greater than $100,000? The visual should only show the market segment and the total sales for each.
+ How much is the contribution of each industry to the entire company's profit (percent of total)? We want to be able to filter the visual to show some of the industries, and how they contribute to the total sales for the displayed industries. However, we also want to see each industry's percent of total sales for the entire company (including industries that are filtered out). 
+ What are the total sales in each category as compared to the industry average? The industry average should include all of the categories, even after filtering.
+ How are my customers grouped into cumulative spending ranges? We want to use the grouping as a dimension rather than a metric. 

For more complex questions, you can inject a calculation or filter before Quick Sight gets to a specific point in its evaluation of your settings. To directly influence your results, you add a calculation level keyword to a table calculation. For more information on how Quick Sight evaluates queries, see [Order of evaluation in Amazon Quick Sight](order-of-evaluation-quicksight.md).

The following calculation levels are supported for LAC-W functions:
+ **`PRE_FILTER`** – Before applying filters from the analysis, Quick Sight evaluates prefilter calculations. Then it applies any filters that are configured on these prefilter calculations.
+ **`PRE_AGG`** – Before computing display-level aggregations, Quick Sight performs preaggregate calculations. Then it applies any filters that are configured on these preaggregate calculations. This work happens before applying top and bottom *N* filters.

You can use the `PRE_FILTER` or `PRE_AGG` keyword as a parameter in the following table calculation functions. When you specify a calculation level, you use an unaggregated measure in the function. For example, you can use `countOver({ORDER ID}, [{Customer ID}], PRE_AGG)`. By using `PRE_AGG`, you specify that the `countOver` executes at the preaggregate level. 
+ [avgOver](avgOver-function.md)
+ [countOver](countOver-function.md)
+ [denseRank](denseRank-function.md)
+ [distinctCountOver](distinctCountOver-function.md)
+ [minOver](minOver-function.md)
+ [maxOver](maxOver-function.md)
+ [percentileRank](percentileRank-function.md)
+ [rank](rank-function.md)
+ [stdevOver](stdevOver-function.md)
+ [stdevpOver](stdevpOver-function.md)
+ [sumOver](sumOver-function.md)
+ [varOver](varOver-function.md)
+ [varpOver](varpOver-function.md)

By default, the first parameter for each function must be an aggregated measure. If you use either `PRE_FILTER` or `PRE_AGG`, you use a nonaggregated measure for the first parameter. 

For LAC-W functions, the visual aggregation defaults to `MIN` to eliminate duplicates. To change the aggregation, open the field's context (right-click) menu, and then choose a different aggregation.

For examples of when and how to use LAC-W functions in real life scenarios, see the following post in the Amazon Big Data Blog: [Create advanced insights using Level Aware Aggregations in Amazon QuickSight.](https://aws.amazon.com/jp/blogs/big-data/create-advanced-insights-using-level-aware-aggregations-in-amazon-quicksight/) 
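The first question above (customers grouped by their order count) can be sketched in Python. The rows are hypothetical; the sketch mirrors `countOver({ORDER ID}, [{Customer ID}], PRE_AGG)` as a window count computed on the unaggregated rows, whose result then serves as a dimension.

```python
from collections import Counter

# Hypothetical order rows.
orders = [
    {"Customer ID": "c1", "ORDER ID": "o1"},
    {"Customer ID": "c1", "ORDER ID": "o2"},
    {"Customer ID": "c2", "ORDER ID": "o3"},
    {"Customer ID": "c3", "ORDER ID": "o4"},
    {"Customer ID": "c3", "ORDER ID": "o5"},
]

# countOver({ORDER ID}, [{Customer ID}], PRE_AGG): a window count
# partitioned by customer, computed before any visual aggregation and
# attached to every row.
per_customer = Counter(o["Customer ID"] for o in orders)
for o in orders:
    o["orders_per_customer"] = per_customer[o["Customer ID"]]

# The window result can then be used as a dimension:
# how many customers placed exactly N orders?
customers_by_order_count = Counter(per_customer.values())
print(dict(customers_by_order_count))  # {2: 2, 1: 1}
```

This is why the calculation level matters: the count must exist per row, before the visual aggregates, so that the visual can group *by* it rather than display it as a metric.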

# Calculated field function and operator reference for Amazon Quick Sight

You can add calculated fields to a dataset during data preparation or from the analysis page. When you add a calculated field to a dataset during data preparation, it's available to all analyses that use that dataset. When you add a calculated field to a dataset in an analysis, it's available only in that analysis. 

You can create calculated fields to transform your data by using the following functions and operators.

**Topics**
+ [Operators](arithmetic-and-comparison-operators.md)
+ [Functions by category](functions-by-category.md)
+ [Functions](functions.md)
+ [Aggregate functions](calculated-field-aggregations.md)
+ [Table calculation functions](table-calculation-functions.md)

# Operators


You can use the following operators in calculated fields. Quick Sight uses the standard order of operations: parentheses, exponents, multiplication, division, addition, subtraction (PEMDAS). Equal (=) and not equal (<>) comparisons are case-sensitive. 
+ Addition (+)
+ Subtraction (−)
+ Multiplication (*)
+ Division (/)
+ Modulo (%) – See also `mod()` in the following list.
+ Power (^) – See also `exp()` in the following list.
+ Equal (=)
+ Not equal (<>)
+ Greater than (>)
+ Greater than or equal to (>=)
+ Less than (<)
+ Less than or equal to (<=)
+ AND
+ OR
+ NOT

Amazon Quick Sight supports applying the following mathematical functions to an expression.
+ [mod](https://docs.amazonaws.cn/quicksight/latest/user/mod-function.html)`(number, divisor)` – Finds the remainder after dividing a number by a divisor.
+ [log](https://docs.amazonaws.cn/quicksight/latest/user/log-function.html)`(expression)` – Returns the base 10 logarithm of a given expression. 
+ [ln](https://docs.amazonaws.cn/quicksight/latest/user/ln-function.html)`(expression)` – Returns the natural logarithm of a given expression. 
+ [abs](https://docs.amazonaws.cn/quicksight/latest/user/abs-function.html)`(expression)` – Returns the absolute value of a given expression. 
+ [sqrt](https://docs.amazonaws.cn/quicksight/latest/user/sqrt-function.html)`(expression)` – Returns the square root of a given expression. 
+ [exp](https://docs.amazonaws.cn/quicksight/latest/user/exp-function.html)`(expression)` – Returns the base of natural log *e* raised to the power of a given expression. 
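As a quick illustration, `mod(17, 5)` evaluates to 2 (the remainder of 17 divided by 5), and `sqrt(16)` evaluates to 4:

```
mod(17, 5)
sqrt(16)
```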

To make lengthy calculations easier to read, you can use parentheses to clarify groupings and precedence in calculations. In the following statement, you don't need parentheses. The multiplication statement is processed first, and then the result is added to five, returning a value of 26. However, parentheses make the statement easier to read and thus maintain.

```
5 + (7 * 3)
```

Because parentheses are first in the order of operations, you can use them to change the order in which other operators are applied. For example, in the following statement the addition statement is processed first, and then the result is multiplied by three, returning a value of 36.

```
(5 + 7) * 3
```

## Example: Arithmetic operators

The following example uses multiple arithmetic operators to determine a sales total after discount.

```
(Quantity * Amount) - Discount
```

## Example: (/) Division

The following example uses division to divide 3 by 2. A value of 1.5 is returned. Amazon Quick Sight uses floating-point division.

```
3/2
```

## Example: (=) Equal

Using = performs a case-sensitive comparison of values. Rows where the comparison is TRUE are included in the result set. 

In the following example, rows where the `Region` field is **South** are included in the results. If the `Region` is **south**, these rows are excluded.

```
Region = 'South'
```

In the following example, the comparison evaluates to FALSE. 

```
Region = 'south'
```

The following example shows a comparison that converts `Region` to all uppercase (**SOUTH**), and compares it to **SOUTH**. This returns rows where the region is **south**, **South**, or **SOUTH**.

```
toUpper(Region) = 'SOUTH'
```

## Example: (<>) Not equal

The not equal symbol <> means *less than or greater than*. 

So, if we say **x<>1**, then we are saying *if x is less than 1 OR if x is greater than 1*. Both < and > are evaluated together. In other words, *if x is any value except 1*. Or, *x is not equal to 1*. 

**Note**  
Use <>, not !=.

The following example compares `statusCode` to a numeric value. This returns rows where the `statusCode` is not equal to **1**.

```
statusCode <> 1
```

The following example compares multiple `statusCode` values. In this case, active records have `activeFlag = 1`. This example returns rows where one of the following applies:
+ For active records, show rows where the status isn't 1 or 2
+ For inactive records, show rows where the status is 99 or -1

```
( activeFlag = 1 AND (statusCode <> 1 AND statusCode <> 2) )
OR
( activeFlag = 0 AND (statusCode= 99 OR statusCode= -1) )
```

## Example: (^) Power

The power symbol `^` means *to the power of*. You can use the power operator with any numeric field, with any valid exponent. 

The following example is a simple expression of 2 to the power of 4, or (2 * 2 * 2 * 2). This returns a value of 16.

```
2^4
```

The following example computes the square root of the revenue field.

```
revenue^0.5
```

## Example: AND, OR, and NOT
Use the AND, OR, and NOT operators to refine your selection criteria. These operators are helpful when you need to show relationships between different comparisons.

The following example uses AND, OR, and NOT to compare multiple expressions. It uses conditional operators to tag customers who are NOT in Washington or Oregon, and who made more than 10 orders, with a special promotion. If no values match, the value 'n/a' is used.

```
ifelse(( (NOT (State = 'WA' OR State = 'OR')) AND Orders > 10), 'Special Promotion XYZ', 'n/a')
```

## Example: Creating comparison lists like "in" or "not in"

This example uses operators to create a comparison that finds values that exist, or don't exist, in a specified list of values.

The following example compares `promoCode` to a specified list of values. This example returns rows where the `promoCode` is in the list **(1, 2, 3)**.

```
promoCode    = 1
OR promoCode = 2
OR promoCode = 3
```

The following example compares `promoCode` to a specified list of values. This example returns rows where the `promoCode` is NOT in the list **(1, 2, 3)**.

```
NOT(promoCode = 1
OR promoCode  = 2
OR promoCode  = 3
)
```

Another way to express this is to provide a list where the `promoCode` is not equal to any items in the list.

```
promoCode     <> 1
AND promoCode <> 2
AND promoCode <> 3
```

## Example: Creating a "between" comparison

This example uses comparison operators to create a comparison showing values that exist between one value and another.

The following example examines `OrderDate` and returns rows where the `OrderDate` is between the first day and last day of 2016. In this case, we want the first and last day included, so we use "or equal to" on the comparison operators. 

```
OrderDate >= "1/1/2016" AND OrderDate <= "12/31/2016"
```

# Functions by category


In this section, you can find a list of the functions available in Amazon Quick Sight, sorted by category.

**Topics**
+ [

## Aggregate functions
](#aggregate-functions)
+ [

## Conditional functions
](#conditional-functions)
+ [

## Date functions
](#date-functions)
+ [

## Numeric functions
](#numeric-functions)
+ [

## Mathematical functions
](#mathematical-functions)
+ [

## String functions
](#string-functions)
+ [

## Table calculations
](#table-calculations)

## Aggregate functions


The aggregate functions for calculated fields in Amazon Quick Sight include the following. These are available only during analysis and visualization. Each of these functions returns values grouped by the chosen dimension or dimensions. For each aggregation, there is also a conditional aggregation. These perform the same type of aggregation, based on a condition. 
+ [avg](https://docs.amazonaws.cn/quicksight/latest/user/avg-function.html) averages the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.
+ [avgIf](https://docs.amazonaws.cn/quicksight/latest/user/avgIf-function.html) calculates the average based on a conditional statement.
+ [count](https://docs.amazonaws.cn/quicksight/latest/user/count-function.html) calculates the number of values in a dimension or measure, grouped by the chosen dimension or dimensions. 
+ [countIf](https://docs.amazonaws.cn/quicksight/latest/user/countIf-function.html) calculates the count based on a conditional statement.
+ [distinct_count](https://docs.amazonaws.cn/quicksight/latest/user/distinct_count-function.html) calculates the number of distinct values in a dimension or measure, grouped by the chosen dimension or dimensions. 
+ [distinct_countIf](https://docs.amazonaws.cn/quicksight/latest/user/distinct_countIf-function.html) calculates the distinct count based on a conditional statement.
+ [max](https://docs.amazonaws.cn/quicksight/latest/user/max-function.html) returns the maximum value of the specified measure, grouped by the chosen dimension or dimensions.
+ [maxIf](https://docs.amazonaws.cn/quicksight/latest/user/maxIf-function.html) calculates the maximum based on a conditional statement.
+ [median](https://docs.amazonaws.cn/quicksight/latest/user/median-function.html) returns the median value of the specified measure, grouped by the chosen dimension or dimensions.
+ [medianIf](https://docs.amazonaws.cn/quicksight/latest/user/medianIf-function.html) calculates the median based on a conditional statement.
+ [min](https://docs.amazonaws.cn/quicksight/latest/user/min-function.html) returns the minimum value of the specified measure, grouped by the chosen dimension or dimensions.
+ [minIf](https://docs.amazonaws.cn/quicksight/latest/user/minIf-function.html) calculates the minimum based on a conditional statement.
+ [percentile](https://docs.amazonaws.cn/quicksight/latest/user/percentile-function.html) (alias of `percentileDisc`) computes the *n*th percentile of the specified measure, grouped by the chosen dimension or dimensions.
+ [percentileCont](https://docs.amazonaws.cn/quicksight/latest/user/percentileCont-function.html) calculates the *n*th percentile based on a continuous distribution of the numbers of the specified measure, grouped by the chosen dimension or dimensions. 
+ [percentileDisc (percentile)](https://docs.amazonaws.cn/quicksight/latest/user/percentileDisc-function.html) calculates the *n*th percentile based on the actual numbers of the specified measure, grouped by the chosen dimension or dimensions. 
+ [periodToDateAvg](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateAvg-function.html) averages the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time. 
+ [periodToDateCount](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateCount-function.html) calculates the number of values in a dimension or measure for a given time granularity (for instance, a quarter) up to a point in time, including duplicates.
+ [periodToDateMax](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateMax-function.html) returns the maximum value of the specified measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDateMedian](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateMedian-function.html) returns the median value of the specified measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDateMin](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateMin-function.html) returns the minimum value of the specified measure or date for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDatePercentile](https://docs.amazonaws.cn/quicksight/latest/user/periodToDatePercentile-function.html) calculates the percentile based on the actual numbers in the measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDatePercentileCont](https://docs.amazonaws.cn/quicksight/latest/user/periodToDatePercentileCont-function.html) calculates the percentile based on a continuous distribution of the numbers in the measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDateStDev](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateStDev-function.html) calculates the standard deviation of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time, based on a sample.
+ [periodToDateStDevP](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateStDevP-function.html) calculates the population standard deviation of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time, based on a biased population.
+ [periodToDateSum](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateSum-function.html) adds the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDateVar](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateVar-function.html) calculates the sample variance of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDateVarP](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateVarP-function.html) calculates the population variance of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [stdev](https://docs.amazonaws.cn/quicksight/latest/user/stdev-function.html) calculates the standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a sample.
+ [stdevIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevIf-function.html) calculates the sample standard deviation based on a conditional statement.
+ [stdevp](https://docs.amazonaws.cn/quicksight/latest/user/stdevp-function.html) calculates the standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a biased population.
+ [stdevpIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevpIf-function.html) calculates the population standard deviation based on a conditional statement.
+ [var](https://docs.amazonaws.cn/quicksight/latest/user/var-function.html) calculates the variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a sample.
+ [varIf](https://docs.amazonaws.cn/quicksight/latest/user/varIf-function.html) calculates the sample variance based on a conditional statement.
+ [varp](https://docs.amazonaws.cn/quicksight/latest/user/varp-function.html) calculates the variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a biased population.
+ [varpIf](https://docs.amazonaws.cn/quicksight/latest/user/varpIf-function.html) calculates the population variance based on a conditional statement.
+ [sum](https://docs.amazonaws.cn/quicksight/latest/user/sum-function.html) adds the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.
+ [sumIf](https://docs.amazonaws.cn/quicksight/latest/user/sumIf-function.html) calculates the sum based on a conditional statement.
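As an example, the following sketch (using a hypothetical `Sales` measure and `Region` field) averages sales only for rows where the region is South, using the conditional form of the aggregation:

```
avgIf(Sales, Region = 'South')
```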

## Conditional functions


The conditional functions for calculated fields in Amazon Quick Sight include the following:
+ [coalesce](https://docs.amazonaws.cn/quicksight/latest/user/coalesce-function.html) returns the value of the first argument that is not null.
+ [ifelse](https://docs.amazonaws.cn/quicksight/latest/user/ifelse-function.html) evaluates a set of *if*, *then* expression pairings, and returns the value of the *then* argument for the first *if* argument that evaluates to true.
+ [in](https://docs.amazonaws.cn/quicksight/latest/user/in-function.html) evaluates an expression to see if it is in a given list of values.
+ [isNotNull](https://docs.amazonaws.cn/quicksight/latest/user/isNotNull-function.html) evaluates an expression to see if it is not null.
+ [isNull](https://docs.amazonaws.cn/quicksight/latest/user/isNull-function.html) evaluates an expression to see if it is null. If the expression is null, `isNull` returns true; otherwise it returns false.
+ [notIn](https://docs.amazonaws.cn/quicksight/latest/user/notIn-function.html) evaluates an expression to see if it is not in a given list of values.
+ [nullIf](https://docs.amazonaws.cn/quicksight/latest/user/nullIf-function.html) compares two expressions. If they are equal, the function returns null. If they are not equal, the function returns the first expression.
+ [switch](https://docs.amazonaws.cn/quicksight/latest/user/switch-function.html) returns an expression that matches the first label equal to the condition expression.
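For example, the following sketch (with a hypothetical `Discount` field) combines two of these functions to substitute zero for null discounts:

```
ifelse(isNull(Discount), 0, Discount)
```

The same result can be written more compactly as `coalesce(Discount, 0)`.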

## Date functions


The date functions for calculated fields in Amazon Quick Sight include the following:
+ [addDateTime](https://docs.amazonaws.cn/quicksight/latest/user/addDateTime-function.html) adds or subtracts a unit of time to or from the date or time provided.
+ [addWorkDays](https://docs.amazonaws.cn/quicksight/latest/user/addWorkDays-function.html) adds or subtracts the given number of work days to or from the date or time provided.
+ [dateDiff](https://docs.amazonaws.cn/quicksight/latest/user/dateDiff-function.html) returns the difference in days between two date fields. 
+ [epochDate](https://docs.amazonaws.cn/quicksight/latest/user/epochDate-function.html) converts an epoch date into a standard date. 
+ [extract](https://docs.amazonaws.cn/quicksight/latest/user/extract-function.html) returns a specified portion of a date value. 
+ [formatDate](https://docs.amazonaws.cn/quicksight/latest/user/formatDate-function.html) formats a date using a pattern you specify. 
+ [isWorkDay](https://docs.amazonaws.cn/quicksight/latest/user/isWorkDay-function.html) returns TRUE if a given date-time value is a work or business day.
+ [netWorkDays](https://docs.amazonaws.cn/quicksight/latest/user/netWorkDays-function.html) returns the number of working days between two provided date values.
+ [now](https://docs.amazonaws.cn/quicksight/latest/user/now-function.html) returns the current date and time, using either the settings for a database, or UTC for file and Salesforce. 
+ [truncDate](https://docs.amazonaws.cn/quicksight/latest/user/truncDate-function.html) returns a date value that represents a specified portion of a date. 
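For example, assuming hypothetical `OrderDate` and `ShipDate` fields, the following sketch returns the number of days between order and shipment:

```
dateDiff(OrderDate, ShipDate)
```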

## Numeric functions


The numeric functions for calculated fields in Amazon Quick Sight include the following:
+ [ceil](https://docs.amazonaws.cn/quicksight/latest/user/ceil-function.html) rounds a decimal value to the next highest integer. 
+ [decimalToInt](https://docs.amazonaws.cn/quicksight/latest/user/decimalToInt-function.html) converts a decimal value to an integer. 
+ [floor](https://docs.amazonaws.cn/quicksight/latest/user/floor-function.html) decrements a decimal value to the next lowest integer. 
+ [intToDecimal](https://docs.amazonaws.cn/quicksight/latest/user/intToDecimal-function.html) converts an integer value to a decimal. 
+ [round](https://docs.amazonaws.cn/quicksight/latest/user/round-function.html) rounds a decimal value to the closest integer or, if scale is specified, to the closest decimal place. 
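As a quick illustration, `round(3.14159, 2)` rounds to two decimal places and returns 3.14, while `ceil(3.2)` returns 4:

```
round(3.14159, 2)
ceil(3.2)
```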

## Mathematical functions


The mathematical functions for calculated fields in Amazon Quick Sight include the following: 
+ [mod](https://docs.amazonaws.cn/quicksight/latest/user/mod-function.html)`(number, divisor)` – Finds the remainder after dividing a number by a divisor.
+ [log](https://docs.amazonaws.cn/quicksight/latest/user/log-function.html)`(expression)` – Returns the base 10 logarithm of a given expression. 
+ [ln](https://docs.amazonaws.cn/quicksight/latest/user/ln-function.html)`(expression)` – Returns the natural logarithm of a given expression. 
+ [abs](https://docs.amazonaws.cn/quicksight/latest/user/abs-function.html)`(expression)` – Returns the absolute value of a given expression. 
+ [sqrt](https://docs.amazonaws.cn/quicksight/latest/user/sqrt-function.html)`(expression)` – Returns the square root of a given expression. 
+ [exp](https://docs.amazonaws.cn/quicksight/latest/user/exp-function.html)`(expression)` – Returns the base of natural log *e* raised to the power of a given expression. 

## String functions


The string (text) functions for calculated fields in Amazon Quick Sight include the following:
+ [concat](https://docs.amazonaws.cn/quicksight/latest/user/concat-function.html) concatenates two or more strings. 
+ [contains](https://docs.amazonaws.cn/quicksight/latest/user/contains-function.html) checks if an expression contains a substring. 
+ [endsWith](https://docs.amazonaws.cn/quicksight/latest/user/endsWith-function.html) checks if the expression ends with the substring specified.
+ [left](https://docs.amazonaws.cn/quicksight/latest/user/left-function.html) returns the specified number of leftmost characters from a string. 
+ [locate](https://docs.amazonaws.cn/quicksight/latest/user/locate-function.html) locates a substring within another string, and returns the number of characters before the substring. 
+ [ltrim](https://docs.amazonaws.cn/quicksight/latest/user/ltrim-function.html) removes preceding blank space from a string. 
+ [parseDate](https://docs.amazonaws.cn/quicksight/latest/user/parseDate-function.html) parses a string to determine if it contains a date value, and returns the date if found. 
+ [parseDecimal](https://docs.amazonaws.cn/quicksight/latest/user/parseDecimal-function.html) parses a string to determine if it contains a decimal value. 
+ [parseInt](https://docs.amazonaws.cn/quicksight/latest/user/parseInt-function.html) parses a string to determine if it contains an integer value.
+ [parseJson](https://docs.amazonaws.cn/quicksight/latest/user/parseJson-function.html) parses values from native JSON or from a JSON object in a text field.
+ [replace](https://docs.amazonaws.cn/quicksight/latest/user/replace-function.html) replaces part of a string with a new string. 
+ [right](https://docs.amazonaws.cn/quicksight/latest/user/right-function.html) returns the specified number of rightmost characters from a string.
+ [rtrim](https://docs.amazonaws.cn/quicksight/latest/user/rtrim-function.html) removes following blank space from a string.
+ [split](https://docs.amazonaws.cn/quicksight/latest/user/split-function.html) splits a string into an array of substrings, based on a delimiter that you choose, and returns the item specified by the position. 
+ [startsWith](https://docs.amazonaws.cn/quicksight/latest/user/startsWith-function.html) checks if the expression starts with the substring specified.
+ [strlen](https://docs.amazonaws.cn/quicksight/latest/user/strlen-function.html) returns the number of characters in a string.
+ [substring](https://docs.amazonaws.cn/quicksight/latest/user/substring-function.html) returns the specified number of characters in a string, starting at the specified location. 
+ [toLower](https://docs.amazonaws.cn/quicksight/latest/user/toLower-function.html) formats a string in all lowercase.
+ [toString](https://docs.amazonaws.cn/quicksight/latest/user/toString-function.html) formats the input expression as a string.
+ [toUpper](https://docs.amazonaws.cn/quicksight/latest/user/toUpper-function.html) formats a string in all uppercase.
+ [trim](https://docs.amazonaws.cn/quicksight/latest/user/trim-function.html) removes both preceding and following blank space from a string.
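For example, assuming hypothetical `City` and `State` fields, the following sketch combines them into one display string, such as **Seattle, WA**:

```
concat(City, ', ', State)
```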

## Table calculations


Table calculations form a group of functions that provide context in an analysis. They provide support for enriched aggregated analysis. By using these calculations, you can address common business scenarios such as calculating percentage of total, running sum, difference, common baseline, and rank. 

When you are analyzing data in a specific visual, you can apply table calculations to the current set of data to discover how dimensions influence measures or each other. Visualized data is your result set based on your current dataset, with all the filters, field selections, and customizations applied. To see exactly what this result set is, you can export your visual to a file. A table calculation function performs operations on the data to reveal relationships between fields. 
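For instance, a common table calculation is percentage of total. The following sketch (with a hypothetical `Sales` measure partitioned by `Region`) computes the share that each value contributes to the total within its region:

```
percentOfTotal(sum(Sales), [Region])
```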

**Lookup-based functions**
+ [difference](https://docs.amazonaws.cn/quicksight/latest/user/difference-function.html) calculates the difference between a measure based on one set of partitions and sorts, and a measure based on another. 
+ [lag](https://docs.amazonaws.cn/quicksight/latest/user/lag-function.html) calculates the lag (previous) value for a measure. 
+ [lead](https://docs.amazonaws.cn/quicksight/latest/user/lead-function.html) calculates the lead (following) value for a measure. 
+ [percentDifference](https://docs.amazonaws.cn/quicksight/latest/user/percentDifference-function.html) calculates the percentage difference between the current value and a comparison value.
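For example, here is a hedged sketch of `lag`; the field names and sort order are hypothetical. It fetches the previous month's sales within each region, sorted by a `Month` dimension:

```
lag(sum(Sales), [Month ASC], 1, [Region])
```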

**Over functions**
+ [avgOver](https://docs.amazonaws.cn/quicksight/latest/user/avgOver-function.html) calculates the average of a measure over one or more dimensions.
+ [countOver](https://docs.amazonaws.cn/quicksight/latest/user/countOver-function.html) calculates the count of a field over one or more dimensions.
+ [distinctCountOver](https://docs.amazonaws.cn/quicksight/latest/user/distinctCountOver-function.html) calculates the distinct count of the operand partitioned by the specified attributes at a specified level. 
+ [maxOver](https://docs.amazonaws.cn/quicksight/latest/user/maxOver-function.html) calculates the maximum of a measure over one or more dimensions. 
+ [minOver](https://docs.amazonaws.cn/quicksight/latest/user/minOver-function.html) calculates the minimum of a measure over one or more dimensions. 
+ [percentileOver](https://docs.amazonaws.cn/quicksight/latest/user/percentileOver-function.html) (alias of `percentileDiscOver`) calculates the *n*th percentile of a measure partitioned by a list of dimensions. 
+ [percentileContOver](https://docs.amazonaws.cn/quicksight/latest/user/percentileContOver-function.html) calculates the *n*th percentile based on a continuous distribution of the numbers of a measure partitioned by a list of dimensions.
+ [percentileDiscOver](https://docs.amazonaws.cn/quicksight/latest/user/percentileDiscOver-function.html) calculates the *n*th percentile based on the actual numbers of a measure partitioned by a list of dimensions. 
+ [percentOfTotal](https://docs.amazonaws.cn/quicksight/latest/user/percentOfTotal-function.html) calculates the percentage that a measure contributes to the total. 
+ [periodOverPeriodDifference](https://docs.amazonaws.cn/quicksight/latest/user/periodOverPeriodDifference-function.html) calculates the difference of a measure over two different time periods, as specified by period granularity and offset.
+ [periodOverPeriodLastValue](https://docs.amazonaws.cn/quicksight/latest/user/periodOverPeriodLastValue-function.html) calculates the last (previous) value of a measure from a previous time period, as specified by period granularity and offset.
+ [periodOverPeriodPercentDifference](https://docs.amazonaws.cn/quicksight/latest/user/periodOverPeriodPercentDifference-function.html) calculates the percent difference of a measure over two different time periods, as specified by period granularity and offset.
+ [periodToDateAvgOverTime](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateAvgOverTime-function.html) calculates the average of a measure for a given time granularity (for instance, a quarter) up to a point in time. 
+ [periodToDateCountOverTime](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateCountOverTime-function.html) calculates the count of a dimension or measure for a given time granularity (for instance, a quarter) up to a point in time. 
+ [https://docs.amazonaws.cn/quicksight/latest/user/periodToDateMaxOverTime-function.html](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateMaxOverTime-function.html) calculates the maximum of a measure or date for a given time granularity (for instance, a quarter) up to a point in time. 
+ [https://docs.amazonaws.cn/quicksight/latest/user/periodToDateMinOverTime-function.html](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateMinOverTime-function.html) calculates the minimum of a measure or date for a given time granularity (for instance, a quarter) up to a point in time. 
+ [https://docs.amazonaws.cn/quicksight/latest/user/periodToDateSumOverTime-function.html](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateSumOverTime-function.html) calculates the sum of a measure for a given time granularity (for instance, a quarter) up to a point in time. 
+ [https://docs.amazonaws.cn/quicksight/latest/user/sumOver-function.html](https://docs.amazonaws.cn/quicksight/latest/user/sumOver-function.html) calculates the sum of a measure over one or more dimensions. 
+ [https://docs.amazonaws.cn/quicksight/latest/user/stdevOver-function.html](https://docs.amazonaws.cn/quicksight/latest/user/stdevOver-function.html) calculates the standard deviation of the specified measure, partitioned by the chosen attribute or attributes, based on a sample.
+ [https://docs.amazonaws.cn/quicksight/latest/user/stdevpOver-function.html](https://docs.amazonaws.cn/quicksight/latest/user/stdevpOver-function.html) calculates the standard deviation of the specified measure, partitioned by the chosen attribute or attributes, based on a biased population.
+ [https://docs.amazonaws.cn/quicksight/latest/user/varOver-function.html](https://docs.amazonaws.cn/quicksight/latest/user/varOver-function.html) calculates the variance of the specified measure, partitioned by the chosen attribute or attributes, based on a sample. 
+ [https://docs.amazonaws.cn/quicksight/latest/user/varpOver-function.html](https://docs.amazonaws.cn/quicksight/latest/user/varpOver-function.html) calculates the variance of the specified measure, partitioned by the chosen attribute or attributes, based on a biased population. 
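To give a sense of the shared syntax, a partition-aware aggregate such as `sumOver` takes a measure and a list of partition dimensions. The following is a minimal sketch; the `Sales` and `Region` field names are hypothetical, not from this guide.

```
sumOver(sum(Sales), [Region])
```

This computes the total of `Sales` within each `Region` partition.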

**Ranking functions**
+ [https://docs.amazonaws.cn/quicksight/latest/user/rank-function.html](https://docs.amazonaws.cn/quicksight/latest/user/rank-function.html) calculates the rank of a measure or a dimension.
+ [https://docs.amazonaws.cn/quicksight/latest/user/denseRank-function.html](https://docs.amazonaws.cn/quicksight/latest/user/denseRank-function.html) calculates the rank of a measure or a dimension, ignoring duplicates.
+ [https://docs.amazonaws.cn/quicksight/latest/user/percentileRank-function.html](https://docs.amazonaws.cn/quicksight/latest/user/percentileRank-function.html) calculates the rank of a measure or a dimension, based on percentile.
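The ranking functions share a similar shape: a sort-order list and an optional partition list. The following is a minimal sketch with hypothetical `Sales` and `Region` field names.

```
rank([sum(Sales) DESC], [Region])
```

This assigns ranks by descending total `Sales`, restarting the ranking within each `Region`.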

**Running functions**
+ [https://docs.amazonaws.cn/quicksight/latest/user/runningAvg-function.html](https://docs.amazonaws.cn/quicksight/latest/user/runningAvg-function.html) calculates a running average for a measure.
+ [https://docs.amazonaws.cn/quicksight/latest/user/runningCount-function.html](https://docs.amazonaws.cn/quicksight/latest/user/runningCount-function.html) calculates a running count for a measure.
+ [https://docs.amazonaws.cn/quicksight/latest/user/runningMax-function.html](https://docs.amazonaws.cn/quicksight/latest/user/runningMax-function.html) calculates a running maximum for a measure.
+ [https://docs.amazonaws.cn/quicksight/latest/user/runningMin-function.html](https://docs.amazonaws.cn/quicksight/latest/user/runningMin-function.html) calculates a running minimum for a measure.
+ [https://docs.amazonaws.cn/quicksight/latest/user/runningSum-function.html](https://docs.amazonaws.cn/quicksight/latest/user/runningSum-function.html) calculates a running sum for a measure. 
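Each running function takes an aggregated measure and a sort-order list. The following is a minimal sketch with hypothetical field names.

```
runningSum(sum(Sales), [{Order Date} ASC])
```

This accumulates the `Sales` total in ascending order of `Order Date`.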

**Window functions**
+ [https://docs.amazonaws.cn/quicksight/latest/user/firstValue-function.html](https://docs.amazonaws.cn/quicksight/latest/user/firstValue-function.html) calculates the first value of the aggregated measure or dimension partitioned and sorted by specified attributes. 
+ [https://docs.amazonaws.cn/quicksight/latest/user/lastValue-function.html](https://docs.amazonaws.cn/quicksight/latest/user/lastValue-function.html) calculates the last value of the aggregated measure or dimension partitioned and sorted by specified attributes. 
+ [https://docs.amazonaws.cn/quicksight/latest/user/windowAvg-function.html](https://docs.amazonaws.cn/quicksight/latest/user/windowAvg-function.html) calculates the average of the aggregated measure in a custom window that is partitioned and sorted by specified attributes.
+ [https://docs.amazonaws.cn/quicksight/latest/user/windowCount-function.html](https://docs.amazonaws.cn/quicksight/latest/user/windowCount-function.html) calculates the count of the aggregated measure in a custom window that is partitioned and sorted by specified attributes.
+ [https://docs.amazonaws.cn/quicksight/latest/user/windowMax-function.html](https://docs.amazonaws.cn/quicksight/latest/user/windowMax-function.html) calculates the maximum of the aggregated measure in a custom window that is partitioned and sorted by specified attributes.
+ [https://docs.amazonaws.cn/quicksight/latest/user/windowMin-function.html](https://docs.amazonaws.cn/quicksight/latest/user/windowMin-function.html) calculates the minimum of the aggregated measure in a custom window that is partitioned and sorted by specified attributes.
+ [https://docs.amazonaws.cn/quicksight/latest/user/windowSum-function.html](https://docs.amazonaws.cn/quicksight/latest/user/windowSum-function.html) calculates the sum of the aggregated measure in a custom window that is partitioned and sorted by specified attributes.
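The window functions add lookback and lookforward offsets that define the custom window around the current data point. The following is a minimal sketch with hypothetical field names, assuming the offsets count the preceding and following data points.

```
windowAvg(sum(Sales), [{Order Date} ASC], 2, 0)
```

This averages the `Sales` total over the current point and the two preceding points in `Order Date` order.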

# Functions


In this section, you can find a list of functions available in Amazon Quick Sight. To view a list of functions sorted by category, with brief definitions, see [Functions by category](https://docs.amazonaws.cn/quicksight/latest/user/functions-by-category.html).

**Topics**
+ [

# addDateTime
](addDateTime-function.md)
+ [

# addWorkDays
](addWorkDays-function.md)
+ [

# Abs
](abs-function.md)
+ [

# Ceil
](ceil-function.md)
+ [

# Coalesce
](coalesce-function.md)
+ [

# Concat
](concat-function.md)
+ [

# contains
](contains-function.md)
+ [

# decimalToInt
](decimalToInt-function.md)
+ [

# dateDiff
](dateDiff-function.md)
+ [

# endsWith
](endsWith-function.md)
+ [

# epochDate
](epochDate-function.md)
+ [

# Exp
](exp-function.md)
+ [

# Extract
](extract-function.md)
+ [

# Floor
](floor-function.md)
+ [

# formatDate
](formatDate-function.md)
+ [

# Ifelse
](ifelse-function.md)
+ [

# in
](in-function.md)
+ [

# intToDecimal
](intToDecimal-function.md)
+ [

# isNotNull
](isNotNull-function.md)
+ [

# isNull
](isNull-function.md)
+ [

# isWorkDay
](isWorkDay-function.md)
+ [

# Left
](left-function.md)
+ [

# Locate
](locate-function.md)
+ [

# Log
](log-function.md)
+ [

# Ln
](ln-function.md)
+ [

# Ltrim
](ltrim-function.md)
+ [

# Mod
](mod-function.md)
+ [

# netWorkDays
](netWorkDays-function.md)
+ [

# Now
](now-function.md)
+ [

# notIn
](notIn-function.md)
+ [

# nullIf
](nullIf-function.md)
+ [

# parseDate
](parseDate-function.md)
+ [

# parseDecimal
](parseDecimal-function.md)
+ [

# parseInt
](parseInt-function.md)
+ [

# parseJson
](parseJson-function.md)
+ [

# Replace
](replace-function.md)
+ [

# Right
](right-function.md)
+ [

# Round
](round-function.md)
+ [

# Rtrim
](rtrim-function.md)
+ [

# Split
](split-function.md)
+ [

# Sqrt
](sqrt-function.md)
+ [

# startsWith
](startsWith-function.md)
+ [

# Strlen
](strlen-function.md)
+ [

# Substring
](substring-function.md)
+ [

# switch
](switch-function.md)
+ [

# toLower
](toLower-function.md)
+ [

# toString
](toString-function.md)
+ [

# toUpper
](toUpper-function.md)
+ [

# trim
](trim-function.md)
+ [

# truncDate
](truncDate-function.md)

# addDateTime


`addDateTime` adds or subtracts a unit of time from a datetime value. For example, `addDateTime(2,'YYYY',parseDate('02-JUL-2018', 'dd-MMM-yyyy') )` returns `02-JUL-2020`. You can use this function to perform date math on your date and time data. 

## Syntax


```
addDateTime(amount, period, datetime)
```

## Arguments


 *amount*   
A positive or negative integer value that represents the amount of time that you want to add or subtract from the provided datetime field. 

 *period*   
The unit of time that you want to add to or subtract from the datetime, enclosed in quotes. Valid periods are as follows:   
+ YYYY: years
+ Q: quarters
+ MM: months
+ DD: days
+ WK: weeks (the week starts on Sunday in Amazon Quick Sight)
+ HH: hours
+ MI: minutes
+ SS: seconds
+ MS: milliseconds

 *datetime*   
The date or time that you want to perform date math on. 

## Return type


Datetime

## Example


Let's say you have a field called `purchaseDate` that has the following values.

```
2018 May 13 13:24
2017 Jan 31 23:06
2016 Dec 28 06:45
```

Using the following calculations, `addDateTime` modifies the values as shown following.

```
addDateTime(-2, 'YYYY', purchaseDate)

2016 May 13 13:24
2015 Jan 31 23:06
2014 Dec 28 06:45


addDateTime(4, 'DD', purchaseDate)

2018 May 17 13:24
2017 Feb 4 23:06
2017 Jan 1 06:45


addDateTime(20, 'MI', purchaseDate)

2018 May 13 13:44
2017 Jan 31 23:26
2016 Dec 28 07:05
```

# addWorkDays


`addWorkDays` adds or subtracts a designated number of work days to or from a given date value. The function returns the work-day date that falls the designated number of work days after or before the given input date. 

## Syntax


```
addWorkDays(initDate, numWorkDays)
```

## Arguments


*initDate*  
A valid non-NULL date that acts as the start date for the calculation.   
+ **Dataset field** – Any `date` field from the dataset that you are adding this function to.
+ **Date function** – Any date output from another `date` function, for example `parseDate`, `epochDate`, `addDateTime`, and so on.  
**Example**  

  ```
  addWorkDays(epochDate(1659484800), numWorkDays)
  ```
+ **Calculated fields** – Any Quick Sight calculated field that returns a `date` value.  
**Example**  

  ```
  calcFieldStartDate = addDateTime(10, 'DD', startDate)
  addWorkDays(calcFieldStartDate, numWorkDays)
  ```
+ **Parameters** – Any Quick Sight `datetime` parameter.  
**Example**  

  ```
  addWorkDays($paramStartDate, numWorkDays)
  ```
+ Any combination of the above stated argument values.

 *numWorkDays*   
A non-NULL integer that represents the number of work days that you want to add to or subtract from the start date.   
+ **Literal** – An integer literal typed directly in the expression editor.  
**Example**  

  ```
  addWorkDays(initDate, 10)
  ```
+ **Dataset field** – Any integer field from the dataset.  
**Example**  

  ```
  addWorkDays(initDate, numWorkDays)
  ```
+ **Scalar function or calculation** – Any scalar function or calculation that returns an integer, for example `decimalToInt`, `abs`, and so on.  
**Example**  

  ```
  addWorkDays(initDate, decimalToInt(sqrt(abs(numWorkDays))))
  ```
+ **Calculated field** – Any Quick Sight calculated field that returns an `integer` value.  
**Example**  

  ```
  someOtherIntegerCalcField = (num_days * 2) + 12
  addWorkDays(initDate, someOtherIntegerCalcField)
  ```
+ **Parameter** – Any Quick Sight `integer` parameter.  
**Example**  

  ```
  addWorkDays(initDate, $param_numWorkDays)
  ```
+ Any combination of the above stated argument values.

## Return type


Date 

## Output values


Expected output values include:
+ A date later than the input date (when `numWorkDays` is positive)
+ A date earlier than the input date (when `numWorkDays` is negative)
+ NULL when one or both of the arguments get a null value from the `dataset field`.

## Input errors


Disallowed argument values cause errors, as shown in the following examples.
+ Using a literal NULL as an argument in the expression is disallowed.  
**Example**  

  ```
  addWorkDays(NULL, numWorkDays) 
  ```  
**Example**  

  ```
  Error
  At least one of the arguments in this function does not have correct type. 
  Correct the expression and choose Create again.
  ```
+ Using a string literal as an argument, or any other data type other than a date, in the expression is disallowed. In the following example, the string **"2022-08-10"** looks like a date, but it is actually a string. To use it, you would have to use a function that converts to a date data type.  
**Example**  

  ```
  addWorkDays("2022-08-10", 10)
  ```  
**Example**  

  ```
  Error
  Expression addWorkDays("2022-08-10", numWorkDays) for function addWorkDays has 
  incorrect argument type addWorkDays(String, Number). 
  Function syntax expects Date, Integer.
  ```

## Example


A positive `numWorkDays` argument yields a date later than the input date. A negative `numWorkDays` argument yields a date earlier than the input date. A zero `numWorkDays` argument yields the same value as the input date, whether or not it falls on a work day or a weekend.

The `addWorkDays` function operates at `DAY` granularity. Accuracy is not preserved at any granularity finer or coarser than the `DAY` level.

```
addWorkDays(initDate, numWorkDays)
```

Let’s assume there is a field named `employmentStartDate` with the following values: 

```
2022-08-10
2022-08-06
2022-08-07
```

Using this field and the following calculations, `addWorkDays` returns the modified values shown below:

```
addWorkDays(employmentStartDate, 7)

2022-08-19 
2022-08-16 
2022-08-16 

addWorkDays(employmentStartDate, -5)

2022-08-02 
2022-08-01 
2022-08-03 

addWorkDays(employmentStartDate, 0)

2022-08-10 
2022-08-06 
2022-08-07
```

The following example calculates the total pro-rated bonus to be paid to each employee for 2 years based on how many days each employee has actually worked.

```
last_day_of_work = addWorkDays(employment_start_date, 730)
total_days_worked = netWorkDays(employment_start_date, last_day_of_work)
total_bonus = total_days_worked * bonus_per_day
```



# Abs


`abs` returns the absolute value of a given expression. 

## Syntax


```
abs(expression)
```

## Arguments


 *expression*   
The expression must be numeric. It can be a field name, a literal value, or another function. 
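
## Example


The following is a minimal sketch; the `revenue` and `target` field names are hypothetical, not from this guide.

```
abs({revenue} - {target})
```

For a row where `revenue` is 50 and `target` is 80, the expression evaluates `50 - 80 = -30` and returns `30`.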

# Ceil


`ceil` rounds a decimal value to the next highest integer. For example, `ceil(29.02)` returns `30`.

## Syntax


```
ceil(decimal)
```

## Arguments


 *decimal*   
A field that uses the decimal data type, a literal value like **17.62**, or a call to another function that outputs a decimal.

## Return type


Integer

## Example


The following example rounds a decimal field to the next highest integer.

```
ceil(salesAmount)
```

The following are the given field values.

```
20.13
892.03
57.54
```

For these field values, the following values are returned.

```
21
893
58
```

# Coalesce


`coalesce` returns the value of the first argument that is not null. When a non-null value is found, the remaining arguments in the list are not evaluated. If all arguments are null, the result is null. 0-length strings are valid values and are not considered equivalent to null.

## Syntax


```
coalesce(expression1, expression2 [, expression3, ...])
```

## Arguments


`coalesce` takes two or more expressions as arguments. All of the expressions must have the same data type or be able to be implicitly cast to the same data type.

 *expression*   
The expression can be numeric, datetime, or string. It can be a field name, a literal value, or another function. 

## Return type


`coalesce` returns a value of the same data type as the input arguments.

## Example


The following example retrieves a customer's billing address if it exists, her street address if there is no billing address, or returns "No address listed" if neither address is available.

```
coalesce(billingAddress, streetAddress, 'No address listed')
```

# Concat

`concat` concatenates two or more strings.

## Syntax


```
concat(expression1, expression2 [, expression3 ...])
```

## Arguments


`concat` takes two or more string expressions as arguments. 

 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

## Return type


String

## Examples


The following example concatenates three string fields and adds appropriate spacing.

```
concat(salutation, ' ', firstName, ' ', lastName)
```

The following are the given field values.

```
salutation     firstName          lastName
-------------------------------------------------------
Ms.            Li                  Juan
Dr.            Ana Carolina        Silva
Mr.            Nikhil              Jayashankar
```

For these field values, the following values are returned.

```
Ms. Li Juan
Dr. Ana Carolina Silva
Mr. Nikhil Jayashankar
```

The following example concatenates two string literals.

```
concat('Hello', 'world')
```

The following value is returned.

```
Helloworld
```

# contains

`contains` evaluates if the substring that you specify exists within an expression. If the expression contains the substring, `contains` returns true; otherwise, it returns false.

## Syntax


```
contains(expression, substring, string-comparison-mode)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

 *substring*   
The set of characters to check against the *expression*. The substring can occur one or more times in the *expression*.

 *string-comparison-mode*   
(Optional) Specifies the string comparison mode to use:  
+ `CASE_SENSITIVE` – String comparisons are case-sensitive. 
+ `CASE_INSENSITIVE` – String comparisons are case-insensitive.
This value defaults to `CASE_SENSITIVE` when omitted.

## Return type


Boolean

## Examples


### Default case sensitive example


The following case sensitive example evaluates if `state_nm` contains **New**.

```
contains(state_nm, "New")
```

The following are the given field values.

```
New York
new york
```

For these field values, the following values are returned.

```
true
false
```

### Case insensitive example


The following case insensitive example evaluates if `state_nm` contains **new**.

```
contains(state_nm, "new", CASE_INSENSITIVE)
```

The following are the given field values.

```
New York
new york
```

For these field values, the following values are returned.

```
true
true
```

### Example with conditional statements


The contains function can be used as the conditional statement within the following If functions: [avgIf](https://docs.amazonaws.cn/quicksight/latest/user/avgIf-function.html), [minIf](https://docs.amazonaws.cn/quicksight/latest/user/minIf-function.html), [distinct_countIf](https://docs.amazonaws.cn/quicksight/latest/user/distinct_countIf-function.html), [countIf](https://docs.amazonaws.cn/quicksight/latest/user/countIf-function.html), [maxIf](https://docs.amazonaws.cn/quicksight/latest/user/maxIf-function.html), [medianIf](https://docs.amazonaws.cn/quicksight/latest/user/medianIf-function.html), [stdevIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevIf-function.html), [stdevpIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevpIf-function.html), [sumIf](https://docs.amazonaws.cn/quicksight/latest/user/sumIf-function.html), [varIf](https://docs.amazonaws.cn/quicksight/latest/user/varIf-function.html), and [varpIf](https://docs.amazonaws.cn/quicksight/latest/user/varpIf-function.html). 

The following example sums `Sales` only if `state_nm` contains **New**.

```
sumIf(Sales,contains(state_nm, "New"))
```

### Does NOT contain example


The conditional `NOT` operator can be used to evaluate if the expression does not contain the specified substring. 

```
NOT(contains(state_nm, "New"))
```

### Example using numeric values


Numeric values can be used in the expression or substring arguments by applying the `toString` function.

```
contains(state_nm, toString(5) )
```

# decimalToInt


`decimalToInt` converts a decimal value to the integer data type by stripping off the decimal point and any numbers after it. `decimalToInt` does not round up. For example, `decimalToInt(29.99)` returns `29`.

## Syntax


```
decimalToInt(decimal)
```

## Arguments


 *decimal*   
A field that uses the decimal data type, a literal value like **17.62**, or a call to another function that outputs a decimal.

## Return type


Integer

## Example


The following example converts a decimal field to an integer.

```
decimalToInt(salesAmount)
```

The following are the given field values.

```
 20.13
892.03
 57.54
```

For these field values, the following values are returned.

```
 20
892
 57
```

# dateDiff


`dateDiff` returns the difference in days between two date fields. If you include a value for the period, `dateDiff` returns the difference in the period interval, rather than in days.

## Syntax


```
dateDiff(date1, date2,[period])
```

## Arguments


`dateDiff` takes two dates as arguments. Specifying a period is optional.

 *date 1*   
The first date in the comparison. A date field or a call to another function that outputs a date. 

 *date 2*   
The second date in the comparison. A date field or a call to another function that outputs a date. 

 *period*   
The time unit in which you want the difference returned, enclosed in quotes. Valid periods are as follows:  
+ YYYY: years
+ Q: quarters
+ MM: months
+ DD: days
+ WK: weeks (the week starts on Sunday in Amazon Quick Sight)
+ HH: hours
+ MI: minutes
+ SS: seconds
+ MS: milliseconds

## Return type


Integer

## Example


The following example returns the difference between two dates.

```
dateDiff(orderDate, shipDate, "MM")
```

The following are the given field values.

```
orderDate          shipDate
=============================
01/01/18            03/05/18
09/13/17            10/20/17
```

For these field values, the following values are returned.

```
2
1
```

# endsWith

`endsWith` evaluates if the expression ends with a substring that you specify. If the expression ends with the substring, `endsWith` returns true, and otherwise it returns false.

## Syntax


```
endsWith(expression, substring, string-comparison-mode)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

 *substring*   
The set of characters to check against the *expression*. The substring can occur one or more times in the *expression*.

 *string-comparison-mode*   
(Optional) Specifies the string comparison mode to use:  
+ `CASE_SENSITIVE` – String comparisons are case-sensitive. 
+ `CASE_INSENSITIVE` – String comparisons are case-insensitive.
This value defaults to `CASE_SENSITIVE` when omitted.

## Return type


Boolean

## Examples


### Default case sensitive example


The following case sensitive example evaluates if `state_nm` ends with **"York"**.

```
endsWith(state_nm, "York")
```

The following are the given field values.

```
New York
new york
```

For these field values, the following values are returned.

```
true
false
```

### Case insensitive example


The following case insensitive example evaluates if `state_nm` ends with **"york"**.

```
endsWith(state_nm, "york", CASE_INSENSITIVE)
```

The following are the given field values.

```
New York
new york
```

For these field values, the following values are returned.

```
true
true
```

### Example with conditional statements


The `endsWith` function can be used as the conditional statement within the following If functions: [avgIf](https://docs.amazonaws.cn/quicksight/latest/user/avgIf-function.html), [minIf](https://docs.amazonaws.cn/quicksight/latest/user/minIf-function.html), [distinct_countIf](https://docs.amazonaws.cn/quicksight/latest/user/distinct_countIf-function.html), [countIf](https://docs.amazonaws.cn/quicksight/latest/user/countIf-function.html), [maxIf](https://docs.amazonaws.cn/quicksight/latest/user/maxIf-function.html), [medianIf](https://docs.amazonaws.cn/quicksight/latest/user/medianIf-function.html), [stdevIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevIf-function.html), [stdevpIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevpIf-function.html), [sumIf](https://docs.amazonaws.cn/quicksight/latest/user/sumIf-function.html), [varIf](https://docs.amazonaws.cn/quicksight/latest/user/varIf-function.html), and [varpIf](https://docs.amazonaws.cn/quicksight/latest/user/varpIf-function.html). 

The following example sums `Sales` only if `state_nm` ends with **"York"**.

```
sumIf(Sales,endsWith(state_nm, "York"))
```

### Does NOT end with example


The conditional `NOT` operator can be used to evaluate if the expression does not end with the specified substring. 

```
NOT(endsWith(state_nm, "York"))
```

### Example using numeric values


Numeric values can be used in the expression or substring arguments by applying the `toString` function.

```
endsWith(state_nm, toString(5) )
```

# epochDate


`epochDate` converts an epoch date into a standard date in the format yyyy-MM-dd**T**kk:mm:ss.SSS**Z**, using the format pattern syntax specified in [Class DateTimeFormat](http://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html) in the Joda project documentation. An example is `2015-10-15T19:11:51.003Z`. 

`epochDate` is supported for use with analyses based on datasets stored in SPICE.

## Syntax


```
epochDate(epochdate)
```

## Arguments


 *epochdate*   
An epoch date, which is an integer representation of a date as the number of seconds since 00:00:00 UTC on January 1, 1970.   
*epochdate* must be an integer. It can be the name of a field that uses the integer data type, a literal integer value, or a call to another function that outputs an integer. If the integer value is longer than 10 digits, the digits after the 10th place are discarded.

## Return type


Date

## Example


The following example converts an epoch date to a standard date.

```
epochDate(3100768000)
```

The following value is returned.

```
2068-04-04T12:26:40.000Z
```

# Exp


`exp` returns the base of natural log e raised to the power of a given expression. 

## Syntax


```
exp(expression)
```

## Arguments


 *expression*   
The expression must be numeric. It can be a field name, a literal value, or another function. 
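
## Example


The following is a minimal sketch; the `growthRate` field name is hypothetical, not from this guide.

```
exp(growthRate)
```

For a `growthRate` value of 1, the expression returns *e*, approximately 2.71828.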

# Extract


`extract` returns a specified portion of a date value. Requesting a time-related portion of a date that doesn't contain time information returns 0.

## Syntax


```
extract(period, date)
```

## Arguments


 *period*   
The period that you want extracted from the date value. Valid periods are as follows:  
+ YYYY: This returns the year portion of the date.
+ Q: This returns the quarter that the date belongs to (1–4). 
+ MM: This returns the month portion of the date.
+ DD: This returns the day portion of the date.
+ WD: This returns the day of the week as an integer, with Sunday as 1.
+ HH: This returns the hour portion of the date.
+ MI: This returns the minute portion of the date.
+ SS: This returns the second portion of the date.
+ MS: This returns the millisecond portion of the date.
**Note**  
Extracting milliseconds is not supported in Presto databases below version 0.216.

 *date*   
A date field or a call to another function that outputs a date.

## Return type


Integer

## Example


The following example extracts the day from a date value.

```
extract('DD', orderDate)
```

The following are the given field values.

```
orderDate
=========
01/01/14  
09/13/16
```

For these field values, the following values are returned.

```
01
13
```

# Floor


`floor` decrements a decimal value to the next lowest integer. For example, `floor(29.08)` returns `29`.

## Syntax


```
floor(decimal)
```

## Arguments


 *decimal*   
A field that uses the decimal data type, a literal value like **17.62**, or a call to another function that outputs a decimal.

## Return type


Integer

## Example


The following example decrements a decimal field to the next lowest integer.

```
floor(salesAmount)
```

The following are the given field values.

```
20.13
892.03
57.54
```

For these field values, the following values are returned.

```
20
892
57
```

# formatDate


`formatDate` formats a date using a pattern you specify. When you are preparing data, you can use `formatDate` to reformat the date. To reformat a date in an analysis, you choose the format option from the context menu on the date field.

## Syntax


```
formatDate(date, ['format'])
```

## Arguments


 *date*   
A date field or a call to another function that outputs a date.

 *format*   
(Optional) A string containing the format pattern to apply. This argument accepts the format patterns specified in [Supported date formats](https://docs.amazonaws.cn/quicksight/latest/user/supported-date-formats.html).  
If you don't specify a format, this string defaults to `yyyy-MM-ddTkk:mm:ss:SSS`.

## Return type


String

## Example


The following example formats a UTC date.

```
formatDate(orderDate, 'dd-MMM-yyyy')
```

The following are the given field values.

```
orderDate
=========
2012-12-14T00:00:00.000Z  
2013-12-29T00:00:00.000Z
2012-11-15T00:00:00.000Z
```

For these field values, the following values are returned.

```
14-Dec-2012
29-Dec-2013
15-Nov-2012
```
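Quick uses Joda-style format patterns, which differ from Python's `strftime` codes. As a rough cross-reference (an illustrative sketch, not Quick code), the pattern `'dd-MMM-yyyy'` corresponds to `'%d-%b-%Y'`:

```python
from datetime import datetime

# Joda 'dd-MMM-yyyy' roughly corresponds to strftime '%d-%b-%Y'.
order_date = datetime(2012, 12, 14)
print(order_date.strftime('%d-%b-%Y'))  # 14-Dec-2012
```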

## Example


If the date contains single quotes or apostrophes, for example `yyyyMMdd'T'HHmmss`, you can handle this date format by using one of the following methods.
+ Enclose the entire date in double quotes, as shown in the following example:

  ```
  formatDate({myDateField}, "yyyyMMdd'T'HHmmss")
  ```
+ Escape the single quotes or apostrophes by adding a backslash ( `\` ) to the left of them, as shown in the following example: 

  ```
  formatDate({myDateField}, 'yyyyMMdd\'T\'HHmmss')
  ```

# Ifelse


`ifelse` evaluates a set of *if*, *then* expression pairings, and returns the value of the *then* argument for the first *if* argument that evaluates to true. If none of the *if* arguments evaluate to true, then the value of the *else* argument is returned.

## Syntax


```
ifelse(if-expression-1, then-expression-1 [, if-expression-n, then-expression-n ...], else-expression)
```

## Arguments


`ifelse` requires one or more *if*,*then* expression pairings, and requires exactly one expression for the *else* argument. 

 *if-expression*   
The expression to be evaluated as true or not. It can be a field name like **address1**, a literal value like **'Unknown'**, or another function like `toString(salesAmount)`. An example is `isNotNull(FieldName)`.   
If you use multiple AND and OR operators in the `if` argument, enclose statements in parentheses to identify processing order. For example, the following `if` argument returns records with a month of 1, 2, or 5 and a year of 2000.  

```
ifelse((month = 5 OR month < 3) AND year = 2000, 'yes', 'no')
```
The next `if` argument uses the same operators, but returns records with a month of 5 and any year, or with a month of 1 or 2 and a year of 2000.  

```
ifelse(month = 5 OR (month < 3 AND year = 2000), 'yes', 'no')
```

 *then-expression*   
The expression to return if its *if* argument is evaluated as true. It can be a field name like **address1**, a literal value like **'Unknown'**, or a call to another function. The expression must have the same data type as the other `then` arguments and the `else` argument. 

 *else-expression*   
The expression to return if none of the *if* arguments evaluate as true. It can be a field name like **address1**, a literal value like **'Unknown'**, or another function like `toString(salesAmount)`. The expression must have the same data type as all of the `then` arguments. 

## Return type


`ifelse` returns a value of the same data type as the values in *then-expression*. All data returned by the *then* and *else* expressions must be of the same data type, or be converted to the same data type. 

## Examples


The following example generates a column of aliases for field `country`.

```
ifelse(country = "United States", "US", country = "China", "CN", country = "India", "IN", "Others") 
```

For use cases like this one, where each value in a field is evaluated against a list of literals and the result corresponding to the first matching value is returned, the [switch](https://docs.amazonaws.cn/quicksight/latest/user/switch-function.html) function is recommended to simplify your work. The previous example can be rewritten as the following statement using `switch`:

```
switch(country,"United States","US","China","CN","India","IN","Others")
```
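The lookup behavior of `switch` can be modeled in Python with a dictionary and a default value (an illustrative sketch, not Quick code):

```python
# Map each country to its alias; unlisted countries fall through to the default.
aliases = {"United States": "US", "China": "CN", "India": "IN"}

def country_alias(country):
    return aliases.get(country, "Others")

print(country_alias("China"))   # CN
print(country_alias("Brazil"))  # Others
```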

The following example categorizes sales per customer into human-readable levels.

```
ifelse(salesPerCustomer < 1000, 'VERY_LOW', salesPerCustomer < 10000, 'LOW', salesPerCustomer < 100000, 'MEDIUM', 'HIGH')
```

The following example uses AND, OR, and NOT to compare multiple expressions with conditional operators. It tags customers who are NOT in Washington or Oregon and who made more than 10 orders with a special promotion. If no values match, the value `'n/a'` is used.

```
ifelse(( (NOT (State = 'WA' OR State =  'OR')) AND Orders > 10),  'Special Promotion XYZ',  'n/a')
```

The following example uses only OR to generate a new column that contains the name of the continent that corresponds to each `country`.

```
ifelse(country = "United States" OR country = "Canada", "North America", country = "China" OR country = "India" OR country = "Japan", "Asia", "Others")
```

The previous example can be simplified as shown in the next example. The following example uses `ifelse` and [in](https://docs.amazonaws.cn/quicksight/latest/user/in-function.html) to create a value in a new column for any row where the tested value is in a literal list. You could use `ifelse` with [notIn](https://docs.amazonaws.cn/quicksight/latest/user/notIn-function.html) as well.

```
ifelse(in(country,["United States", "Canada"]), "North America", in(country,["China","Japan","India"]),"Asia","Others")
```

Authors can save a literal list in a multivalue parameter and use it in the [in](https://docs.amazonaws.cn/quicksight/latest/user/in-function.html) or [notIn](https://docs.amazonaws.cn/quicksight/latest/user/notIn-function.html) functions. The following example is an equivalent of the previous example, except that the literal lists are stored in two multivalue parameters. 

```
ifelse(in(country,${NorthAmericaCountryParam}), "North America", in(country,${AsiaCountryParam}),"Asia", "Others") 
```

The following example assigns a group to a sales record based on the sales total. The structure of each `if-then` phrase mimics the behavior of *between*, a keyword that doesn't currently work in calculated field expressions. For example, the result of the comparison `salesTotal >= 0 AND salesTotal < 500` returns the same values as the SQL comparison `salesTotal between 0 and 499`.

```
ifelse(salesTotal >= 0 AND salesTotal < 500, 'Group 1', salesTotal >= 500 AND salesTotal < 1000, 'Group 2', 'Group 3')
```
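Because `ifelse` returns the *then* value for the first true condition, evaluation stops at the first match. A Python model of the same first-match-wins evaluation with between-style ranges (illustrative, not Quick code):

```python
def sales_group(sales_total):
    # Conditions are tested in order; the first true branch wins.
    # Each range mimics SQL's "between": lower bound inclusive, upper exclusive.
    if 0 <= sales_total < 500:
        return 'Group 1'
    elif 500 <= sales_total < 1000:
        return 'Group 2'
    else:
        return 'Group 3'

print(sales_group(250))   # Group 1
print(sales_group(750))   # Group 2
print(sales_group(5000))  # Group 3
```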

The following example tests for a NULL value by using `coalesce` to return the first non-NULL value. Instead of needing to remember the meaning of a NULL in a date field, you can use a readable description instead. If the disconnect date is NULL, the example returns the suspend date, unless both of those are NULL. Then `coalesce(DiscoDate, SuspendDate, '12/31/2491')` returns `'12/31/2491'`. The return value must match the other data types. This date might seem like an unusual value, but a date in the 25th century reasonably simulates the "end of time," defined as the highest date in a data mart. 

```
ifelse (  (coalesce(DiscoDate, SuspendDate, '12/31/2491') = '12/31/2491'),  'Active subscriber', 'Inactive subscriber')
```
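The NULL handling of `coalesce` can be modeled in Python with `None` standing in for NULL (an illustrative sketch, not Quick code): return the first non-null argument, if any.

```python
def coalesce(*values):
    # Return the first non-None (non-NULL) value, or None if all are None.
    return next((v for v in values if v is not None), None)

print(coalesce(None, None, '12/31/2491'))          # 12/31/2491
print(coalesce(None, '06/01/2020', '12/31/2491'))  # 06/01/2020
```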

The following shows a more complex example in a more readable format, just to show that you don't need to compress your code all into one long line. This example provides for multiple comparisons of the value a survey result. It handles potential NULL values for this field and categorizes two acceptable ranges. It also labels one range that needs more testing and another that's not valid (out of range). For all remaining values, it applies the `else` condition, and labels the row as needing a retest three years after the date on that row. 

```
ifelse
( 
    isNull({SurveyResult}), 'Untested',  
    {SurveyResult}=1, 'Range 1', 
    {SurveyResult}=2, 'Range 2', 
    {SurveyResult}=3, 'Need more testing',
    {SurveyResult}=99, 'Out of Range',
    concat  
    (
        'Retest by ', 
        toString    
        (
           addDateTime(3, "YYYY", {Date}) 
        )
    )
)
```

The following example assigns a "manually" created region name to a group of states. It also uses spacing and comments, wrapped in `/* */`, to make it easier to maintain the code. 

```
ifelse 
(    /* NE REGION*/
     locate('New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire',{State}) > 0,
    'Northeast',

     /* SE REGION*/
     locate('Georgia, Alabama, South Carolina, Louisiana',{State}) > 0,
    'Southeast',

    'Other Region'
)
```

The logic for the region tagging breaks down as follows:

1. We list the states that we want for each region, enclosing each list in quotation marks to make each list a string, as follows: 
   + `'New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire'`
   + `'Georgia, Alabama, South Carolina, Louisiana'`
   + You can add more sets, or use countries, cities, provinces, or What3Words if you want. 

1. We ask if the value for `State` (for each row) is found in the list, by using the `locate` function to return a nonzero value if the state is found in the list, as follows.

   ```
   locate('New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire',{State}) 
   
   and
   
   locate('Georgia, Alabama, South Carolina, Louisiana',{State})
   ```

1. The `locate` function returns a number instead of a `TRUE` or `FALSE`, but `ifelse` requires the `TRUE`/`FALSE` Boolean value. To get around this, we can compare the result of `locate` to a number. If the state is in the list, the return value is greater than zero.

   1. Ask if the state is present.

      ```
      locate('New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire',{State}) > 0
      ```

   1. If it's present, label the row with the specific region, in this case the Northeast region.

      ```
      /*The if expression:*/     locate('New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire',{State}) > 0,
      /*The then expression:*/   'Northeast',
      ```

1. Because we have states that aren't in a list, and because `ifelse` requires a single `else` expression, we provide `'Other Region'` as the label for the leftover states. 

   ```
   /*The if expression:*/     locate('New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire',{State}) > 0,
   /*The then expression:*/   'Northeast',
   /*The else expression:*/   'Other Region'
   ```

1. We wrap all that in the `ifelse( )` function to get the final version. The following example leaves out the Southeast region states that were in the original. You can add them back in place of the *`<insert more regions here>`* tag. 

   If you want to add more regions, you can construct more copies of those two lines and alter the list of states to suit your purpose. You can change the region name to something that suits you, and change the field name from `State` to anything that you need. 

   ```
   ifelse 
   (
   /*The if expression:*/     locate('New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire',{State}) > 0,
   /*The then expression:*/   'Northeast',
   
   /*<insert more regions here>*/
   
   /*The else expression:*/   'Other Region'
   )
   ```
**Note**  
There are other ways to do the initial comparison for the if expression. For example, suppose that you pose the question "What states are not missing from this list?" rather than "Which states are on the list?" If you do, you might phrase it differently. You might compare the locate statement to zero to find values that are missing from the list, and then use the NOT operator to classify them as "not missing," as follows.  

   ```
   /*The if expression:*/      NOT (locate('New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire',{State}) = 0),
   ```
Both versions are correct. The version that you choose should make the most sense to you and your team, so you can maintain it easily. If all the options seem equal, choose the simplest.
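The `locate`-based membership test can be modeled in Python with `str.find` (illustrative, not Quick code). Note that `find` returns -1 when the substring is missing, so the "found" test becomes `>= 0` instead of `> 0`:

```python
NE_STATES = 'New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire'
SE_STATES = 'Georgia, Alabama, South Carolina, Louisiana'

def region(state):
    # Mirror the ifelse: the first matching list wins,
    # and everything else falls through to 'Other Region'.
    if NE_STATES.find(state) >= 0:
        return 'Northeast'
    elif SE_STATES.find(state) >= 0:
        return 'Southeast'
    return 'Other Region'

print(region('Vermont'))  # Northeast
print(region('Georgia'))  # Southeast
print(region('Texas'))    # Other Region
```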

# in

`in` evaluates whether an expression exists within a literal list. If the list contains the expression, `in` returns true; otherwise it returns false. `in` is case sensitive for string-type inputs.

`in` accepts two kinds of literal lists: a manually entered list or a [multivalue parameter](https://docs.amazonaws.cn/quicksight/latest/user/parameters-in-quicksight.html).

## Syntax


Using a manually entered list:

```
in(expression, [literal-1, ...])  
```

Using a multivalue parameter:

```
in(expression, $multivalue_parameter)
```

## Arguments


 *expression*   
The expression to be compared with the elements in the literal list. It can be a field name like `address`, a literal value like **'Unknown'**, a single-value parameter, or a call to another scalar function, provided that function is not an aggregate function or a table calculation.

 *literal list*   
(required) This can be a manually entered list or a multivalue parameter. This argument accepts up to 5,000 elements. However, in a direct query to a third party data source, for example Oracle or Teradata, the restriction can be smaller.  
+ ***manually entered list*** – One or more literal values in a list to be compared with the expression. The list should be enclosed in square brackets. All the literals to compare must have the same datatype as the expression. 
+ ***multivalue parameter*** – A pre-defined multivalue parameter passed in as a literal list. The multivalue parameter must have the same datatype as the expression. 


## Return type


Boolean: TRUE/FALSE

## Example with a static list


The following example evaluates the `origin_state_name` field for values in a list of strings. When comparing string-type input, `in` supports only case-sensitive comparison.

```
in(origin_state_name,["Georgia", "Ohio", "Texas"])
```

The following are the given field values.

```
"Washington"
"ohio"
"Texas"
```

For these field values the following values are returned.

```
false
false
true
```

Only the third return value is true, because "Texas" is the only value that appears in the list; "ohio" doesn't match "Ohio" because the comparison is case sensitive.

The following example evaluates the `fl_date` field for values in a list of strings. To match the type, `toString` is used to cast the date type to the string type.

```
in(toString(fl_date),["2015-05-14","2015-05-15","2015-05-16"])
```

![\[An image of the results of the function example, shown in table form.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/in-function-example-manual-list.png)


Literals and NULL values are supported in the expression argument to be compared with the literals in the list. Both of the following examples generate a new column of TRUE values. 

```
in("Washington",["Washington","Ohio"])
```

```
in(NULL,[NULL,"Ohio"])
```
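The case sensitivity and NULL handling of `in` can be modeled in Python, using `None` to stand in for NULL (an illustrative sketch, not Quick code):

```python
def in_list(value, literals):
    # Membership is exact (case sensitive for strings); a NULL (None)
    # expression matches only a NULL element in the list.
    return value in literals

print(in_list("Texas", ["Georgia", "Ohio", "Texas"]))  # True
print(in_list("ohio", ["Georgia", "Ohio", "Texas"]))   # False: case sensitive
print(in_list(None, [None, "Ohio"]))                   # True
```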

## Example with a multivalue parameter


Let's say an author creates a [multivalue parameter](https://docs.amazonaws.cn/quicksight/latest/user/parameters-in-quicksight.html) that contains a list of all the state names. Then the author adds a control to allow the reader to select values from the list.

Next, the reader selects three values ("Georgia", "Ohio", and "Texas") from the parameter's dropdown list control. In this case, the following expression is equivalent to the first example, where those three state names are passed as the literal list to be compared with the `origin_state_name` field. 

```
in(origin_state_name, ${stateNameMultivalueParameter})
```

## Example with `ifelse`


`in` can be nested in other functions as a Boolean value. For example, authors can evaluate any expression against a list and return the value they want by using `in` with `ifelse`. The following example evaluates whether the `dest_state_name` of a flight is in a particular list of US states and returns a different category based on the comparison.

```
ifelse(in(dest_state_name,["Washington", "Oregon","California"]), "WestCoastUSState", "Other US State")
```

![\[An image of the results of the function example, shown in table form.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/in-function-with-ifelse.png)


# intToDecimal


`intToDecimal` converts an integer value to the decimal data type.

## Syntax


```
intToDecimal(integer)
```

## Arguments


 *int*   
A field that uses the integer data type, a literal value like **14**, or a call to another function that outputs an integer.

## Return type


Decimal(Fixed) in the legacy data preparation experience.

Decimal(Float) in the new data preparation experience.

## Example


The following example converts an integer field to a decimal.

```
intToDecimal(price)
```

The following are the given field values.

```
20
892
57
```

For these field values, the following values are returned.

```
20.0
892.0
57.0
```

You can apply formatting inside an analysis, for example to format `price` as currency. 

# isNotNull


`isNotNull` evaluates an expression to see if it is not null. If the expression is not null, `isNotNull` returns true, and otherwise it returns false.

## Syntax


```
isNotNull(expression)
```

## Arguments


 *expression*   
The expression to be evaluated as null or not. It can be a field name like **address1** or a call to another function that outputs a string. 

## Return type


Boolean

## Example


The following example evaluates the `salesAmount` field for null values.

```
isNotNull(salesAmount)
```

The following are the given field values.

```
20.13
(null)
57.54
```

For these field values, the following values are returned.

```
true
false
true
```

# isNull


`isNull` evaluates an expression to see if it is null. If the expression is null, `isNull` returns true, and otherwise it returns false.

## Syntax


```
isNull(expression)
```

## Arguments


 *expression*   
The expression to be evaluated as null or not. It can be a field name like **address1** or a call to another function that outputs a string. 

## Return type


Boolean

## Example


The following example evaluates the `salesAmount` field for null values.

```
isNull(salesAmount)
```

The following are the given field values.

```
20.13
(null)
57.54
```

For these field values, the following values are returned.

```
false
true
false
```

The following example tests for a NULL value in an `ifelse` statement, and returns a human-readable value instead.

```
ifelse( isNull({ActiveFlag}) , 'Inactive',  'Active') 
```

# isWorkDay


`isWorkDay` evaluates a given date-time value to determine if the value is a workday or not.

`isWorkDay` assumes a standard 5-day work week starting from Monday and ending on Friday. Saturday and Sunday are assumed to be weekends. The function always calculates its result at the `DAY` granularity and is exclusive of the given input date.

## Syntax


```
isWorkDay(inputDate)
```

## Arguments


 *inputDate*   
The date-time value that you want to evaluate. Valid values are as follows:  
+ Dataset fields: Any `date` field from the dataset that you are adding this function to.
+ Date Functions: Any date output from another `date` function, for example, `parseDate`.
+ Calculated fields: Any Quick calculated field that returns a `date` value.
+ Parameters: Any Quick `DateTime` parameter.

## Return type


Integer (`0` or `1`)

## Example


The following example determines whether or not the `application_date` field is a work day.

Let's assume that there's a field named `application_date` with the following values:

```
2022-08-10 
2022-08-06 
2022-08-07
```

When you use these field values and add the following calculation, `isWorkDay` returns the following values:

```
isWorkDay({application_date})     
                                                     
1
0
0
```
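The work-day test can be modeled in Python with `date.weekday()`, where Monday is 0 and Sunday is 6 (an illustrative sketch, not Quick code):

```python
from datetime import date

def is_work_day(d):
    # Monday (0) through Friday (4) are work days; Saturday and Sunday are not.
    return 1 if d.weekday() < 5 else 0

print(is_work_day(date(2022, 8, 10)))  # 1 (a Wednesday)
print(is_work_day(date(2022, 8, 6)))   # 0 (a Saturday)
print(is_work_day(date(2022, 8, 7)))   # 0 (a Sunday)
```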

The following example filters to employees whose employment ends on a work day and uses conditional formatting to show whether their employment began on a work day or a weekend:

```
is_start_date_work_day = isWorkDay(employment_start_date)
is_end_date_work_day = isWorkDay(employment_end_date)
```

![\[An image of the results of the function example, shown in table form.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/isWorkDay-example.png)


# Left

`left` returns the leftmost characters from a string, including spaces. You specify the number of characters to be returned. 

## Syntax


```
left(expression, limit)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

 *limit*   
The number of characters to be returned from *expression*, starting from the first character in the string.

## Return type


String

## Example


The following example returns the first 3 characters from a string.

```
left('Seattle Store #14', 3)
```

The following value is returned.

```
Sea
```

# Locate

`locate` finds a substring that you specify within another string, and returns the 1-based position of the first character of that substring's first occurrence. The function returns 0 if it doesn't find the substring.

## Syntax


```
locate(expression, substring, start)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

 *substring*   
The set of characters in *expression* that you want to locate. The substring can occur one or more times in *expression*.

 *start*   
(Optional) If *substring* occurs more than once, use *start* to identify where in the string the function should start looking for the substring. For example, suppose that you want to find the second occurrence of a substring and you think it typically occurs after the first 10 characters. You specify a *start* value of 10. The *start* position is 1-based.

## Return type


Integer

## Examples


The following example returns information about where the first occurrence of the substring 'and' appears in a string.

```
locate('1 and 2 and 3 and 4', 'and')
```

The following value is returned.

```
3
```

The following example returns information about where the first occurrence of the substring 'and' appears in a string after the fourth character.

```
locate('1 and 2 and 3 and 4', 'and', 4)
```

The following value is returned.

```
9
```
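The 1-based positions that `locate` returns can be modeled in Python (illustrative, not Quick code). Python's `str.find` is 0-based and returns -1 on a miss, so the model shifts both the start offset and the result by one:

```python
def locate(expression, substring, start=1):
    # str.find is 0-based; shift to the 1-based convention,
    # so -1 (not found) becomes 0.
    pos = expression.find(substring, start - 1)
    return pos + 1

print(locate('1 and 2 and 3 and 4', 'and'))     # 3
print(locate('1 and 2 and 3 and 4', 'and', 4))  # 9
print(locate('1 and 2 and 3 and 4', 'xyz'))     # 0
```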

# Log


`log` returns the base 10 logarithm of a given expression.

## Syntax


```
log(expression)
```

## Arguments


 *expression*   
The expression must be numeric. It can be a field name, a literal value, or another function. 

# Ln


`ln` returns the natural logarithm of a given expression. 

## Syntax


```
ln(expression)
```

## Arguments


 *expression*   
The expression must be numeric. It can be a field name, a literal value, or another function. 

# Ltrim

`ltrim` removes preceding blank space from a string.

## Syntax


```
ltrim(expression)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

## Return type


String

## Example


The following example removes the preceding spaces from a string.

```
ltrim('   Seattle Store #14')
```

The following value is returned.

```
Seattle Store #14
```

# Mod


Use the `mod` function to find the remainder after dividing the number by the divisor. You can use the `mod` function or the modulo operator (%) interchangeably.

## Syntax


```
mod(number, divisor)
```

```
number%divisor
```

## Arguments


 *number*   
The number is the positive integer that you want to divide and find the remainder for. 

 *divisor*   
The divisor is the positive integer that you are dividing by. If the divisor is zero, this function returns a divide-by-zero error.

## Example


The following examples return the modulo of 17 when dividing by 6. The first example uses the % operator, and the second example uses the mod function.

```
17%6
```

```
mod( 17, 6 )
```

The following value is returned.

```
5
```

# netWorkDays


`netWorkDays` returns, as an integer, the number of working days between the two provided date values. The dates can be date fields or custom date values generated with other Quick date functions, such as `parseDate` or `epochDate`. 

`netWorkDays` assumes a standard 5-day work week starting from Monday and ending on Friday. Saturday and Sunday are assumed to be weekends. The calculation is inclusive of both `startDate` and `endDate`. The function operates on and shows results for DAY granularity. 

## Syntax


```
netWorkDays(startDate, endDate)
```

## Arguments


 *startDate*   
A valid non-NULL date that acts as the start date for the calculation.   
+ Dataset fields: Any `date` field from the dataset that you are adding this function to.
+ Date Functions: Any date output from another `date` function, for example, `parseDate`.
+ Calculated fields: Any Quick calculated field that returns a `date` value.
+ Parameters: Any Quick `DateTime` parameter.
+ Any combination of the above stated argument values.

 *endDate*   
A valid non-NULL date that acts as the end date for the calculation.   
+ Dataset fields: Any `date` field from the dataset that you are adding this function to.
+ Date Functions: Any date output from another `date` function, for example, `parseDate`.
+ Calculated fields: Any Quick calculated field that returns a `date` value.
+ Parameters: Any Quick `DateTime` parameter.
+ Any combination of the above stated argument values.

## Return type


Integer 

## Output values


Expected output values include:
+ Positive integer (when *startDate* < *endDate*)
+ Negative integer (when *startDate* > *endDate*)
+ NULL when one or both of the arguments get a null value from the dataset field.

## Example


The following example returns the number of work days falling between two dates.

Let's assume that there are fields named `startDate` and `endDate`, and a calculated field with the following expression:

```
netWorkDays({startDate}, {endDate})
```

The following are the given field values.

```
startDate    endDate      netWorkDays
9/4/2022     9/11/2022    5
9/9/2022     9/2/2022     -6
9/10/2022    9/11/2022    0
9/12/2022    9/12/2022    1
```
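The inclusive, signed counting shown in the table can be modeled in Python (an illustrative sketch, not Quick code):

```python
from datetime import date, timedelta

def net_work_days(start, end):
    # Count Mon-Fri inclusively between the two dates; the sign follows
    # the argument order, so a start after the end yields a negative count.
    lo, hi = min(start, end), max(start, end)
    days = sum(1 for i in range((hi - lo).days + 1)
               if (lo + timedelta(days=i)).weekday() < 5)
    return days if start <= end else -days

print(net_work_days(date(2022, 9, 4), date(2022, 9, 11)))   # 5
print(net_work_days(date(2022, 9, 9), date(2022, 9, 2)))    # -6
print(net_work_days(date(2022, 9, 10), date(2022, 9, 11)))  # 0
print(net_work_days(date(2022, 9, 12), date(2022, 9, 12)))  # 1
```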

The following example calculates the number of days worked by each employee and the salary expended per day for each employee:

```
days_worked = netWorkDays({employment_start_date}, {employment_end_date})
salary_per_day = {salary}/{days_worked}
```

The following example filters to employees whose employment ends on a work day and uses conditional formatting to show whether their employment began on a work day or a weekend. Because `netWorkDays` requires both a start and an end date, the single-date checks use `isWorkDay`:

```
is_start_date_work_day = isWorkDay(employment_start_date)
is_end_date_work_day = isWorkDay(employment_end_date)
```

![\[An image of the results of the function example, shown in table form.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/netWorkDays-function-example.png)


# Now


For database datasets that directly query the database, `now` returns the current date and time using the settings and format specified by the database server. For SPICE and Salesforce datasets, `now` returns the UTC date and time, in the format `yyyy-MM-ddTkk:mm:ss:SSSZ` (for example, 2015-10-15T19:11:51:003Z). 

## Syntax


```
now()
```

## Return type


Date

# notIn

`notIn` evaluates whether an expression exists within a literal list. If the list doesn't contain the expression, `notIn` returns true; otherwise it returns false. `notIn` is case sensitive for string-type inputs.

`notIn` accepts two kinds of literal lists: a manually entered list or a [multivalue parameter](https://docs.amazonaws.cn/quicksight/latest/user/parameters-in-quicksight.html).

## Syntax


Using a manually entered list:

```
notIn(expression, [literal-1, ...])  
```

Using a multivalue parameter:

```
notIn(expression, $multivalue_parameter)
```

## Arguments


 *expression*   
The expression to be compared with the elements in the literal list. It can be a field name like `address`, a literal value like **'Unknown'**, a single-value parameter, or a call to another scalar function, provided that function is not an aggregate function or a table calculation.

 *literal list*   
(required) This can be a manually entered list or a multivalue parameter. This argument accepts up to 5,000 elements. However, in a direct query to a third party data source, for example Oracle or Teradata, the restriction can be smaller.  
+ ***manually entered list*** – One or more literal values in a list to be compared with the expression. The list should be enclosed in square brackets. All the literals to compare must have the same datatype as the expression. 
+ ***multivalue parameter*** – A pre-defined multivalue parameter passed in as a literal list. The multivalue parameter must have the same datatype as the expression. 


## Return type


Boolean: TRUE/FALSE

## Example with a manually entered list


The following example evaluates the `origin_state_name` field for values in a list of strings. When comparing string-type input, `notIn` supports only case-sensitive comparison.

```
notIn(origin_state_name,["Georgia", "Ohio", "Texas"])
```

The following are the given field values.

```
"Washington"
"ohio"
"Texas"
```

For these field values the following values are returned.

```
true
true
false
```

Only the third return value is false, because "Texas" is the only value that appears in the list; "ohio" doesn't match "Ohio" because the comparison is case sensitive.

The following example evaluates the `fl_date` field for values in a list of strings. To match the type, `toString` is used to cast the date type to the string type.

```
notIn(toString(fl_date),["2015-05-14","2015-05-15","2015-05-16"])
```

![\[An image of the results of the function example, shown in table form.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/notin-function-example-manual-list.png)


Literals and NULL values are supported in the expression argument to be compared with the literals in the list. Both of the following examples generate a new column of FALSE values. 

```
notIn("Washington",["Washington","Ohio"])
```

```
notIn(NULL,[NULL,"Ohio"])
```

## Example with a multivalue parameter


Let's say an author creates a [multivalue parameter](https://docs.amazonaws.cn/quicksight/latest/user/parameters-in-quicksight.html) that contains a list of all the state names. Then the author adds a control to allow the reader to select values from the list.

Next, the reader selects three values ("Georgia", "Ohio", and "Texas") from the parameter's dropdown list control. In this case, the following expression is equivalent to the first example, where those three state names are passed as the literal list to be compared with the `origin_state_name` field. 

```
notIn(origin_state_name, ${stateNameMultivalueParameter})
```

## Example with `ifelse`


`notIn` can be nested in other functions as a Boolean value. For example, authors can evaluate any expression against a list and return the value they want by using `notIn` with `ifelse`. The following example evaluates whether the `dest_state_name` of a flight is in a particular list of US states and returns a different category based on the comparison.

```
ifelse(notIn(dest_state_name,["Washington", "Oregon","California"]), "notWestCoastUSState", "WestCoastUSState")
```

![\[An image of the results of the function example, shown in table form.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/notin-function-with-ifelse.png)


# nullIf


`nullIf` compares two expressions. If they are equal, the function returns null. If they are not equal, the function returns the first expression.

## Syntax


```
nullIf(expression1, expression2)
```

## Arguments


`nullIf` takes two expressions as arguments. 

 *expression*   
The expression can be numeric, datetime, or string. It can be a field name, a literal value, or another function. 

## Return type


String

## Example


The following example returns null if the reason for a shipment delay is unknown.

```
nullIf(delayReason, 'unknown')
```

The following are the given field values.

```
delayReason
============
unknown         
back ordered 
weather delay
```

For these field values, the following values are returned.

```
(null)
back ordered 
weather delay
```
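To illustrate the semantics, here's a rough Python analogue (a sketch of the behavior, not Quick Sight's implementation; the function name is made up):

```python
def null_if(expression1, expression2):
    """Return None (null) when the two values are equal; otherwise return the first value."""
    return None if expression1 == expression2 else expression1

reasons = ["unknown", "back ordered", "weather delay"]
results = [null_if(r, "unknown") for r in reasons]
# results == [None, "back ordered", "weather delay"]
```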

# parseDate

`parseDate` parses a string to determine if it contains a date value, and returns a standard date in the format `yyyy-MM-ddTkk:mm:ss.SSSZ` (using the format pattern syntax specified in [Class DateTimeFormat](http://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html) in the Joda project documentation), for example 2015-10-15T19:11:51.003Z. This function returns all rows that contain a date in a valid format and skips any rows that don't, including rows that contain null values.

Amazon Quick Sight supports dates in the range from Jan 1, 1900 00:00:00 UTC to Dec 31, 2037 23:59:59 UTC. For more information, see [Supported date formats](https://docs.amazonaws.cn/quicksight/latest/user/supported-date-formats.html).

## Syntax


```
parseDate(expression, ['format'])
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'1/1/2016'**, or a call to another function that outputs a string.

 *format*   
(Optional) A string containing the format pattern that *expression* must match. For example, if you are using a field with data like **01/03/2016**, you specify the format 'MM/dd/yyyy'. If you don't specify a format, it defaults to `yyyy-MM-dd`. Rows whose data doesn't conform to *format* are skipped.   
Different date formats are supported based on the type of dataset used. For details of the supported date formats, see [the AWS documentation website](http://docs.amazonaws.cn/en_us/quick/latest/userguide/parseDate-function.html).

## Return type


Date

## Example


The following example evaluates `prodDate` to determine if it contains date values.

```
parseDate(prodDate, 'MM/dd/yyyy')
```

The following are the given field values.

```
prodDate
--------
01-01-1999
12/31/2006
1/18/1982 
7/4/2010
```

For these field values, the following rows are returned.

```
2006-12-31T00:00:00.000Z
1982-01-18T00:00:00.000Z
2010-07-04T00:00:00.000Z
```
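The row-skipping behavior can be sketched in Python, where the `MM/dd/yyyy` pattern corresponds to `%m/%d/%Y` (an illustrative analogue, not Quick Sight's parser):

```python
from datetime import datetime

def parse_date(value, fmt="%m/%d/%Y"):
    """Return a datetime for values matching fmt; None marks rows to skip."""
    try:
        return datetime.strptime(value, fmt)
    except (TypeError, ValueError):
        return None

rows = ["01-01-1999", "12/31/2006", "1/18/1982", "7/4/2010"]
# Keep only rows that match the format, as parseDate does;
# "01-01-1999" uses dashes, so it is skipped.
parsed = [d for d in (parse_date(r) for r in rows) if d is not None]
```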

# parseDecimal

`parseDecimal` parses a string to determine if it contains a decimal value. This function returns all rows that contain a decimal, integer, or null value, and skips any rows that don't. If the row contains an integer value, it is returned as a decimal with up to 4 decimal places. For example, a value of '2' is returned as '2.0'.

## Syntax


```
parseDecimal(expression)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'9.62'**, or a call to another function that outputs a string.

## Return type


Decimal(Fixed) in the legacy data preparation experience.

Decimal(Float) in the new data preparation experience.

## Example


The following example evaluates `fee` to determine if it contains decimal values.

```
parseDecimal(fee)
```

The following are the given field values.

```
fee
--------
2
2a
12.13
3b
3.9
(null)
198.353398
```

For these field values, the following rows are returned.

```
2.0
12.13
3.9
(null)
198.3533
```
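A rough Python sketch of this behavior, using the `decimal` module (illustrative only; Quick Sight's display trims trailing zeros, so '2' appears as '2.0' rather than '2.0000', and null rows pass through as null):

```python
from decimal import Decimal, InvalidOperation, ROUND_DOWN

def parse_decimal(value):
    """Parse a string to a Decimal with at most 4 decimal places; None marks rows to skip."""
    try:
        # ROUND_DOWN truncates extra digits, matching 198.353398 -> 198.3533
        return Decimal(value).quantize(Decimal("0.0001"), rounding=ROUND_DOWN)
    except InvalidOperation:
        return None  # non-numeric rows such as '2a' are skipped

rows = ["2", "2a", "12.13", "3b", "3.9", "198.353398"]
parsed = [d for d in map(parse_decimal, rows) if d is not None]
```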

# parseInt

`parseInt` parses a string to determine if it contains an integer value. This function returns all rows that contain a decimal, integer, or null value, and skips any rows that don't. If the row contains a decimal value, it is returned as the nearest integer, rounded down. For example, a value of '2.99' is returned as '2'.

## Syntax


```
parseInt(expression)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'3'**, or a call to another function that outputs a string.

## Return type


Integer

## Example


The following example evaluates `feeType` to determine if it contains integer values.

```
parseInt(feeType)
```

The following are the given field values.

```
feeType
--------
2
2.1
2a
3
3b
(null)
5
```

For these field values, the following rows are returned.

```
2
2
3
(null)
5
```
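The flooring and row-skipping behavior can be sketched in Python (an illustrative analogue, not Quick Sight's implementation):

```python
import math

def parse_int(value):
    """Parse a string to an integer, flooring decimals; None marks rows to skip."""
    try:
        return math.floor(float(value))  # '2.99' -> 2, rounded down
    except (TypeError, ValueError):
        return None  # non-numeric rows such as '2a' are skipped

rows = ["2", "2.1", "2a", "3", "3b", "5"]
parsed = [n for n in map(parse_int, rows) if n is not None]
# parsed == [2, 2, 3, 5]
```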

# parseJson

Use `parseJson` to extract values from a JSON object. 

If your dataset is stored in SPICE, you can use `parseJson` when you are preparing a dataset, but not in calculated fields during analysis.

For direct query, you can use `parseJson` both during data preparation and analysis. The `parseJson` function applies to either strings or to JSON native data types, depending on the dialect, as shown in the following table.


| Dialect | Type | 
| --- | --- | 
| PostgreSQL | JSON | 
| Amazon Redshift | String | 
| Microsoft SQL Server | String | 
| MySQL | JSON | 
| Teradata | JSON | 
| Oracle | String | 
| Presto | String | 
| Snowflake | Semistructured data type object and array | 
| Hive | String | 

## Syntax


```
parseJson(fieldName, path)
```

## Arguments


 *fieldName*   
The field containing the JSON object that you want to parse.

 *path*   
The path to the data element you want to parse from the JSON object. Only letters, numbers, and blank spaces are supported in the path argument. Valid path syntax includes:  
+ *$* – Root object
+ *.* – Child operator
+ *[ ]* – Subscript operator for array

## Return type


String

## Example


The following example evaluates incoming JSON to retrieve a value for item quantity. By using this during data preparation, you can create a table out of the JSON.

```
parseJson({jsonField}, "$.items.qty")
```

The following shows the JSON.

```
{
    "customer": "John Doe",
    "items": {
        "product": "Beer",
        "qty": 6
    },
    "list1": [
        "val1",
        "val2"
    ],
    "list2": [
        {
            "list21key1": "list1value1"
        }
    ]
}
```

For this example, the following value is returned.

```
6
```

## Example


The following example evaluates `JSONObject1` to extract the first key value pair (KVP), labeled `"State"`, and assign the value to the calculated field that you are creating.

```
parseJson(JSONObject1, "$.State")
```

The following are the given field values.

```
JSONObject1
-----------
{"State":"New York","Product":"Produce","Date Sold":"1/16/2018","Sales Amount":"$3423.39"}
{"State":"North Carolina","Product":"Bakery Products","Date Sold":"2/1/2018","Sales Amount":"$3226.42"}
{"State":"Utah","Product":"Water","Date Sold":"4/24/2018","Sales Amount":"$7001.52"}
```

For these field values, the following rows are returned.

```
New York
North Carolina
Utah
```
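The path traversal maps naturally onto Python's `json` module, where `$.a.b` corresponds to following the keys `"a"` then `"b"` (a sketch of the semantics, not Quick Sight's implementation):

```python
import json

def parse_json(field_value, *path):
    """Rough analogue of parseJson: follow a sequence of keys into a JSON document."""
    node = json.loads(field_value)
    for step in path:
        node = node[step]
    return str(node)  # parseJson returns a string

order = '{"customer": "John Doe", "items": {"product": "Beer", "qty": 6}}'
parse_json(order, "items", "qty")   # -> "6"

row = '{"State":"New York","Product":"Produce","Sales Amount":"$3423.39"}'
parse_json(row, "State")            # -> "New York"
```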

# Replace

`replace` replaces part of a string with another string that you specify. 

## Syntax


```
replace(expression, substring, replacement)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

 *substring*   
The set of characters in *expression* that you want to replace. The substring can occur one or more times in *expression*.

 *replacement*   
The string you want to have substituted for *substring*.

## Return type


String

## Example


The following example replaces the substring 'and' with 'or'.

```
replace('1 and 2 and 3', 'and', 'or')
```

The following string is returned.

```
1 or 2 or 3
```

# Right

`right` returns the rightmost characters from a string, including spaces. You specify the number of characters to be returned.

## Syntax


```
right(expression, limit)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

 *limit*   
The number of characters to be returned from *expression*, starting from the last character in the string.

## Return type


String

## Example


The following example returns the last 12 characters from a string.

```
right('Seattle Store#14', 12)
```

The following value is returned.

```
tle Store#14
```

# Round


`round` rounds a decimal value to the closest integer if no scale is specified, or to the closest decimal place if scale is specified.

## Syntax


```
round(decimal, [scale])
```

## Arguments


 *decimal*   
A field that uses the decimal data type, a literal value like **17.62**, or a call to another function that outputs a decimal.

 *scale*   
(Optional) The number of decimal places to use for the return values.

## Return type



| Operand | Return type in the legacy data preparation experience | Return type in the new data preparation experience | 
| --- | --- | --- | 
|  INT  |  DECIMAL(FIXED)  |  DECIMAL(FIXED)  | 
|  DECIMAL(FIXED)  |  DECIMAL(FIXED)  |  DECIMAL(FIXED)  | 
|  DECIMAL(FLOAT)  |  DECIMAL(FIXED)  |  DECIMAL(FLOAT)  | 

## Example


The following example rounds a decimal field to the closest second decimal place.

```
round(salesAmount, 2)
```

The following are the given field values.

```
20.1307
892.0388
57.5447
```

For these field values, the following values are returned.

```
20.13
892.04
57.54
```
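Python's built-in `round` uses banker's rounding, so a closer sketch of "round to the closest decimal place" uses the `decimal` module (the half-up tie-breaking here is an assumption, not confirmed Quick Sight behavior):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(value, scale):
    """Round to `scale` decimal places, with halves rounded away from zero."""
    quantum = Decimal("1").scaleb(-scale)  # scale=2 -> Decimal("0.01")
    return Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP)

values = ["20.1307", "892.0388", "57.5447"]
rounded = [round_half_up(v, 2) for v in values]
# rounded == [Decimal("20.13"), Decimal("892.04"), Decimal("57.54")]
```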

# Rtrim

`rtrim` removes trailing blank space from a string. 

## Syntax


```
rtrim(expression)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

## Return type


String

## Example


The following example removes the trailing spaces from a string.

```
rtrim('Seattle Store #14   ')
```

The following value is returned.

```
Seattle Store #14
```

# Split

`split` splits a string into an array of substrings, based on a delimiter that you choose, and returns the item specified by the position.

You can only add `split` to a calculated field during data preparation, not to an analysis. This function is not supported in direct queries to Microsoft SQL Server.

## Syntax


```
split(expression, delimiter, position)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street;1402 35th Ave;1818 Elm Ct;11 Janes Lane'**, or a call to another function that outputs a string.

 *delimiter*   
The character that delimits where the string is broken into substrings. For example, `split('one|two|three', '|', 2)` splits the string into the following substrings.  

```
one
two
three
```
If you choose `position = 2`, `split` returns `'two'`.

 *position*   
(Required) The position of the item to return from the array. The position of the first item in the array is 1.

## Return type


String array

## Example


The following example splits a string into an array, using the semicolon character (;) as the delimiter, and returns the third element of the array.

```
split('123 Test St;1402 35th Ave;1818 Elm Ct;11 Janes Lane', ';', 3)
```

The following item is returned.

```
1818 Elm Ct
```

This function skips items containing null values or empty strings. 
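The split-then-pick behavior can be sketched in Python with `str.split` and the documented 1-based position (an illustrative analogue, not Quick Sight's implementation):

```python
def split_part(expression, delimiter, position):
    """Split on `delimiter` and return the item at the 1-based `position`."""
    return expression.split(delimiter)[position - 1]

addresses = '123 Test St;1402 35th Ave;1818 Elm Ct;11 Janes Lane'
split_part(addresses, ';', 3)   # -> '1818 Elm Ct'
```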

# Sqrt


`sqrt` returns the square root of a given expression. 

## Syntax


```
sqrt(expression)
```

## Arguments


 *expression*   
The expression must be numeric. It can be a field name, a literal value, or another function. 
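Since this entry has no worked example, here is a minimal Python sketch of the same operation (illustrative only; the field value is made up):

```python
import math

# Illustrative analogue of sqrt(expression) on a numeric field value
area = 144.0
side = math.sqrt(area)   # -> 12.0
```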

# startsWith

`startsWith` evaluates whether the expression starts with a substring that you specify. If the expression starts with the substring, `startsWith` returns true; otherwise, it returns false.

## Syntax


```
startsWith(expression, substring, string-comparison-mode)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

 *substring*   
The set of characters to check against the *expression*. The substring can occur one or more times in the *expression*.

 *string-comparison-mode*   
(Optional) Specifies the string comparison mode to use:  
+ `CASE_SENSITIVE` – String comparisons are case-sensitive. 
+ `CASE_INSENSITIVE` – String comparisons are case-insensitive.
If omitted, this value defaults to `CASE_SENSITIVE`.

## Return type


Boolean

## Examples


### Default case-sensitive example


The following case-sensitive example evaluates whether `state_nm` starts with **New**.

```
startsWith(state_nm, "New")
```

The following are the given field values.

```
New York
new york
```

For these field values, the following values are returned.

```
true
false
```

### Case-insensitive example


The following case-insensitive example evaluates whether `state_nm` starts with **new**.

```
startsWith(state_nm, "new", CASE_INSENSITIVE)
```

The following are the given field values.

```
New York
new york
```

For these field values, the following values are returned.

```
true
true
```

### Example with conditional statements


The `startsWith` function can be used as the conditional statement within the following If functions: [avgIf](https://docs.amazonaws.cn/quicksight/latest/user/avgIf-function.html), [minIf](https://docs.amazonaws.cn/quicksight/latest/user/minIf-function.html), [distinct\_countIf](https://docs.amazonaws.cn/quicksight/latest/user/distinct_countIf-function.html), [countIf](https://docs.amazonaws.cn/quicksight/latest/user/countIf-function.html), [maxIf](https://docs.amazonaws.cn/quicksight/latest/user/maxIf-function.html), [medianIf](https://docs.amazonaws.cn/quicksight/latest/user/medianIf-function.html), [stdevIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevIf-function.html), [stdevpIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevpIf-function.html), [sumIf](https://docs.amazonaws.cn/quicksight/latest/user/sumIf-function.html), [varIf](https://docs.amazonaws.cn/quicksight/latest/user/varIf-function.html), and [varpIf](https://docs.amazonaws.cn/quicksight/latest/user/varpIf-function.html). 

The following example sums `Sales` only if `state_nm` starts with **New**.

```
sumIf(Sales,startsWith(state_nm, "New"))
```

### Does NOT contain example


The conditional `NOT` operator can be used to evaluate if the expression does not start with the specified substring. 

```
NOT(startsWith(state_nm, "New"))
```

### Example using numeric values


Numeric values can be used in the expression or substring arguments by applying the `toString` function.

```
startsWith(state_nm, toString(5) )
```
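The two comparison modes can be sketched in Python with `str.startswith` and `str.casefold` (an illustrative analogue, not Quick Sight's implementation):

```python
def starts_with(expression, substring, case_sensitive=True):
    """Rough analogue of startsWith with an optional case-insensitive mode."""
    if not case_sensitive:
        # casefold() gives aggressive lowercasing for case-insensitive matching
        expression, substring = expression.casefold(), substring.casefold()
    return expression.startswith(substring)

states = ["New York", "new york"]
[starts_with(s, "New") for s in states]                        # -> [True, False]
[starts_with(s, "new", case_sensitive=False) for s in states]  # -> [True, True]
```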

# Strlen

`strlen` returns the number of characters in a string, including spaces.

## Syntax


```
strlen(expression)
```

## Arguments


 *expression*   
An expression can be the name of a field that uses the string data type like **address1**, a literal value like **'Unknown'**, or another function like `substring(field_name,0,5)`.

## Return type


Integer

## Example


The following example returns the length of the specified string.

```
strlen('1421 Main Street')
```

The following value is returned.

```
16
```

# Substring

`substring` returns the characters in a string, starting at the location specified by the *start* argument and proceeding for the number of characters specified by the *length* argument. 

## Syntax


```
substring(expression, start, length)
```

## Arguments


 *expression*   
An expression can be the name of a field that uses the string data type like **address1**, a literal value like **'Unknown'**, or another function like `substring(field_name,1,5)`.

 *start*   
The character location to start from. *start* is inclusive, so the character at the starting position is the first character in the returned value. The minimum value for *start* is 1. 

 *length*   
The number of additional characters to include after *start*. *length* is inclusive of *start*, so the last character returned is (*length* - 1) after the starting character.

## Return type


String

## Example


The following example returns the 13th through 19th characters in a string. The beginning of the string is index 1, so you begin counting at the first character.

```
substring('Fantasy and Science Fiction',13,7)
```

The following value is returned.

```
Science
```
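The 1-based, length-inclusive indexing can be sketched against Python's 0-based slicing (an illustrative analogue, not Quick Sight's implementation):

```python
def substring(expression, start, length):
    """1-based, length-inclusive substring, mirroring the calculated-field semantics."""
    # Quick Sight's start=1 corresponds to Python index 0
    return expression[start - 1:start - 1 + length]

substring('Fantasy and Science Fiction', 13, 7)   # -> 'Science'
```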

# switch


`switch` compares a *condition-expression* with the literal labels within a set of label and *return-expression* pairings. It then returns the *return-expression* corresponding to the first label that equals the *condition-expression*. If no label equals the *condition-expression*, `switch` returns the *default-expression*. Every *return-expression* and the *default-expression* must have the same data type.

## Syntax


```
switch(condition-expression, label-1, return-expression-1 [, label-n, return-expression-n ...], 
        default-expression)
```

## Arguments


`switch` requires one or more *label*, *return-expression* pairings, and exactly one *default-expression* argument. 

 *condition-expression*   
The expression to be compared with the label-literals. It can be a field name like `address`, a literal value like '`Unknown`', or another scalar function like `toString(salesAmount)`. 

 *label*   
The literal to be compared with the *condition-expression* argument. All of the labels must have the same data type as the *condition-expression* argument. `switch` accepts up to 5000 labels. 

 *return-expression*   
The expression to return if the value of its label equals the value of the *condition-expression*. It can be a field name like `address`, a literal value like '`Unknown`', or another scalar function like `toString(salesAmount)`. All of the *return-expression* arguments must have the same data type as the *default-expression*.

 *default-expression*   
The expression to return if none of the label arguments equals the value of the *condition-expression*. It can be a field name like `address`, a literal value like '`Unknown`', or another scalar function like `toString(salesAmount)`. The *default-expression* must have the same data type as all of the *return-expression* arguments.

## Return type


`switch` returns a value of the same data type as the values in *return-expression*. All data returned by *return-expression* and *default-expression* must be of the same data type or be converted to the same data type. 

## General Examples


The following example returns the Amazon Web Services Region code for the input Region name. 

```
switch(region_name, 
               "US East (N. Virginia)", "us-east-1", 
               "Europe (Ireland)", "eu-west-1", 
               "US West (N. California)", "us-west-1", 
               "other regions")
```

The following are the given field values.

```
"US East (N. Virginia)"
        "US West (N. California)"
        "Asia Pacific (Tokyo)"
```

For these field values, the following values are returned.

```
"us-east-1"
        "us-west-1"
        "other regions"
```
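The label-lookup-with-default semantics map cleanly onto a Python dictionary with `dict.get` (an illustrative analogue, not Quick Sight's implementation):

```python
REGION_CODES = {
    "US East (N. Virginia)": "us-east-1",
    "Europe (Ireland)": "eu-west-1",
    "US West (N. California)": "us-west-1",
}

def region_code(region_name):
    """Analogue of switch: match a label, or fall back to the default expression."""
    return REGION_CODES.get(region_name, "other regions")

names = ["US East (N. Virginia)", "US West (N. California)", "Asia Pacific (Tokyo)"]
codes = [region_code(n) for n in names]
# codes == ["us-east-1", "us-west-1", "other regions"]
```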

## Use switch to replace `ifelse`


The following `ifelse` expression is equivalent to the previous example. When `ifelse` only evaluates whether the values of one field equal different literal values, using `switch` instead is the better choice.

```
ifelse(region_name = "US East (N. Virginia)", "us-east-1", 
               region_name = "Europe (Ireland)", "eu-west-1", 
               region_name = "US West (N. California)", "us-west-1", 
               "other regions")
```

## Expression as return value


The following example uses expressions in *return-expressions*:

```
switch({origin_city_name}, 
               "Albany, NY", {arr_delay} + 20, 
               "Alexandria, LA", {arr_delay} - 10,
               "New York, NY", {arr_delay} * 2, 
               {arr_delay})
```

The preceding example changes the expected delay time for each flight from a particular city.

![\[An image of the results of the function example, shown in table form.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/switch-function-example.png)


# toLower

`toLower` formats a string in all lowercase. `toLower` skips rows containing null values.

## Syntax


```
toLower(expression)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

## Return type


String

## Example


The following example converts a string value into lowercase.

```
toLower('Seattle Store #14')
```

The following value is returned.

```
seattle store #14
```

# toString

`toString` formats the input expression as a string. `toString` skips rows containing null values.

## Syntax


```
toString(expression)
```

## Arguments


 *expression*   
 An expression can be a field of any data type, a literal value like **14.62**, or a call to another function that returns any data type.

## Return type


String

## Example


The following example returns the values from `payDate` (which uses the `date` data type) as strings.

```
toString(payDate)
```

The following are the given field values.

```
payDate
--------
1992-11-14T00:00:00.000Z
2012-10-12T00:00:00.000Z
1973-04-08T00:00:00.000Z
```

For these field values, the following rows are returned.

```
1992-11-14T00:00:00.000Z
2012-10-12T00:00:00.000Z
1973-04-08T00:00:00.000Z
```

# toUpper

`toUpper` formats a string in all uppercase. `toUpper` skips rows containing null values.

## Syntax


```
toUpper(expression)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

## Return type


String

## Example


The following example converts a string value into uppercase.

```
toUpper('Seattle Store #14')
```

The following value is returned.

```
SEATTLE STORE #14
```

# trim

`trim` removes both preceding and following blank space from a string. 

## Syntax


```
trim(expression)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

## Return type


String

## Example


The following example removes the leading and trailing spaces from a string.

```
trim('   Seattle Store #14   ')
```

The following value is returned.

```
Seattle Store #14
```

# truncDate


`truncDate` returns a date value that represents a specified portion of a date. For example, requesting the year portion of the value 2012-09-02T00:00:00.000Z returns 2012-01-01T00:00:00.000Z. Specifying a time-related period for a date that doesn't contain time information returns the initial date value unchanged.

## Syntax


```
truncDate('period', date)
```

## Arguments


 *period*   
The period of the date that you want returned. Valid periods are as follows:  
+ YYYY: This returns the year portion of the date.
+ Q: This returns the date of the first day of the quarter that the date belongs to. 
+ MM: This returns the month portion of the date.
+ DD: This returns the day portion of the date.
+ WK: This returns the week portion of the date. The week starts on Sunday in Amazon Quick Sight.
+ HH: This returns the hour portion of the date.
+ MI: This returns the minute portion of the date.
+ SS: This returns the second portion of the date.
+ MS: This returns the millisecond portion of the date.

 *date*   
A date field or a call to another function that outputs a date.

## Return type


Date

## Example


The following example returns a date representing the month of the order date.

```
truncDate('MM', orderDate)
```

The following are the given field values.

```
orderDate      
=========
2012-12-14T00:00:00.000Z  
2013-12-29T00:00:00.000Z
2012-11-15T00:00:00.000Z
```

For these field values, the following values are returned.

```
2012-12-01T00:00:00.000Z
2013-12-01T00:00:00.000Z
2012-11-01T00:00:00.000Z
```
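The month truncation can be sketched in Python by resetting the day and time fields to the start of the month (an illustrative analogue, not Quick Sight's implementation):

```python
from datetime import datetime

def trunc_to_month(d):
    """Analogue of truncDate('MM', date): reset day and time to the start of the month."""
    return d.replace(day=1, hour=0, minute=0, second=0, microsecond=0)

order_dates = [datetime(2012, 12, 14), datetime(2013, 12, 29), datetime(2012, 11, 15)]
truncated = [trunc_to_month(d) for d in order_dates]
# truncated[0] == datetime(2012, 12, 1)
```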

# Aggregate functions


Aggregate functions are only available during analysis and visualization. Each of these functions returns values grouped by the chosen dimension or dimensions. For each aggregation, there is also a conditional aggregation. These perform the same type of aggregation, based on a condition.

When a calculated field formula contains an aggregation, it becomes a custom aggregation. To make sure that your data is accurately displayed, Amazon Quick Sight applies the following rules:
+ Custom aggregations can't contain nested aggregate functions. For example, this formula doesn't work: `sum(avg(x)/avg(y))`. However, nesting nonaggregated functions inside or outside aggregate functions does work. For example, `ceil(avg(x))` works. So does `avg(ceil(x))`.
+ Custom aggregations can't contain both aggregated and nonaggregated fields, in any combination. For example, this formula doesn't work: `Sum(sales)+quantity`.
+ Filter groups can't contain both aggregated and nonaggregated fields.
+ Custom aggregations can't be converted to a dimension. They also can't be dropped into the field well as a dimension.
+ In a pivot table, custom aggregations can't be added to table calculations.
+ Scatter plots with custom aggregations need at least one dimension under **Group/Color** in the field wells.

For more information about supported functions and operators, see [Calculated field function and operator reference for Amazon Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/calculated-field-reference.html). 

The aggregate functions for calculated fields in Amazon Quick Sight include the following.

**Topics**
+ [

# avg
](avg-function.md)
+ [

# avgIf
](avgIf-function.md)
+ [

# count
](count-function.md)
+ [

# countIf
](countIf-function.md)
+ [

# distinct\_count
](distinct_count-function.md)
+ [

# distinct\_countIf
](distinct_countIf-function.md)
+ [

# max
](max-function.md)
+ [

# maxIf
](maxIf-function.md)
+ [

# median
](median-function.md)
+ [

# medianIf
](medianIf-function.md)
+ [

# min
](min-function.md)
+ [

# minIf
](minIf-function.md)
+ [

# percentile
](percentile-function.md)
+ [

# percentileCont
](percentileCont-function.md)
+ [

# percentileDisc (percentile)
](percentileDisc-function.md)
+ [

# periodToDateAvg
](periodToDateAvg-function.md)
+ [

# periodToDateCount
](periodToDateCount-function.md)
+ [

# periodToDateMax
](periodToDateMax-function.md)
+ [

# periodToDateMedian
](periodToDateMedian-function.md)
+ [

# periodToDateMin
](periodToDateMin-function.md)
+ [

# periodToDatePercentile
](periodToDatePercentile-function.md)
+ [

# periodToDatePercentileCont
](periodToDatePercentileCont-function.md)
+ [

# periodToDateStDev
](periodToDateStDev-function.md)
+ [

# periodToDateStDevP
](periodToDateStDevP-function.md)
+ [

# periodToDateSum
](periodToDateSum-function.md)
+ [

# periodToDateVar
](periodToDateVar-function.md)
+ [

# periodToDateVarP
](periodToDateVarP-function.md)
+ [

# stdev
](stdev-function.md)
+ [

# stdevp
](stdevp-function.md)
+ [

# stdevIf
](stdevIf-function.md)
+ [

# stdevpIf
](stdevpIf-function.md)
+ [

# sum
](sum-function.md)
+ [

# sumIf
](sumIf-function.md)
+ [

# var
](var-function.md)
+ [

# varIf
](varIf-function.md)
+ [

# varp
](varp-function.md)
+ [

# varpIf
](varpIf-function.md)

# avg


The `avg` function averages the set of numbers in the specified measure, grouped by the chosen dimension or dimensions. For example, `avg(salesAmount)` returns the average for that measure grouped by the (optional) chosen dimension.

## Syntax


```
avg(decimal, [group-by level])
```

## Arguments


 *decimal*   
The argument must be a measure field; literal values don't work. Null values are omitted from the results.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example calculates the average sales.

```
avg({Sales})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the average sales at the Country level, but not across other dimensions (Region or Product) in the visual.

```
avg({Sales}, [{Country}])
```

![\[Average sales numbers are aggregated only at the country level.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/avg-function-example.png)
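Conceptually, the group-by level computes the aggregate within each value of the named dimension. A rough plain-Python sketch of `avg({Sales}, [{Country}])` (illustrative only; the row data and function name are assumptions):

```python
from collections import defaultdict

def avg_at_level(rows, measure, level):
    """Analogue of avg(measure, [level]): average the measure within each level value."""
    sums, counts = defaultdict(float), defaultdict(int)
    for row in rows:
        if row[measure] is not None:      # null values are omitted
            sums[row[level]] += row[measure]
            counts[row[level]] += 1
    return {key: sums[key] / counts[key] for key in sums}

rows = [
    {"Country": "US", "Region": "West", "Sales": 100.0},
    {"Country": "US", "Region": "East", "Sales": 300.0},
    {"Country": "UK", "Region": "North", "Sales": 50.0},
]
avg_at_level(rows, "Sales", "Country")   # -> {"US": 200.0, "UK": 50.0}
```

Note that the Region dimension does not affect the result; only the level in square brackets does.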


# avgIf


Based on a conditional statement, the `avgIf` function averages the set of numbers in the specified measure, grouped by the chosen dimension or dimensions. For example, `avgIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` returns the average for that measure grouped by the (optional) chosen dimension, if the condition evaluates to true.

## Syntax


```
avgIf(dimension or measure, condition) 
```

## Arguments


 *decimal*   
The argument must be a measure field; literal values don't work. Null values are omitted from the results.

 *condition*   
One or more conditions in a single statement.

# count


The `count` function calculates the number of values in a dimension or measure, grouped by the chosen dimension or dimensions. For example, `count(product type)` returns the total number of product types grouped by the (optional) chosen dimension, including any duplicates. The `count(sales)` function returns the total number of sales completed grouped by the (optional) chosen dimension, for example salesperson.

## Syntax


```
count(dimension or measure, [group-by level])
```

## Arguments


 *dimension or measure*   
The argument must be a measure or dimension field; literal values don't work. Null values are omitted from the results.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example calculates the count of sales by a specified dimension in the visual. In this example, the count of sales by month are shown.

```
count({Sales})
```

![\[The count of sales by month.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/count-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the count of sales at the Country level, but not across other dimensions (Region or Product) in the visual.

```
count({Sales}, [{Country}])
```

![\[Count of sales are aggregated only at the country level.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/count-function-example2.png)


# countIf


Based on a conditional statement, the `countIf` function calculates the number of values in a dimension or measure, grouped by the chosen dimension or dimensions.

## Syntax


```
countIf(dimension or measure, condition)
```

## Arguments


 *dimension or measure*   
The argument must be a measure or a dimension. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *condition*   
One or more conditions in a single statement.

## Return type


Integer

## Example


The following function returns a count of the sales transactions (`Revenue`) that meet the conditions, including any duplicates. 

```
countIf (
    Revenue,
    # Conditions
        CalendarDay >= ${BasePeriodStartDate} AND 
        CalendarDay <= ${BasePeriodEndDate} AND 
        SourcingType <> 'Indirect'
)
```
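The conditional count behaves like a filter followed by a plain count. A minimal Python sketch, using invented rows and stand-in values for the date parameters:

```python
# Invented transaction rows; the names mirror the expression above but are illustrative.
rows = [
    {"Revenue": 120, "CalendarDay": "2024-01-05", "SourcingType": "Direct"},
    {"Revenue": 80,  "CalendarDay": "2024-01-20", "SourcingType": "Indirect"},
    {"Revenue": 50,  "CalendarDay": "2024-02-02", "SourcingType": "Direct"},
    {"Revenue": None, "CalendarDay": "2024-01-10", "SourcingType": "Direct"},
]

# Stand-ins for ${BasePeriodStartDate} and ${BasePeriodEndDate} (ISO strings compare correctly).
start, end = "2024-01-01", "2024-01-31"

matches = [
    r for r in rows
    if r["Revenue"] is not None                # nulls are omitted
    and start <= r["CalendarDay"] <= end
    and r["SourcingType"] != "Indirect"
]
print(len(matches))   # 1
```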

# distinct\_count


The `distinct_count` function calculates the number of distinct values in a dimension or measure, grouped by the chosen dimension or dimensions. For example, `distinct_count(product type)` returns the total number of unique product types grouped by the (optional) chosen dimension, without any duplicates. The `distinct_count(ship date)` function returns the total number of dates when products were shipped grouped by the (optional) chosen dimension, for example region.

## Syntax


```
distinct_count(dimension or measure, [group-by level])
```

## Arguments


 *dimension or measure*   
The argument must be a measure or a dimension. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Example


The following example calculates the total number of dates when products were ordered grouped by the (optional) chosen dimension in the visual, for example region.

```
distinct_count({Order Date})
```

![\[The total number of dates when products were ordered in each region.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/distinct_count-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the distinct count of order dates at the Country level, but not across other dimensions (Region) in the visual.

```
distinct_count({Order Date}, [Country])
```

![\[The total number of dates when products were ordered in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/distinct_count-function-example2.png)


# distinct\_countIf


Based on a conditional statement, the `distinct_countIf` function calculates the number of distinct values in a dimension or measure, grouped by the chosen dimension or dimensions. For example, `distinct_countIf(product type)` returns the total number of unique product types grouped by the (optional) chosen dimension, without any duplicates. The `distinct_countIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` function returns the total number of dates when products were shipped grouped by the (optional) chosen dimension, for example region, if the condition evaluates to true.

## Syntax


```
distinct_countIf(dimension or measure, condition)
```

## Arguments


 *dimension or measure*   
The argument must be a measure or a dimension. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *condition*   
One or more conditions in a single statement.

# max


The `max` function returns the maximum value of the specified measure or date, grouped by the chosen dimension or dimensions. For example, `max(sales goal)` returns the maximum sales goals grouped by the (optional) chosen dimension.

## Syntax


```
max(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure or a date. Null values are omitted from the results. Literal values don't work. The argument must be a field.  
Maximum dates work only in the **Value** field well of tables and pivot tables. 

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the max sales value for each region. It is compared to the total, minimum, and median sales values.

```
max({Sales})
```

![\[The maximum sales value for each region.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/min-max-median-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the max sales at the Country level, but not across other dimensions (Region) in the visual.

```
max({Sales}, [Country])
```

![\[The maximum sales value in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/max-function-example2.png)


# maxIf


Based on a conditional statement, the `maxIf` function returns the maximum value of the specified measure, grouped by the chosen dimension or dimensions. For example, `maxIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` returns the maximum sales goals grouped by the (optional) chosen dimension, if the condition evaluates to true.

## Syntax


```
maxIf(measure, condition)
```

## Arguments


 *measure*   
The argument must be a measure. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *condition*   
One or more conditions in a single statement.

# median


The `median` aggregation returns the median value of the specified measure, grouped by the chosen dimension or dimensions. For example, `median(revenue)` returns the median revenue grouped by the (optional) chosen dimension. 

## Syntax


```
median(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the median sales value for each region. It is compared to the total, maximum, and minimum sales.

```
median({Sales})
```

![\[The median sales value for each region.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/min-max-median-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the median sales at the Country level, but not across other dimensions (Region) in the visual.

```
median({Sales}, [Country])
```

![\[The median sales value in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/median-function-example2.png)


# medianIf


Based on a conditional statement, the `medianIf` aggregation returns the median value of the specified measure, grouped by the chosen dimension or dimensions. For example, `medianIf(Revenue,SaleDate >= ${BasePeriodStartDate} AND SaleDate <= ${BasePeriodEndDate})` returns the median revenue grouped by the (optional) chosen dimension, if the condition evaluates to true. 

## Syntax


```
medianIf(measure, condition)
```

## Arguments


 *measure*   
The argument must be a measure. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *condition*   
One or more conditions in a single statement.

# min


The `min` function returns the minimum value of the specified measure or date, grouped by the chosen dimension or dimensions. For example, `min(return rate)` returns the minimum rate of returns grouped by the (optional) chosen dimension.

## Syntax


```
min(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure or a date. Null values are omitted from the results. Literal values don't work. The argument must be a field.  
Minimum dates work only in the **Value** field well of tables and pivot tables. 

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the minimum sales value for each region. It is compared to the total, max, and median sales.

```
min({Sales})
```

![\[The minimum sales value for each region.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/min-max-median-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the minimum sales at the Country level, but not across other dimensions (Region) in the visual.

```
min({Sales}, [Country])
```

![\[The minimum sales value in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/min-function-example2.png)


# minIf


Based on a conditional statement, the `minIf` function returns the minimum value of the specified measure, grouped by the chosen dimension or dimensions. For example, `minIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` returns the minimum rate of returns grouped by the (optional) chosen dimension, if the condition evaluates to true.

## Syntax


```
minIf(measure, condition)
```

## Arguments


 *measure*   
The argument must be a measure. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *condition*   
One or more conditions in a single statement.

# percentile


The `percentile` function calculates the percentile of the values in a measure, grouped by the dimension that's in the field well. There are two varieties of percentile calculation available in Amazon Quick Sight:
+ [percentileCont](https://docs.amazonaws.cn/quicksight/latest/user/percentileCont-function.html) uses linear interpolation to determine the result.
+ [percentileDisc (percentile)](https://docs.amazonaws.cn/quicksight/latest/user/percentileDisc-function.html) uses actual values to determine the result. 

The `percentile` function is an alias of `percentileDisc`.

# percentileCont


The `percentileCont` function calculates percentile based on a continuous distribution of the numbers in the measure. It uses the grouping and sorting that are applied in the field wells. It answers questions like: What values are representative of this percentile? To return an exact percentile value that might not be present in your dataset, use `percentileCont`. To return the nearest percentile value that is present in your dataset, use `percentileDisc` instead.

## Syntax


```
percentileCont(measure, percentile, [group-by level])
```

## Arguments


 *measure*   
Specifies a numeric value to use to compute the percentile. The argument must be a measure or metric. Nulls are ignored in the calculation. 

 *percentile*   
The percentile value can be any numeric constant 0–100. A percentile value of 50 computes the median value of the measure. 

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Returns


The result of the function is a number. 

## Usage notes


The `percentileCont` function calculates a result based on a continuous distribution of the values from a specified measure. The result is computed by linear interpolation between the values after ordering them based on settings in the visual. It's different from `percentileDisc`, which simply returns a value from the set of values that are aggregated over. The result from `percentileCont` might or might not exist in the values from the specified measure.
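The interpolation model can be sketched in a few lines of Python. This assumes the common definition where the percentile rank falls at `(p/100) * (n - 1)` in the sorted values; Quick Sight's exact internals are not documented here, so treat it as an approximation:

```python
def percentile_cont(values, p):
    """Continuous (linear-interpolation) percentile; p is 0-100.
    Nulls are assumed to be removed by the caller."""
    xs = sorted(values)
    if len(xs) == 1:
        return xs[0]
    rank = (p / 100) * (len(xs) - 1)   # fractional position in the sorted list
    lo = int(rank)
    frac = rank - lo
    if frac == 0:
        return xs[lo]
    return xs[lo] + frac * (xs[lo + 1] - xs[lo])

# The result is interpolated, so it might not exist in the data.
print(percentile_cont([10, 20, 30, 40], 50))   # 25.0
```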

## Examples of percentileCont

The following examples help explain how percentileCont works.

**Example Comparing median, `percentileCont`, and `percentileDisc`**  
The following example shows the median for a dimension (category) by using the `median`, `percentileCont`, and `percentileDisc` functions. The median value is the same as the `percentileCont` value. `percentileCont` interpolates a value, which might or might not be in the dataset. However, because `percentileDisc` always displays a value that exists in the dataset, the two results might not match. The last column in this example shows the difference between the two values. The code for each calculated field is as follows:  
+ `50%Cont = percentileCont( example , 50 )`
+ `median = median( example )`
+ `50%Disc = percentileDisc( example , 50 )`
+ `Cont-Disc = percentileCont( example , 50 ) - percentileDisc( example , 50 )`
+ `example = left( category, 1 )` (To make a simpler example, we used this expression to shorten the names of categories down to their first letter.)

```
  example     median       50%Cont      50%Disc      Cont-Disc
 -------- ----------- ------------ -------------- ------------ 
 A          22.48          22.48          22.24          0.24
 B          20.96          20.96          20.95          0.01
 C          24.92          24.92          24.92          0
 D          24.935         24.935         24.92          0.015
 E          14.48          14.48          13.99          0.49
```

**Example 100th percentile as maximum**  
The following example shows a variety of `percentileCont` values for the `example` field. The calculated fields `n%Cont` are defined as `percentileCont( {example} ,n)`. The interpolated values in each column represent the numbers that fall into that percentile bucket. In some cases, the actual data values match the interpolated values. For example, the column `100%Cont` shows the same value for every row because 6783.02 is the highest number.  

```
 example      50%Cont     75%Cont      99%Cont    100%Cont  
 --------- ----------- ----------- ------------ ----------- 

 A             20.97       84.307      699.99      6783.02  
 B             20.99       88.84       880.98      6783.02  
 C             20.99       90.48       842.925     6783.02  
 D             21.38       85.99       808.49      6783.02
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the 30th percentile based on a continuous distribution of the numbers at the Country level, but not across other dimensions (Region) in the visual.

```
percentileCont({Sales}, 30, [Country])
```

![\[The percentile of sales in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/percentileCont-function-example-lac.png)


# percentileDisc (percentile)


The `percentileDisc` function calculates the percentile based on the actual numbers in `measure`. It uses the grouping and sorting that are applied in the field wells. The `percentile` function is an alias of `percentileDisc`.

Use this function to answer the following question: Which actual data points are present in this percentile? To return the nearest percentile value that is present in your dataset, use `percentileDisc`. To return an exact percentile value that might not be present in your dataset, use `percentileCont` instead. 

## Syntax


```
percentileDisc(measure, percentile, [group-by level])
```

## Arguments


 *measure*   
Specifies a numeric value to use to compute the percentile. The argument must be a measure or metric. Nulls are ignored in the calculation. 

 *percentile*   
The percentile value can be any numeric constant 0–100. A percentile value of 50 computes the median value of the measure. 

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Returns


The result of the function is a number. 

## Usage notes


`percentileDisc` is an inverse distribution function that assumes a discrete distribution model. It takes a percentile value and a sort specification and returns an element from the given set. 

For a given percentile value `P`, `percentileDisc` uses the sorted values in the visual and returns the value with the smallest cumulative distribution value that is greater than or equal to `P`. 
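That inverse-distribution rule can be sketched in Python. This is illustrative only, not Quick Sight code: it returns the smallest sorted value whose cumulative distribution is greater than or equal to `p/100`:

```python
def percentile_disc(values, p):
    """Discrete percentile: always returns an actual value from the set; p is 0-100."""
    xs = sorted(values)
    n = len(xs)
    for i, x in enumerate(xs, start=1):
        if i / n >= p / 100:   # cumulative distribution value of x
            return x
    return xs[-1]

# Unlike percentileCont, the result is always one of the input values.
print(percentile_disc([10, 20, 30, 40], 50))   # 20
```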

## Examples of percentileDisc

The following examples help explain how percentileDisc works.

**Example Comparing median, `percentileDisc`, and `percentileCont`**  
The following example shows the median for a dimension (category) by using the `percentileCont`, `percentileDisc`, and `median` functions. The median value is the same as the `percentileCont` value. `percentileCont` interpolates a value, which might or might not be in the dataset. However, because `percentileDisc` always displays the closest value that exists in the dataset, the two results might not match. The last column in this example shows the difference between the two values. The code for each calculated field is as follows:  
+ `50%Cont = percentileCont( example , 50 )`
+ `median = median( example )`
+ `50%Disc = percentileDisc( example , 50 )`
+ `Cont-Disc = percentileCont( example , 50 ) - percentileDisc( example , 50 )`
+ `example = left( category, 1 )` (To make a simpler example, we used this expression to shorten the names of categories down to their first letter.)

```
 example     median       50%Cont      50%Disc      Cont-Disc
 -------- ----------- ------------ -------------- ------------ 
 A          22.48          22.48          22.24          0.24
 B          20.96          20.96          20.95          0.01
 C          24.92          24.92          24.92          0
 D          24.935         24.935         24.92          0.015
 E          14.48          14.48          13.99          0.49
```

**Example 100th percentile as maximum**  
The following example shows a variety of `percentileDisc` values for the `example` field. The calculated fields `n%Disc` are defined as `percentileDisc( {example} ,n)`. The values in each column are actual numbers that come from the dataset.   

```
 example     50%Disc      75%Disc        99%Disc      100%Disc
 -------- ----------- ------------ -------------- ------------ 
 A            20.97        73.98         699.99       6783.02
 B            42.19        88.84         820.08       6783.02
 C            30.52        90.48         733.44       6783.02
 D            41.38        85.99         901.29       6783.02
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the 30th percentile based on the actual numbers at the Country level, but not across other dimensions (Region) in the visual.

```
percentile({Sales}, 30, [Country])
```

![\[The percentile of sales in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/percentile-function-example-lac.png)


# periodToDateAvg


The `periodToDateAvg` function averages the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateAvg(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which the periodToDate aggregation is computed.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the periodToDate aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date average fare amount per payment type for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday. Quick Sight starts the week on Sunday, which in this example is 06-27-21.

```
periodToDateAvg(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDAvgResults.png)
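The period-to-date window logic can be sketched in Python. This is illustrative only: the rows are invented, and the Sunday week start described above is hardcoded rather than read from any Quick Sight setting:

```python
from datetime import date, timedelta

def week_start(d):
    """Start of the week containing d, assuming weeks begin on Sunday."""
    return d - timedelta(days=(d.weekday() + 1) % 7)

def period_to_date_avg(rows, end_date):
    """Average 'fare' for rows from the start of end_date's week through end_date."""
    start = week_start(end_date)
    in_window = [r["fare"] for r in rows if start <= r["day"] <= end_date]
    return sum(in_window) / len(in_window)

# Invented rows; 2021-06-30 is a Wednesday, so the window starts on Sunday 2021-06-27.
rows = [
    {"day": date(2021, 6, 26), "fare": 99.0},   # previous week: excluded
    {"day": date(2021, 6, 28), "fare": 10.0},
    {"day": date(2021, 6, 30), "fare": 20.0},
]
print(period_to_date_avg(rows, date(2021, 6, 30)))   # 15.0
```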


# periodToDateCount


The `periodToDateCount` function calculates the number of values in a dimension or measure, including duplicates, for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateCount(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which the periodToDate aggregation is computed.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the periodToDate aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date count of fare amounts per payment type for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday. Quick Sight starts the week on Sunday, which in this example is 06-27-21.

```
periodToDateCount(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDCountResults.png)


# periodToDateMax


The `periodToDateMax` function returns the maximum value of the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateMax(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which the periodToDate aggregation is computed.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the periodToDate aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date maximum fare amount per payment type for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday. Quick Sight starts the week on Sunday, which in this example is 06-27-21.

```
periodToDateMax(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDMaxResults.png)


# periodToDateMedian


The `periodToDateMedian` function returns the median value of the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateMedian(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which the periodToDate aggregation is computed.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the periodToDate aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date median fare amount per payment type for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday. Quick Sight starts the week on Sunday, which in this example is 06-27-21.

```
periodToDateMedian(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDMedianResults.png)


# periodToDateMin


The `periodToDateMin` function returns the minimum value of the specified measure or date for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateMin(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which the periodToDate aggregation is computed.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the periodToDate aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date minimum fare amount per payment type for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday. Quick Sight starts the week on Sunday, which in this example is 06-27-21.

```
periodToDateMin(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDMinResults.png)


# periodToDatePercentile


The `periodToDatePercentile` function calculates the percentile based on the actual numbers in measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period. It uses the grouping and sorting that are applied in the field wells.

To return the nearest percentile value that is present in your dataset, use `periodToDatePercentile`. To return an exact percentile value that might not be present in your dataset, use `periodToDatePercentileCont` instead.

## Syntax


```
periodToDatePercentile(
	measure, 
	percentile, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *percentile*   
The percentile value can be any numeric constant 0-100. A percentile of 50 computes the median value of the measure.

 *dateTime*   
The date dimension over which the periodToDate aggregation is computed.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the periodToDate aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date 90th percentile of the fare amount per payment type for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday. Quick Sight starts the week on Sunday, which in this example is 06-27-21.

```
periodToDatePercentile(fare_amount, 90, pickupDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the return from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDPercentileResults.png)


# periodToDatePercentileCont


The `periodToDatePercentileCont` function calculates percentile based on a continuous distribution of the numbers in the measure for a given time granularity (for instance, a quarter) up to a point in time in that period. It uses the grouping and sorting that are applied in the field wells.

To return an exact percentile value that might not be present in your dataset, use `periodToDatePercentileCont`. To return the nearest percentile value that is present in your dataset, use `periodToDatePercentile` instead.

## Syntax


```
periodToDatePercentileCont(
	measure, 
	percentile, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *percentile*   
The percentile value can be any numeric constant from 0 through 100. A percentile of 50 computes the median value of the measure.

 *dateTime*   
The Date dimension over which you're computing PeriodToDate aggregations.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the periodToDate aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date 90th percentile of the fare amount per payment type, for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday, and Quick Sight begins the week on Sunday, so in this example the week starts on 06-27-21.

```
periodToDatePercentileCont(fare_amount, 90, pickupDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the return from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDContPercentileResults.png)


# periodToDateStDev


The `periodToDateStDev` function calculates the standard deviation of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time, based on a sample and relative to that period.

## Syntax


```
periodToDateStDev(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The Date dimension over which you're computing PeriodToDate aggregations.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the periodToDate aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date sample standard deviation of the fare amount per payment type, for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday, and Quick Sight begins the week on Sunday, so in this example the week starts on 06-27-21.

```
periodToDateStDev(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDStDevResults.png)


# periodToDateStDevP


The `periodToDateStDevP` function calculates the population standard deviation of the set of numbers in the specified measure, for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateStDevP(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The Date dimension over which you're computing PeriodToDate aggregations.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the periodToDate aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date population standard deviation of the fare amount per payment type, for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday, and Quick Sight begins the week on Sunday, so in this example the week starts on 06-27-21.

```
periodToDateStDevP(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDStDevPResults.png)


# periodToDateSum


The `periodToDateSum` function adds the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateSum(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The Date dimension over which you're computing PeriodToDate aggregations.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the periodToDate aggregation. It defaults to `now()` if omitted.

## Example


The following function calculates the week-to-date sum of the fare amount per payment type, for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday, and Quick Sight begins the week on Sunday, so in this example the week starts on 06-27-21.

```
periodToDateSum(fare_amount, pickUpDateTime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results for the example, with illustrations.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDSumResults.png)
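As a rough illustration of what the function computes, the following pandas sketch reproduces a week-to-date sum by hand. The rows, amounts, and column names are invented; only the windowing logic mirrors the example above.

```python
import pandas as pd

# Invented rides; the 06-26 ride falls before the Sunday week start (06-27)
# and is excluded from the window.
df = pd.DataFrame({
    "payment_type": ["card", "card", "cash", "card"],
    "pickup": pd.to_datetime(["2021-06-26", "2021-06-28",
                              "2021-06-29", "2021-06-30"]),
    "fare_amount": [10.0, 20.0, 5.0, 7.0],
})

# Week-to-date window: Sunday 06-27 through the end date 06-30, inclusive.
start, end = pd.Timestamp("2021-06-27"), pd.Timestamp("2021-06-30")
wtd = df[(df["pickup"] >= start) & (df["pickup"] <= end)]

# Sum the windowed rows per payment type, as the visual would group them.
result = wtd.groupby("payment_type")["fare_amount"].sum()
print(result.to_dict())
```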


# periodToDateVar


The `periodToDateVar` function calculates the sample variance of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time in that period.

## Syntax


```
periodToDateVar(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The Date dimension over which you're computing PeriodToDate aggregations.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the periodToDate aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date sample variance of the fare amount per payment type, for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday, and Quick Sight begins the week on Sunday, so in this example the week starts on 06-27-21.

```
periodToDateVar(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDVarResults.png)


# periodToDateVarP


The `periodToDateVarP` function calculates the population variance of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateVarP(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The Date dimension over which you're computing PeriodToDate aggregations.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the periodToDate aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date population variance of the fare amount per payment type, for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday, and Quick Sight begins the week on Sunday, so in this example the week starts on 06-27-21.

```
periodToDateVarP(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDVarPResults.png)


# stdev


The `stdev` function calculates the standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a sample.

## Syntax


```
stdev(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the standard deviation of test scores for a class, using a sample of the test scores recorded.

```
stdev({Score})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the standard deviation of test scores at the subject level, but not across other dimensions (Class) in the visual.

```
stdev({Score}, [Subject])
```

# stdevp


The `stdevp` function calculates the population standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.

## Syntax


```
stdevp(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the standard deviation of test scores for a class using all the scores recorded.

```
stdevp({Score})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the standard deviation of test scores at the subject level, but not across other dimensions (Class) in the visual using all the scores recorded.

```
stdevp({Score}, [Subject])
```
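The difference between `stdev` and `stdevp` is the usual sample-versus-population distinction. The following Python sketch shows it with the standard library; the scores are invented.

```python
import statistics

# Invented test scores for one class.
scores = [80, 85, 90, 95]

# stdev corresponds to the sample standard deviation
# (n - 1 in the denominator).
sample_sd = statistics.stdev(scores)

# stdevp corresponds to the population standard deviation
# (n in the denominator).
population_sd = statistics.pstdev(scores)

print(sample_sd, population_sd)
```

For the same data, the sample figure is always the larger of the two.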

# stdevIf


Based on a conditional statement, the `stdevIf` function calculates the standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a sample. 

## Syntax


```
stdevIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.

# stdevpIf


Based on a conditional statement, the `stdevpIf` function calculates the standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a biased population.

## Syntax


```
stdevpIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.

# sum


The `sum` function adds the set of numbers in the specified measure, grouped by the chosen dimension or dimensions. For example, `sum(profit amount)` returns the total profit amount grouped by the (optional) chosen dimension.

## Syntax


```
sum(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the sum of sales.

```
sum({Sales})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the sum of sales at the Country level, but not across other dimensions (Region and Product) in the visual.

```
sum(Sales, [Country])
```

![\[The sum of sales for each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/sum-function-example.png)


# sumIf


Based on a conditional statement, the `sumIf` function adds the set of numbers in the specified measure, grouped by the chosen dimension or dimensions. For example, `sumIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` returns the total `ProdRev` grouped by the (optional) chosen dimension, if the condition evaluates to true.

## Syntax


```
sumIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.

## Examples


The following example uses a calculated field with `sumIf` to display the sales amount if `Segment` is equal to `SMB`.

```
sumIf(Sales, Segment='SMB')
```

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/sumIfCalc.png)


The following example uses a calculated field with `sumIf` to display the sales amount if `Segment` is equal to `SMB` and `Order Date` is on or after January 1, 2022.

```
sumIf(Sales, Segment='SMB' AND {Order Date} >= '2022-01-01')
```
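The conditional logic reads like a filtered sum. The following plain-Python sketch mirrors the two examples above; the rows and values are invented.

```python
# Invented rows standing in for the dataset in the examples.
rows = [
    {"Sales": 100.0, "Segment": "SMB",        "OrderDate": "2022-03-01"},
    {"Sales": 250.0, "Segment": "Enterprise", "OrderDate": "2022-05-10"},
    {"Sales": 40.0,  "Segment": "SMB",        "OrderDate": "2021-11-20"},
]

# sumIf(Sales, Segment='SMB'): sum only the rows where the condition holds.
smb_sales = sum(r["Sales"] for r in rows if r["Segment"] == "SMB")

# sumIf(Sales, Segment='SMB' AND {Order Date} >= '2022-01-01'):
# both conditions must hold for a row to be included.
smb_2022 = sum(r["Sales"] for r in rows
               if r["Segment"] == "SMB" and r["OrderDate"] >= "2022-01-01")

print(smb_sales, smb_2022)
```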

# var


The `var` function calculates the sample variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.

## Syntax


```
var(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the variance of a sample of test scores.

```
var({Scores})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example returns the variance of a sample of test scores at the subject level, but not across other dimensions (Class) in the visual.

```
var({Scores}, [Subject])
```

# varIf


Based on a conditional statement, the `varIf` function calculates the variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a sample.

## Syntax


```
varIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.

# varp


The `varp` function calculates the population variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.

## Syntax


```
varp(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the variance of a population of test scores.

```
varp({Scores})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example returns the population variance of test scores at the subject level, but not across other dimensions (Class) in the visual.

```
varp({Scores}, [Subject])
```
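As with the standard-deviation pair, `var` and `varp` differ only in the denominator. The following standard-library sketch, with invented scores, shows the two figures side by side.

```python
import statistics

# Invented test scores.
scores = [80, 85, 90, 95]

# var corresponds to the sample variance (n - 1 in the denominator).
sample_var = statistics.variance(scores)

# varp corresponds to the population variance (n in the denominator).
population_var = statistics.pvariance(scores)

print(sample_var, population_var)
```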

# varpIf


Based on a conditional statement, the `varpIf` function calculates the variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a biased population.

## Syntax


```
varpIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.

# Table calculation functions


When you are analyzing data in a specific visual, you can apply table calculations to the current set of data to discover how dimensions influence measures or each other. *Visualized data* is your result set based on your current dataset, with all the filters, field selections, and customizations applied. To see exactly what this result set is, you can export your visual to a file. A *table calculation function* performs operations on the data to reveal relationships between fields. 

In this section, you can find a list of the functions available in table calculations that you can perform on visualized data in Amazon Quick Sight. 

To view a list of functions sorted by category, with brief definitions, see [Functions by category](https://docs.amazonaws.cn/quicksight/latest/user/functions-by-category.html). 

**Topics**
+ [

# difference
](difference-function.md)
+ [

# distinctCountOver
](distinctCountOver-function.md)
+ [

# lag
](lag-function.md)
+ [

# lead
](lead-function.md)
+ [

# percentDifference
](percentDifference-function.md)
+ [

# avgOver
](avgOver-function.md)
+ [

# countOver
](countOver-function.md)
+ [

# maxOver
](maxOver-function.md)
+ [

# minOver
](minOver-function.md)
+ [

# percentileOver
](percentileOver-function.md)
+ [

# percentileContOver
](percentileContOver-function.md)
+ [

# percentileDiscOver
](percentileDiscOver-function.md)
+ [

# percentOfTotal
](percentOfTotal-function.md)
+ [

# periodOverPeriodDifference
](periodOverPeriodDifference-function.md)
+ [

# periodOverPeriodLastValue
](periodOverPeriodLastValue-function.md)
+ [

# periodOverPeriodPercentDifference
](periodOverPeriodPercentDifference-function.md)
+ [

# periodToDateAvgOverTime
](periodToDateAvgOverTime-function.md)
+ [

# periodToDateCountOverTime
](periodToDateCountOverTime-function.md)
+ [

# periodToDateMaxOverTime
](periodToDateMaxOverTime-function.md)
+ [

# periodToDateMinOverTime
](periodToDateMinOverTime-function.md)
+ [

# periodToDateSumOverTime
](periodToDateSumOverTime-function.md)
+ [

# stdevOver
](stdevOver-function.md)
+ [

# stdevpOver
](stdevpOver-function.md)
+ [

# varOver
](varOver-function.md)
+ [

# varpOver
](varpOver-function.md)
+ [

# sumOver
](sumOver-function.md)
+ [

# denseRank
](denseRank-function.md)
+ [

# rank
](rank-function.md)
+ [

# percentileRank
](percentileRank-function.md)
+ [

# runningAvg
](runningAvg-function.md)
+ [

# runningCount
](runningCount-function.md)
+ [

# runningMax
](runningMax-function.md)
+ [

# runningMin
](runningMin-function.md)
+ [

# runningSum
](runningSum-function.md)
+ [

# firstValue
](firstValue-function.md)
+ [

# lastValue
](lastValue-function.md)
+ [

# windowAvg
](windowAvg-function.md)
+ [

# windowCount
](windowCount-function.md)
+ [

# windowMax
](windowMax-function.md)
+ [

# windowMin
](windowMin-function.md)
+ [

# windowSum
](windowSum-function.md)

# difference


The `difference` function calculates the difference between a measure based on one set of partitions and sorts, and a measure based on another. 

The `difference` function is supported for use with analyses based on SPICE and direct query datasets.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
difference
	(
	     measure 
	     ,[ sortorder_field ASC_or_DESC, ... ]
	     ,lookup_index
	     ,[ partition_field, ... ] 
	)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the difference for. 

 *sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *lookup index*   
The lookup index can be positive or negative, indicating a following row in the sort (positive) or a previous row in the sort (negative). The lookup index can be 1–2,147,483,647. For the engines MySQL, MariaDB, and Amazon Aurora MySQL-Compatible Edition, the lookup index is limited to 1.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the difference of `sum({Billed Amount})`, sorted ascending by `Customer Region`, compared to the next row, and partitioned by `Service Line`.

```
difference(
     sum( {Billed Amount} ), 
     [{Customer Region} ASC],
     1,
     [{Service Line}]
)
```

The following example calculates the difference between the current `Billed Amount` and the one in the next row, sorted by `[{Customer Region} ASC]`. The fields in the table calculation are in the field wells of the visual.

```
difference(
     sum( {Billed Amount} ), 
     [{Customer Region} ASC],
     1
)
```

The red highlights show how each amount is added ( a + b = c ) to show the difference between amounts a and c. 

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/differenceCalc.png)
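The second example above can be approximated in pandas with a shifted column. The data below is invented, and the sign convention (next-row value minus current value, so that a + b = c) is an assumption made for illustration.

```python
import pandas as pd

# Invented amounts, already aggregated and sorted ascending by region.
df = pd.DataFrame({
    "Customer Region": ["APAC", "EMEA", "US"],
    "Billed Amount": [500.0, 700.0, 650.0],
})

# A lookup index of 1 compares each row with the next one in the sort order;
# here the difference is taken as next-row value minus current value.
df["difference"] = df["Billed Amount"].shift(-1) - df["Billed Amount"]
print(df["difference"].tolist())
```

The last row has no following row to compare with, so its difference is null.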


# distinctCountOver


The `distinctCountOver` function calculates the distinct count of the operand partitioned by the specified attributes at a specified level. Supported levels are `PRE_FILTER` and `PRE_AGG`. The operand must be unaggregated.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
distinctCountOver
(
  measure or dimension field 
  ,[ partition_field, ... ]  
  ,calculation level 
)
```

## Arguments


 *measure or dimension field*   
The measure or dimension that you want to do the calculation for, for example `{Sales Amt}`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
This value defaults to `POST_AGG_FILTER` when blank. `POST_AGG_FILTER` is not a valid level for this operation and results in an error. For more information, see [Using level-aware calculations in Amazon Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example gets the distinct count of `Sales` partitioned over `City` and `State` at the `PRE_AGG` level.

```
distinctCountOver
(
  Sales, 
  [City, State], PRE_AGG
)
```
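Outside of Quick Sight, the same shape of computation is a partitioned distinct count. The pandas sketch below is a loose analogy with invented data; it ignores the `PRE_FILTER`/`PRE_AGG` evaluation-order distinction.

```python
import pandas as pd

# Invented rows; Seattle has two distinct Sales values, the rest have one.
df = pd.DataFrame({
    "State": ["WA", "WA", "WA", "OR"],
    "City":  ["Seattle", "Seattle", "Spokane", "Portland"],
    "Sales": [10, 20, 25, 10],
})

# Attach to each row the count of distinct Sales values within its
# (City, State) partition, as distinctCountOver would.
df["distinct_sales"] = (
    df.groupby(["City", "State"])["Sales"].transform("nunique")
)
print(df["distinct_sales"].tolist())
```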

# lag


The `lag` function calculates the lag (previous) value for a measure based on specified partitions and sorts.

`lag` is supported for use with analyses based on SPICE and direct query datasets.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
lag
(
 measure
 ,[ sortorder_field ASC_or_DESC, ... ]
 ,lookup_index
 ,[ partition_field, ... ]
)
```

## Arguments


*measure*   
The measure that you want to get the lag for. This can include an aggregate, for example `sum({Sales Amt})`.

*sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

*lookup index*   
The lookup index can be positive or negative, indicating a following row in the sort (positive) or a previous row in the sort (negative). The lookup index can be 1–2,147,483,647. For the engines MySQL, MariaDB, and Amazon Aurora MySQL-Compatible Edition, the lookup index is limited to just 1.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the previous `sum(sales)`, partitioned by the state of origin, in the ascending sort order on `cancellation_code`.

```
lag
(
     sum(Sales), 
     [cancellation_code ASC], 
     1, 
     [origin_state_nm]
)
```

The following example uses a calculated field with `lag` to display sales amount for the previous row next to the amount for the current row, sorted by `Order Date`. The fields in the table calculation are in the field wells of the visual.

```
lag(
     sum({Sales}),
     [{Order Date} ASC],
     1
)
```

The following screenshot shows the results of the example.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/lagCalc.png)


The following example uses a calculated field with `lag` to display the sales amount for the previous row next to the amount for the current row, sorted by `Order Date` partitioned by `Segment`.

```
lag
	(
		sum(Sales),
		[Order Date ASC],
		1, [Segment]
	)
```

The following screenshot shows the results of the example.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/lagCalc2.png)
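The partitioned form of the example above maps onto a grouped shift in pandas. The rows below are invented; only the sort-then-shift-within-partition pattern mirrors the calculation.

```python
import pandas as pd

# Invented sales by segment and order date.
df = pd.DataFrame({
    "Segment": ["SMB", "SMB", "Enterprise", "Enterprise"],
    "Order Date": pd.to_datetime(["2022-01-01", "2022-02-01",
                                  "2022-01-01", "2022-02-01"]),
    "Sales": [100.0, 150.0, 300.0, 400.0],
})

# lag(sum(Sales), [Order Date ASC], 1, [Segment]): within each Segment
# partition, sorted by Order Date, take the value from one row back.
df = df.sort_values(["Segment", "Order Date"])
df["lag_sales"] = df.groupby("Segment")["Sales"].shift(1)
print(df["lag_sales"].tolist())
```

The first row of each partition has no previous row, so its lag value is null.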


# lead


The `lead` function calculates the lead (following) value for a measure based on specified partitions and sorts.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
lead
(
     measure
     ,[ sortorder_field ASC_or_DESC, ... ]  
     ,lookup_index
     ,[ partition_field, ... ]
)
```

## Arguments


*measure*   
The measure that you want to get the lead for. This can include an aggregate, for example `sum({Sales Amt})`.

*sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

*lookup index*   
The lookup index can be positive or negative, indicating a following row in the sort (positive) or a previous row in the sort (negative). The lookup index can be 1–2,147,483,647. For the engines MySQL, MariaDB, and Amazon Aurora MySQL-Compatible Edition, the lookup index is limited to just 1.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the next `sum(sales)`, partitioned by the state of origin, in the ascending sort order on `cancellation_code`.

```
lead
(
     sum(sales), 
     [cancellation_code ASC], 
     1, 
     [origin_state_nm]
)
```

The following example uses a calculated field with lead to display the amount for the next row beside the amount for the current row, sorted by `Customer Segment`. The fields in the table calculation are in the field wells of the visual.

```
lead(
     sum({Billed Amount}),
     [{Customer Segment} ASC],
     1
)
```

The following screenshot shows the results of the example.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/leadCalc.png)


# percentDifference


The `percentDifference` function calculates the percentage difference between the current value and a comparison value, based on partitions, sorts, and lookup index. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
percentDifference
(
  measure 
  ,[ sortorder_field ASC_or_DESC, ... ]  
  ,lookup index
  ,[ partition_field, ... ] 
)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the percent difference for. 

 *sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *lookup index*   
The lookup index can be positive or negative, indicating a following row in the sort (positive) or a previous row in the sort (negative). The lookup index can be 1–2,147,483,647. For the engines MySQL, MariaDB, and Amazon Aurora MySQL-Compatible Edition, the lookup index is limited to just 1.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the percentage difference between the current and previous `sum(amount)`, sorted ascending by `sum(amount)` and partitioned by `State`.

```
percentDifference
(
  sum(amount), 
  [sum(amount) ASC],
  -1, 
  [State]
)
```

The following example calculates the percentage difference between each `Billed Amount` and the next one, sorted ascending by `Customer Region`. The fields in the table calculation are in the field wells of the visual.

```
percentDifference
(
  sum( {Billed Amount} ), 
  [{Customer Region} ASC],
  1
)
```

The following screenshot shows the results of the example. The red letters show that the total `Billed Amount` for the `Customer Region` **APAC** is 24 percent less than the amount for the **EMEA** region.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/percentDifference.png)
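As a sketch of the arithmetic, the percent difference reads as the current value's change relative to the comparison value picked out by the lookup index. The helper and numbers below are illustrative, not the Quick Sight implementation.

```python
def percent_difference(current, comparison):
    """Change of current relative to comparison, as a percentage."""
    return (current - comparison) / comparison * 100

# A current value of 76 against a comparison value of 100 is 24 percent lower.
print(percent_difference(76, 100))  # -24.0
```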


# avgOver


The `avgOver` function calculates the average of a measure partitioned by a list of dimensions. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
avgOver
(
     measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

The following example shows the average `Billed Amount` over `Customer Region`. The fields in the table calculation are in the field wells of the visual.

```
avgOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```

The following screenshot shows the results of the example. With the addition of `Service Line`, the total amount billed for each is displayed, and the average of these three values displays in the calculated field.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/avgOver.png)


## Arguments


 *measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*  
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example gets the average `sum(Sales)` partitioned over `City` and `State`. 

```
avgOver
(
     sum(Sales), 
     [City, State]
)
```
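The partitioning behaves like a group-by: rows that share the partition key values contribute to one average. A minimal Python sketch with made-up data (not the Quick Sight implementation):

```python
from collections import defaultdict

# Hypothetical (city, state, sales) rows.
rows = [("Seattle", "WA", 10), ("Seattle", "WA", 30), ("Tacoma", "WA", 20)]

def avg_over(rows, key, value):
    """Average of value(row) within each partition defined by key(row)."""
    groups = defaultdict(list)
    for r in rows:
        groups[key(r)].append(value(r))
    return {k: sum(v) / len(v) for k, v in groups.items()}

print(avg_over(rows, key=lambda r: (r[0], r[1]), value=lambda r: r[2]))
# {('Seattle', 'WA'): 20.0, ('Tacoma', 'WA'): 20.0}
```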

# countOver


The `countOver` function calculates the count of a dimension or measure partitioned by a list of dimensions. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
countOver
(
  measure or dimension field 
  ,[ partition_field, ... ]  
  ,calculation level 
)
```

## Arguments


 *measure or dimension field*   
The measure or dimension that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example gets the count of `Sales` partitioned over `City` and `State`. 

```
countOver
(
  Sales, 
  [City, State]
)
```

The following example gets the count of `{County}` partitioned over `City` and `State`. 

```
countOver
(
  {County}, 
  [City, State]
)
```

The following example shows the count of `Billed Amount` over `Customer Region`. The fields in the table calculation are in the field wells of the visual.

```
countOver
(
  sum({Billed Amount}),
  [{Customer Region}]
)
```

The following screenshot shows the results of the example. Because there are no other fields involved, the count is one for each region.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/countOver1.png)


If you add additional fields, the count changes. In the following screenshot, we add `Customer Segment` and `Service Line`. Each of those fields contains three unique values, so each region's partition now contains nine rows (three segments times three service lines), and the calculated field shows 9.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/countOver2.png)


If you add the two additional fields to the partitioning fields in the calculated field, `countOver( sum({Billed Amount}), [{Customer Region}, {Customer Segment}, {Service Line}] )`, then the count is again 1 for each row.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/countOver.png)
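The partition-dependent counts described above can be reproduced with a quick Python sketch (the field values are made up, not the Quick Sight implementation):

```python
from collections import Counter
from itertools import product

# One row per combination: 3 regions x 3 segments x 3 service lines.
regions = ["APAC", "EMEA", "AMER"]
segments = ["SMB", "Mid-Market", "Enterprise"]
lines = ["Billing", "Support", "Development"]
rows = list(product(regions, segments, lines))

# Partitioned by region alone, each region's partition holds 3 x 3 = 9 rows.
by_region = Counter(region for region, _, _ in rows)
print(by_region["APAC"])  # 9

# Partitioned by all three fields, each partition holds exactly one row.
by_all_fields = Counter(rows)
print(by_all_fields[("APAC", "SMB", "Billing")])  # 1
```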


# maxOver


The `maxOver` function calculates the maximum of a measure or date partitioned by a list of dimensions. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
maxOver
(
     measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

## Arguments


 *measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example calculates the maximum `sum(Sales)`, partitioned by `City` and `State`.

```
maxOver
(
     sum(Sales), 
     [City, State]
)
```

The following example shows the maximum `Billed Amount` over `Customer Region`. The fields in the table calculation are in the field wells of the visual.

```
maxOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```

The following screenshot shows the results of the example. With the addition of `Service Line`, the total amount billed for each is displayed, and the maximum of these three values displays in the calculated field.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/maxOver.png)


# minOver


The `minOver` function calculates the minimum of a measure or date partitioned by a list of dimensions. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
minOver
(
     measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

## Arguments


*measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example calculates the minimum `sum(Sales)`, partitioned by `City` and `State`.

```
minOver
(     
     sum(Sales), 
     [City, State]
)
```

The following example shows the minimum `Billed Amount` over `Customer Region`. The fields in the table calculation are in the field wells of the visual.

```
minOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```

The following screenshot shows the results of the example. With the addition of `Service Line`, the total amount billed for each is displayed, and the minimum of these three values displays in the calculated field.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/minOver.png)


# percentileOver


The `percentileOver` function calculates the *n*th percentile of a measure partitioned by a list of dimensions. There are two varieties of the `percentileOver` calculation available in Quick Sight:
+ [percentileContOver](https://docs.amazonaws.cn/quicksight/latest/user/percentileContOver-function.html) uses linear interpolation to determine the result.
+ [percentileDiscOver](https://docs.amazonaws.cn/quicksight/latest/user/percentileDiscOver-function.html) uses actual values to determine the result. 

The `percentileOver` function is an alias of `percentileDiscOver`.

# percentileContOver


The `percentileContOver` function calculates the percentile based on a continuous distribution of the numbers in `measure`, using linear interpolation. It uses the grouping and sorting that are applied in the field wells. The result is partitioned by the specified dimension at the specified calculation level. 

Use this function to answer the following question: What is the exact value at this percentile, even if that value isn't present in your dataset? To return the nearest percentile value that is actually present in your dataset, use `percentileDiscOver` instead. 

## Syntax


```
percentileContOver (
    measure
  , percentile-n
  , [partition-by, …]
  , calculation-level
)
```

## Arguments


 *measure*   
Specifies a numeric value to use to compute the percentile. The argument must be a measure or metric. Nulls are ignored in the calculation. 

 *percentile-n*   
The percentile value can be any numeric constant 0–100. A percentile value of 50 computes the median value of the measure. 

 *partition-by*   
(Optional) One or more dimensions that you want to partition by, separated by commas. Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation-level*   
 Specifies where to perform the calculation in relation to the order of evaluation. There are three supported calculation levels:  
+ PRE_FILTER
+ PRE_AGG
+ POST_AGG_FILTER (default) – To use this calculation level, specify an aggregation on `measure`, for example `sum(measure)`.
PRE_FILTER and PRE_AGG are applied before the aggregation occurs in a visualization. For these two calculation levels, you can't specify an aggregation on `measure` in the calculated field expression. To learn more about calculation levels and when they apply, see [Order of evaluation in Amazon Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/order-of-evaluation-quicksight.html) and [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Returns


The result of the function is a number. 

## Example of percentileContOver

The following example helps explain how `percentileContOver` works.

**Example Comparing calculation levels for the median**  
The following example shows the median for a dimension (category) by using different calculation levels with the `percentileContOver` function. The percentile is 50. The dataset is filtered by a region field. The code for each calculated field is as follows:  
+ `example = left( category, 1 )` (A simplified example.)
+ `pre_agg = percentileContOver ( {Revenue} , 50 , [ example ] , PRE_AGG)`
+ `pre_filter = percentileContOver ( {Revenue} , 50 , [ example ] , PRE_FILTER) `
+ `post_agg_filter = percentileContOver ( sum ( {Revenue} ) , 50 , [ example ], POST_AGG_FILTER )`

```
example   pre_filter     pre_agg      post_agg_filter
------------------------------------------------------
0            106,728     119,667            4,117,579
1            102,898      95,946            2,307,547
2             97,807      93,963              554,570  
3            101,043     112,585            2,709,057
4             96,533      99,214            3,598,358
5            106,293      97,296            1,875,648
6             97,118      69,159            1,320,672
7            100,201      90,557              969,807
```

# percentileDiscOver


The `percentileDiscOver` function calculates the percentile based on the actual numbers in `measure`. It uses the grouping and sorting that are applied in the field wells. The result is partitioned by the specified dimension at the specified calculation level. The `percentileOver` function is an alias of `percentileDiscOver`.

Use this function to answer the following question: Which actual data points are present in this percentile? To return the nearest percentile value that is present in your dataset, use `percentileDiscOver`. To return an exact percentile value that might not be present in your dataset, use `percentileContOver` instead. 

## Syntax


```
percentileDiscOver (
     measure
   , percentile-n
   , [partition-by, …]
   , calculation-level
)
```

## Arguments


 *measure*   
Specifies a numeric value to use to compute the percentile. The argument must be a measure or metric. Nulls are ignored in the calculation. 

 *percentile-n*   
The percentile value can be any numeric constant 0–100. A percentile value of 50 computes the median value of the measure. 

 *partition-by*   
(Optional) One or more dimensions that you want to partition by, separated by commas. Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation-level*   
 Specifies where to perform the calculation in relation to the order of evaluation. There are three supported calculation levels:  
+ PRE_FILTER
+ PRE_AGG
+ POST_AGG_FILTER (default) – To use this calculation level, specify an aggregation on `measure`, for example `sum(measure)`.
PRE_FILTER and PRE_AGG are applied before the aggregation occurs in a visualization. For these two calculation levels, you can't specify an aggregation on `measure` in the calculated field expression. To learn more about calculation levels and when they apply, see [Order of evaluation in Amazon Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/order-of-evaluation-quicksight.html) and [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Returns


The result of the function is a number. 

## Example of percentileDiscOver

The following example helps explain how `percentileDiscOver` works.

**Example Comparing calculation levels for the median**  
The following example shows the median for a dimension (category) by using different calculation levels with the `percentileDiscOver` function. The percentile is 50. The dataset is filtered by a region field. The code for each calculated field is as follows:  
+ `example = left( category, 1 )` (A simplified example.)
+ `pre_agg = percentileDiscOver ( {Revenue} , 50 , [ example ] , PRE_AGG)`
+ `pre_filter = percentileDiscOver ( {Revenue} , 50 , [ example ] , PRE_FILTER) `
+ `post_agg_filter = percentileDiscOver ( sum ( {Revenue} ) , 50 , [ example ], POST_AGG_FILTER )`

```
example   pre_filter     pre_agg      post_agg_filter
------------------------------------------------------
0            106,728     119,667            4,117,579
1            102,898      95,946            2,307,547
2             97,629      92,046              554,570  
3            100,867     112,585            2,709,057
4             96,416      96,649            3,598,358
5            106,293      97,296            1,875,648
6             97,118      64,395            1,320,672
7             99,915      90,557              969,807
```

**Example The median**  
The following example calculates the median (the 50th percentile) of `Sales` partitioned by `City` and `State`.   

```
percentileDiscOver
(
  Sales, 
  50,
  [City, State]
)
```
The following example calculates the 98th percentile of `sum({Billed Amount})` partitioned by `Customer Region`. The fields in the table calculation are in the field wells of the visual.  

```
percentileDiscOver
(
  sum({Billed Amount}), 
  98,
  [{Customer Region}]
)
```
The following screenshot shows how these two examples look on a chart.   

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/percentilOver-50-98.png)
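The continuous/discrete distinction between the two percentile functions can be sketched in a few lines of Python. These helpers are one common reading of the definitions, not the Quick Sight implementation:

```python
import math

def percentile_cont(values, p):
    """Linear interpolation between the two closest ranks."""
    xs = sorted(values)
    idx = (p / 100) * (len(xs) - 1)
    lo, hi = math.floor(idx), math.ceil(idx)
    return xs[lo] + (xs[hi] - xs[lo]) * (idx - lo)

def percentile_disc(values, p):
    """Smallest actual value whose cumulative share reaches p."""
    xs = sorted(values)
    k = max(math.ceil((p / 100) * len(xs)), 1) - 1
    return xs[k]

data = [1, 2, 3, 4]
print(percentile_cont(data, 50))  # 2.5 -- interpolated, not present in the data
print(percentile_disc(data, 50))  # 2   -- an actual data point
```

This is why the `pre_filter` and `pre_agg` columns in the two example tables above differ between the functions: the continuous variety can return values that never appear in the data.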


# percentOfTotal


The `percentOfTotal` function calculates the percentage a measure contributes to the total, based on the dimensions specified. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
percentOfTotal
(
     measure 
     ,[ partition_field, ... ] 
)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the percent of total for. Currently, the `distinct count` aggregation is not supported for `percentOfTotal`.

 *partition field*  
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example creates a calculation for the percent of total `Sales` contributed by each `State`.

```
percentOfTotal
(
     sum(Sales), 
     [State]
)
```

The following example calculates the percentage that a specific `Billed Amount` contributes to the total `Billed Amount`, partitioned by `Service Line`. The fields in the table calculation are in the field wells of the visual.

```
percentOfTotal
(
     sum( {Billed Amount} ), 
     [{Service Line}]
)
```

The following screenshot shows the results of the example. The red highlights show that the partition field with the value "`Billing`" has three entries, one for each region. The total billed amount for this service line is divided into three percentages, which total 100 percent. Percentages are rounded and might not always add up to exactly 100 percent.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/percentOfTotal.png)
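The behavior above, where each partition's rows sum to 100 percent, can be sketched as each row's value divided by its partition total. The data and helper are illustrative, not the Quick Sight implementation:

```python
from collections import defaultdict

# Hypothetical (service_line, region, billed) rows.
rows = [("Billing", "APAC", 20), ("Billing", "EMEA", 30), ("Billing", "AMER", 50)]

def percent_of_total(rows, partition, value):
    """Each row's share of its partition's total, as a percentage."""
    totals = defaultdict(float)
    for r in rows:
        totals[partition(r)] += value(r)
    return [value(r) / totals[partition(r)] * 100 for r in rows]

print(percent_of_total(rows, partition=lambda r: r[0], value=lambda r: r[2]))
# [20.0, 30.0, 50.0] -- the three regional shares of Billing sum to 100
```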


# periodOverPeriodDifference


The `periodOverPeriodDifference` function calculates the difference of a measure over two different time periods as specified by period granularity and offset. Unlike a difference calculation, this function uses a date-based offset instead of a fixed-size offset. This ensures that only the correct dates are compared, even if data points are missing in the dataset.

## Syntax


```
periodOverPeriodDifference(
	measure, 
	date, 
	period, 
	offset)
```

## Arguments


 *measure*   
An aggregated measure that you want to perform the periodOverPeriod calculation on.

 *date*   
The date dimension over which you're computing period-over-period calculations.

 *period*   
(Optional) The time period over which you're performing the computation. Granularity of `YEAR` means `YearToDate` computation, `Quarter` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
The default value is the visual's date dimension granularity.

 *offset*   
(Optional) The offset can be a positive or negative integer representing the prior time period (specified by period) that you want to compare against. For instance, period of a quarter with offset 1 means comparing against the previous quarter.  
The default value is 1.

## Example


The following example uses a calculated field `PeriodOverPeriod` to display the difference in sales amount compared with the previous period.

```
periodOverPeriodDifference(sum(Sales), {Order Date})
```

![\[This is an image of the return of the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/MonthOverMonthDifference.png)


The following example uses a calculated field `PeriodOverPeriod` to display the sales amount difference from two months prior. This example compares sales of `Mar2020` with `Jan2020`.

```
periodOverPeriodDifference(sum(Sales), {Order Date}, MONTH, 2)
```

![\[This is an image of the return of the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/MonthOverMonthDifference2.png)
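The date-based offset is what distinguishes this function from a plain row offset: the comparison period is found on the calendar, so a missing period yields no comparison rather than silently comparing against the wrong row. A Python sketch with hypothetical monthly totals (not the Quick Sight implementation):

```python
# Hypothetical monthly sales totals; note Feb 2020 is absent from the data.
monthly_sales = {(2020, 1): 100, (2020, 3): 180, (2020, 4): 150}

def period_over_period_difference(sales, year, month, offset=1):
    """Difference against the month `offset` back on the calendar,
    or None when either period has no data."""
    prev = year * 12 + (month - 1) - offset
    prev_total = sales.get((prev // 12, prev % 12 + 1))
    cur_total = sales.get((year, month))
    if cur_total is None or prev_total is None:
        return None
    return cur_total - prev_total

print(period_over_period_difference(monthly_sales, 2020, 3, offset=2))  # 80 (Mar vs Jan)
print(period_over_period_difference(monthly_sales, 2020, 3, offset=1))  # None (Feb missing)
```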


# periodOverPeriodLastValue


The `periodOverPeriodLastValue` function calculates the last (previous) value of a measure from the previous time period as specified by the period granularity and offset. This function uses a date-based offset instead of a fixed-size offset. This ensures that only the correct dates are compared, even if data points are missing in the dataset.

## Syntax


```
periodOverPeriodLastValue(
	measure, 
	date, 
	period, 
	offset)
```

## Arguments


 *measure*   
An aggregated measure that you want to get the previous value for.

 *date*   
The date dimension over which you're computing periodOverPeriod calculations.

 *period*   
(Optional) The time period over which you're performing the computation. Granularity of `YEAR` means `YearToDate` computation, `Quarter` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
This argument defaults to the granularity of the visual aggregation.

 *offset*   
(Optional) The offset can be a positive or negative integer representing the prior time period (specified by period) that you want to compare against. For instance, a period of a quarter with offset 1 means comparing against the previous quarter.  
The default value is 1.

## Example


The following example returns the previous period's sales value, using the visual's date dimension granularity and the default offset of 1.

```
periodOverPeriodLastValue(sum(Sales), {Order Date})
```

The following example returns the previous month's sales value, using a fixed granularity of `MONTH` and a fixed offset of 1.

```
periodOverPeriodLastValue(sum(Sales), {Order Date},MONTH, 1)
```

![\[This is an image of the return of the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/MonthOverMonthLastValue.png)


# periodOverPeriodPercentDifference


The `periodOverPeriodPercentDifference` function calculates the percent difference of a measure over two different time periods as specified by the period granularity and offset. Unlike `percentDifference`, this function uses a date-based offset instead of a fixed-size offset. This ensures that only the correct dates are compared, even if data points are missing in the dataset.

## Syntax


```
periodOverPeriodPercentDifference(
	measure, 
	date, 
	period, 
	offset)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the difference for.

 *date*   
The date dimension over which you're computing periodOverPeriod calculations.

 *period*   
(Optional) The time period over which you're performing the computation. Granularity of `YEAR` means `YearToDate` computation, `Quarter` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
This argument defaults to the granularity of the visual aggregation.

 *offset*   
(Optional) The offset can be a positive or negative integer representing the prior time period (specified by period) that you want to compare against. For instance, a period of a quarter with offset 1 means comparing against the previous quarter.  
The default value is 1.

## Example


The following example calculates the month over month percent difference in sales with the visual dimension granularity and default offset of 1.

```
periodOverPeriodPercentDifference(sum(Sales),{Order Date})
```

The following example calculates the month over month percent difference in sales with a fixed granularity of `MONTH` and fixed offset of 1.

```
periodOverPeriodPercentDifference(sum(Sales), {Order Date}, MONTH, 1)
```

![\[This is an image of the return of the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/MonthOverMonthPercentDifference.png)


# periodToDateAvgOverTime


The `periodToDateAvgOverTime` function calculates the average of a measure for a given time granularity (for instance, a quarter) up to a point in time.

## Syntax


```
periodToDateAvgOverTime(
	measure, 
	dateTime,
	period)
```

## Arguments


 *measure*   
An aggregated measure that you want to do the calculation for.

 *dateTime*   
The date dimension over which you're computing PeriodOverTime calculations.

 *period*   
(Optional) The time period over which you're performing the computation. Granularity of `YEAR` means `YearToDate` computation, `Quarter` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
The default value is the visual's date dimension granularity.

## Example


The following function calculates the average fare amount month over month.

```
periodToDateAvgOverTime(sum({fare_amount}), pickupDatetime, MONTH)
```

![\[This is an image of the results of the example calculation with illustrations.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDAvgOverTimeResults.png)
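Period-to-date means the window restarts at each period boundary: for a given point in time, only values from the start of that period up to that point contribute. A sketch with made-up daily fares (not the Quick Sight implementation):

```python
# Hypothetical (month, day, fare) rows.
fares = [(1, 5, 10.0), (1, 20, 20.0), (2, 3, 9.0), (2, 10, 15.0), (2, 28, 30.0)]

def period_to_date_avg(rows, month, day):
    """Average of fares from the start of `month` through `day`."""
    window = [fare for m, d, fare in rows if m == month and d <= day]
    return sum(window) / len(window) if window else None

print(period_to_date_avg(fares, 2, 10))  # 12.0 -- averages only the 9.0 and 15.0 fares
```

The same restart-at-the-boundary logic applies to the count, max, min, and sum variants that follow; only the aggregation over the window changes.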


# periodToDateCountOverTime


The `periodToDateCountOverTime` function calculates the count of a dimension or measure for a given time granularity (for instance, a quarter) up to a point in time.

## Syntax


```
periodToDateCountOverTime(
	measure, 
	dateTime, 
	period)
```

## Arguments


 *measure*   
An aggregated measure that you want to do the calculation for.

 *dateTime*   
The date dimension over which you're computing PeriodOverTime calculations.

 *period*   
(Optional) The time period over which you're performing the computation. Granularity of `YEAR` means `YearToDate` computation, `Quarter` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
The default value is the visual's date dimension granularity.

## Example


The following example calculates the count of vendors month over month.

```
periodToDateCountOverTime(count(vendorid), pickupDatetime, MONTH)
```

![\[This is an image of the results of the example calculation with illustrations.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDCountOverTimeResults.png)


# periodToDateMaxOverTime


The `periodToDateMaxOverTime` function calculates the maximum of a measure for a given time granularity (for instance, a quarter) up to a point in time.

## Syntax


```
periodToDateMaxOverTime(
	measure, 
	dateTime, 
	period)
```

## Arguments


 *measure*   
An aggregated measure that you want to do the calculation for.

 *dateTime*   
The date dimension over which you're computing PeriodOverTime calculations.

 *period*   
(Optional) The time period over which you're performing the computation. Granularity of `YEAR` means `YearToDate` computation, `Quarter` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
The default value is the visual's date dimension granularity.

## Example


The following example calculates the maximum fare amount month over month.

```
periodToDateMaxOverTime(max({fare_amount}), pickupDatetime, MONTH)
```

![\[This is an image of the results of the example calculation with illustrations.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDMaxOverTimeResults.png)


# periodToDateMinOverTime


The `periodToDateMinOverTime` function calculates the minimum of a measure for a given time granularity (for instance, a quarter) up to a point in time.

## Syntax


```
periodToDateMinOverTime(
	measure, 
	dateTime, 
	period)
```

## Arguments


 *measure*   
An aggregated measure that you want to do the calculation for.

 *dateTime*   
The date dimension over which you're computing PeriodOverTime calculations.

 *period*   
(Optional) The time period over which you're performing the computation. Granularity of `YEAR` means `YearToDate` computation, `Quarter` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
The default value is the visual's date dimension granularity.

## Example


The following example calculates the minimum fare amount month over month.

```
periodToDateMinOverTime(min({fare_amount}), pickupDatetime, MONTH)
```

![\[This is an image of the results of the example calculation with illustrations.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDMinOverTimeResults.png)


# periodToDateSumOverTime


The `periodToDateSumOverTime` function calculates the sum of a measure for a given time granularity (for instance, a quarter) up to a point in time.

## Syntax


```
periodToDateSumOverTime(
	measure, 
	dateTime, 
	period)
```

## Arguments


 *measure*   
An aggregated measure that you want to do the calculation for.

 *dateTime*   
The date dimension over which you're computing PeriodOverTime calculations.

 *period*   
(Optional) The time period over which you're performing the calculation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
The default value is the visual's date dimension granularity.

## Example


The following function returns the total fare amount month over month.

```
periodToDateSumOverTime(sum({fare_amount}), pickupDatetime, MONTH)
```

![\[This is an image of the results of the example calculation with illustrations.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDSumOverTime-example-results.png)
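
To make the period-to-date behavior concrete, here is a hedged Python sketch (not Quick Sight syntax; the dates and fares are made up) that restarts a running sum at each month boundary, the way `periodToDateSumOverTime` does with a `MONTH` period.

```python
# Hedged sketch: emulating periodToDateSumOverTime with period=MONTH.
# The dates and fare amounts are hypothetical sample data.
from datetime import date
from itertools import accumulate
from collections import defaultdict

rows = [
    (date(2023, 1, 30), 10.0),
    (date(2023, 1, 31), 12.0),
    (date(2023, 2, 1), 8.0),
    (date(2023, 2, 2), 5.0),
]

# Group rows by (year, month): the MONTH period resets the running sum.
by_period = defaultdict(list)
for d, fare in sorted(rows):
    by_period[(d.year, d.month)].append(fare)

period_to_date = {
    period: list(accumulate(vals)) for period, vals in by_period.items()
}
print(period_to_date)
# {(2023, 1): [10.0, 22.0], (2023, 2): [8.0, 13.0]}
```

Within January the sums accumulate (10.0, then 22.0); when February starts, the total resets.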


# stdevOver


The `stdevOver` function calculates the standard deviation of the specified measure, partitioned by the chosen attribute or attributes, based on a sample. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
stdevOver
(
      measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

## Arguments


*measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example calculates the standard deviation of `sum(Sales)`, partitioned by `City` and `State`, based on a sample.

```
stdevOver
(
     sum(Sales), 
     [City, State]
)
```

The following example calculates the standard deviation of `Billed Amount` over `Customer Region`, based on a sample. The fields in the table calculation are in the field wells of the visual.

```
stdevOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```

# stdevpOver


The `stdevpOver` function calculates the standard deviation of the specified measure, partitioned by the chosen attribute or attributes, based on a biased population.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
stdevpOver
(
     measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

## Arguments


*measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example calculates the standard deviation of `sum(Sales)`, partitioned by `City` and `State`, based on a biased population.

```
stdevpOver
(
     sum(Sales), 
     [City, State]
)
```

The following example calculates the standard deviation of `Billed Amount` over `Customer Region`, based on a biased population. The fields in the table calculation are in the field wells of the visual.

```
stdevpOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```
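
The sample/population distinction between `stdevOver` and `stdevpOver` comes down to the divisor (*n* − 1 versus *n*). A hedged Python sketch, with made-up regions and amounts, grouped the way a partition field would group rows:

```python
# Hedged sketch: sample stdev (stdevOver) vs. population stdev (stdevpOver),
# partitioned by a dimension. Regions and amounts are illustrative only.
from statistics import stdev, pstdev
from collections import defaultdict

rows = [("West", 10.0), ("West", 14.0), ("East", 7.0), ("East", 9.0)]

groups = defaultdict(list)
for region, amount in rows:
    groups[region].append(amount)

for region, amounts in groups.items():
    # stdev divides by n-1 (sample); pstdev divides by n (population)
    print(region, stdev(amounts), pstdev(amounts))
# West: sample stdev ~2.828, population stdev 2.0
# East: sample stdev ~1.414, population stdev 1.0
```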

# varOver


The `varOver` function calculates the variance of the specified measure, partitioned by the chosen attribute or attributes, based on a sample. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
varOver
(
     measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

## Arguments


*measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example calculates the variance of `sum(Sales)`, partitioned by `City` and `State`, based on a sample.

```
varOver
(
     sum(Sales), 
     [City, State]
)
```

The following example calculates the variance of `Billed Amount` over `Customer Region`, based on a sample. The fields in the table calculation are in the field wells of the visual.

```
varOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```

# varpOver


The `varpOver` function calculates the variance of the specified measure, partitioned by the chosen attribute or attributes, based on a biased population. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
varpOver
(
     measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

## Arguments


*measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example calculates the variance of `sum(Sales)`, partitioned by `City` and `State`, based on a biased population.

```
varpOver
(
     sum(Sales), 
     [City, State]
)
```

The following example calculates the variance of `Billed Amount` over `Customer Region`, based on a biased population. The fields in the table calculation are in the field wells of the visual.

```
varpOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```

# sumOver


 The `sumOver` function calculates the sum of a measure partitioned by a list of dimensions. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
sumOver
(
     measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

## Arguments


*measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example calculates the sum of `sum(Sales)`, partitioned by `City` and `State`.

```
sumOver
(
     sum(Sales), 
     [City, State]
)
```

The following example sums `Billed Amount` over `Customer Region`. The fields in the table calculation are in the field wells of the visual.

```
sumOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```

The following screenshot shows the results of the example. Even with `Customer Segment` added to the visual, the calculated field displays the total billed amount summed for each `Customer Region`.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/sumOver.png)
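
Conceptually, `sumOver` behaves like SQL's `SUM(...) OVER (PARTITION BY ...)`: every row keeps its own detail values but also carries the partition total. A hedged Python sketch with made-up billing rows:

```python
# Hedged sketch: each row is annotated with the total for its partition
# (here, region), as sumOver does. Data is illustrative only.
from collections import defaultdict

rows = [("South", "SMB", 100.0), ("South", "Enterprise", 50.0), ("North", "SMB", 70.0)]

# First pass: total billed per region (the partition field).
totals = defaultdict(float)
for region, _segment, billed in rows:
    totals[region] += billed

# Second pass: attach the partition total to every detail row.
result = [(region, segment, billed, totals[region]) for region, segment, billed in rows]
print(result)
# Both South rows carry 150.0; the North row carries 70.0.
```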


# denseRank


The `denseRank` function calculates the rank of a measure or a dimension in comparison to the specified partitions. Duplicate values share the same rank, and the next distinct value receives the next consecutive rank, so the ranking has no gaps ("without holes"). 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
denseRank
(
  [ sort_order_field ASC_or_DESC, ... ] 
  ,[ partition_field, ... ] 
)
```

## Arguments


 *sort order field*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can either specify ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example densely ranks `max(Sales)`, based on a descending sort order, by `State` and `City`. Any cities with the same `max(Sales)` are assigned the same rank, and the next city is ranked consecutively after them. For example, if three cities share the same ranking, the fourth city is ranked as second. 

```
denseRank
(
  [max(Sales) DESC], 
  [State, City]
)
```

The following example densely ranks `max(Sales)`, based on a descending sort order, by `State`. Any states with the same `max(Sales)` are assigned the same rank, and the next is ranked consecutively after them. For example, if three states share the same ranking, the fourth state is ranked as second. 

```
denseRank
(
  [max(Sales) DESC], 
  [State]
)
```

# rank


The `rank` function calculates the rank of a measure or a dimension in comparison to the specified partitions. Duplicate values share the same rank, but the next distinct value's rank skips ahead by the number of tied items, so the ranking has gaps ("with holes"). 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
rank
(
  [ sort_order_field ASC_or_DESC, ... ]
  ,[ partition_field, ... ] 
)
```

## Arguments


 *sort order field*   
One or more aggregated measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example ranks `max(Sales)`, based on a descending sort order, by `State` and `City`, within the `State` of **WA**. Any cities with the same `max(Sales)` are assigned the same rank, but the next rank includes the count of all previously existing ranks. For example, if three cities share the same ranking, the fourth city is ranked as fourth. 

```
rank
(
  [max(Sales) DESC], 
  [State, City]
)
```

The following example ranks `max(Sales)`, based on an ascending sort order, by `State`. Any states with the same `max(Sales)` are assigned the same rank, but the next rank includes the count of all previously existing ranks. For example, if three states share the same ranking, the fourth state is ranked as fourth. 

```
rank
(
  [max(Sales) ASC], 
  [State]
)
```

The following example ranks `Customer Region` by total `Billed Amount`. The fields in the table calculation are in the field wells of the visual.

```
rank(
  [sum({Billed Amount}) DESC]
)
```

The following screenshot shows the results of the example, along with the total `Billed Amount` so you can see how each region ranks.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/rankCalc.png)
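
The difference between `rank` and `denseRank` is easiest to see side by side. This hedged Python sketch (made-up sales figures, descending sort) reproduces the "with holes" versus "without holes" behavior described above:

```python
# Hedged sketch: rank ("with holes") vs. denseRank ("without holes")
# over a descending sort. Sales figures are made up.
def rank_with_holes(values):
    ordered = sorted(values, reverse=True)
    # Ties share the position of the first tied item; later ranks skip ahead.
    return [ordered.index(v) + 1 for v in values]

def dense_rank(values):
    distinct = sorted(set(values), reverse=True)
    # Ties share a rank, and the next distinct value is ranked consecutively.
    return [distinct.index(v) + 1 for v in values]

sales = [300, 300, 300, 200]
print(rank_with_holes(sales))  # [1, 1, 1, 4] -- the fourth item is ranked fourth
print(dense_rank(sales))       # [1, 1, 1, 2] -- the fourth item is ranked second
```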


# percentileRank


The `percentileRank` function calculates the percentile rank of a measure or a dimension in comparison to the specified partitions. The percentile rank value (*x*) indicates that the current item is above *x*% of the values in the specified partition. The percentile rank value ranges from 0 (inclusive) to 100 (exclusive). 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
percentileRank
(
      [ sort_order_field ASC_or_DESC, ... ] 
     ,[ {partition_field}, ... ]
)
```

## Arguments


 *sort order field*   
One or more aggregated measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example does a percentile ranking of `max(Sales)` in descending order, by `State`. 

```
percentileRank
(
     [max(Sales) DESC], 
     [State]
)
```

The following example does a percentile ranking of `Customer Region` by total `Billed Amount`. The fields in the table calculation are in the field wells of the visual.

```
percentileRank(
     [sum({Billed Amount}) DESC],
     [{Customer Region}]
)
```

The following screenshot shows the results of the example, along with the total `Billed Amount` so you can see how each region compares.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/percentileRank.png)
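
As a sanity check on the definition above, here is a hedged Python sketch that computes the percentage of values strictly below each item. Quick Sight's exact tie handling may differ, so treat this as an approximation of the semantics, with made-up billed amounts:

```python
# Hedged sketch: percentile rank as "percentage of values strictly below
# the current one", ranging from 0 (inclusive) to 100 (exclusive).
def percentile_rank(values):
    n = len(values)
    return [100.0 * sum(1 for other in values if other < v) / n for v in values]

billed = [40.0, 10.0, 20.0, 30.0]
print(percentile_rank(billed))  # [75.0, 0.0, 25.0, 50.0]
# 40.0 is above 75% of the values; 10.0 is above none of them.
```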


# runningAvg


The `runningAvg` function calculates a running average for a measure based on the specified dimensions and sort orders. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions. 

```
runningAvg
(
  measure 
  ,[ sortorder_field ASC_or_DESC, ... ]  
  ,[ partition_field, ... ] 
)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the running average for. 

 *sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*  
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates a running average of `sum(Sales)`, sorted by `Sales`, partitioned by `City` and `State`.

```
runningAvg
(
  sum(Sales), 
  [Sales ASC], 
  [City, State]
)
```

The following example calculates a running average of `Billed Amount`, sorted by month (`[truncDate("MM",Date) ASC]`). The fields in the table calculation are in the field wells of the visual.

```
runningAvg
(
  sum({Billed Amount}),
  [truncDate("MM",Date) ASC]
)
```

# runningCount


The `runningCount` function calculates a running count for a measure or dimension, based on the specified dimensions and sort orders. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions. 

```
runningCount
(
  measure_or_dimension 
  ,[ sortorder_field ASC_or_DESC, ... ]  
  ,[ partition_field, ... ] 
)
```

## Arguments


 *measure or dimension*   
An aggregated measure or dimension that you want to see the running count for. 

 *sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*  
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates a running count of `sum(Sales)`, sorted by `Sales`, partitioned by `City` and `State`.

```
runningCount
(
  sum(Sales), 
  [Sales ASC], 
  [City, State]
)
```

The following example calculates a running count of `Billed Amount`, sorted by month (`[truncDate("MM",Date) ASC]`). The fields in the table calculation are in the field wells of the visual.

```
runningCount
(
  sum({Billed Amount}),
  [truncDate("MM",Date) ASC]
)
```

# runningMax


The `runningMax` function calculates a running maximum for a measure based on the specified dimensions and sort orders. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions. 

```
runningMax
(
  measure 
  ,[ sortorder_field ASC_or_DESC, ... ]  
  ,[ partition_field, ... ] 
)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the running maximum for. 

 *sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*  
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates a running maximum of `sum(Sales)`, sorted by `Sales`, partitioned by `City` and `State`.

```
runningMax
(
  sum(Sales), 
  [Sales ASC], 
  [City, State]
)
```

The following example calculates a running maximum of `Billed Amount`, sorted by month (`[truncDate("MM",Date) ASC]`). The fields in the table calculation are in the field wells of the visual.

```
runningMax
(
  sum({Billed Amount}),
  [truncDate("MM",Date) ASC]
)
```

# runningMin


The `runningMin` function calculates a running minimum for a measure based on the specified dimensions and sort orders. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions. 

```
runningMin
(
  measure 
  ,[ sortorder_field ASC_or_DESC, ... ]  
  ,[ partition_field, ... ] 
)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the running minimum for. 

 *sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*  
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates a running minimum of `sum(Sales)`, sorted by `Sales`, partitioned by `City` and `State`.

```
runningMin
(
  sum(Sales), 
  [Sales ASC], 
  [City, State]
)
```

The following example calculates a running minimum of `Billed Amount`, sorted by month (`[truncDate("MM",Date) ASC]`). The fields in the table calculation are in the field wells of the visual.

```
runningMin
(
  sum({Billed Amount}),
  [truncDate("MM",Date) ASC]
)
```

# runningSum


The `runningSum` function calculates a running sum for a measure based on the specified dimensions and sort orders. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions. 

```
runningSum
(
  measure 
  ,[ sortorder_field ASC_or_DESC, ... ]  
  ,[ partition_field, ... ] 
)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the running sum for. 

 *sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*  
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates a running sum of `sum(Sales)`, sorted by `Sales`, partitioned by `City` and `State`.

```
runningSum
(
  sum(Sales), 
  [Sales ASC], 
  [City, State]
)
```

The following example calculates a running sum of `Billed Amount`, sorted by month (`[truncDate("MM",Date) ASC]`). The fields in the table calculation are in the field wells of the visual.

```
runningSum
(
  sum({Billed Amount}),
  [truncDate("MM",Date) ASC]
)
```

The following screenshot shows the results of the example. The red labels show how each amount is added to the previous total (`a + b = c`), producing a new running total. 

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/runningSum.png)
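
The running functions all follow the same pattern: sort within each partition, then accumulate. A hedged Python sketch of `runningSum` with made-up state sales:

```python
# Hedged sketch: a running sum, sorted ascending and restarted per
# partition, mirroring runningSum(measure, [sort ASC], [partition]).
from itertools import accumulate
from collections import defaultdict

rows = [("WA", 5.0), ("WA", 3.0), ("OR", 4.0), ("OR", 1.0)]  # (state, sales)

partitions = defaultdict(list)
for state, sales in rows:
    partitions[state].append(sales)

running = {
    state: list(accumulate(sorted(values)))  # sort ascending, then accumulate
    for state, values in partitions.items()
}
print(running)
# {'WA': [3.0, 8.0], 'OR': [1.0, 5.0]}
```

Swapping `accumulate`'s default addition for `max`, `min`, a count, or a mean reproduces the other running functions.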


# firstValue


The `firstValue` function calculates the first value of the aggregated measure or dimension partitioned and sorted by specified attributes.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
firstValue
	(
	     aggregated measure or dimension, 
	     [ sort_attribute ASC_or_DESC, ... ],
	     [ partition_by_attribute, ... ] 
	)
```

## Arguments


*aggregated measure or dimension*   
An aggregated measure or dimension that you want to see the first value for.

*sort attribute*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can either specify ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

*partition by attribute*  
(Optional) One or more measure or dimensions that you want to partition by, separated by commas.  
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets). 

## Example


The following example calculates the first `Destination Airport`, sorted by `Flight Date` in ascending order and partitioned by `Origin Airport` and `Flight Date`.

```
firstValue(
    {Destination Airport},
    [{Flight Date} ASC],
    [
        {Origin Airport},
        {Flight Date}
    ]
)
```

# lastValue


The `lastValue` function calculates the last value of the aggregated measure or dimension partitioned and sorted by specified attributes.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
lastValue
	(
	     aggregated measure or dimension,
	     [ sort_attribute ASC_or_DESC, ... ],
	     [ partition_by_attribute, ... ] 
	)
```

## Arguments


*aggregated measure or dimension*   
An aggregated measure or dimension that you want to see the last value for.

*sort attribute*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can either specify ascending (`ASC`) or descending (`DESC`) sort order.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

*partition by attribute*  
(Optional) One or more measures or dimensions that you want to partition by, separated by commas.  
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets). 

## Example


The following example calculates the last value for `Destination Airport`, sorted by `Flight Date` in ascending order and partitioned by `Origin Airport` and the day of `Flight Date`.

```
lastValue(
    {Destination Airport},
    [{Flight Date} ASC],
    [
        {Origin Airport},
        truncDate('DAY', {Flight Date})
    ]
)
```
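
`firstValue` and `lastValue` reduce to sorting within each partition and taking the endpoints. A hedged Python sketch with made-up flights (string dates in `YYYY-MM-DD` form sort correctly):

```python
# Hedged sketch: first and last value of a field after sorting within
# each partition. Airports and dates are illustrative only.
from collections import defaultdict

# (origin airport, flight date, destination airport)
rows = [
    ("SEA", "2023-05-02", "LAX"),
    ("SEA", "2023-05-01", "JFK"),
    ("PDX", "2023-05-03", "DEN"),
]

partitions = defaultdict(list)
for origin, flight_date, dest in rows:
    partitions[origin].append((flight_date, dest))

first_last = {
    origin: (min(flights)[1], max(flights)[1])  # sorted by date ASC
    for origin, flights in partitions.items()
}
print(first_last)
# {'SEA': ('JFK', 'LAX'), 'PDX': ('DEN', 'DEN')}
```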

# windowAvg


The `windowAvg` function calculates the average of the aggregated measure in a custom window that is partitioned and sorted by specified attributes. Usually, you use custom window functions on a time series, where your visual shows a metric and a date field. For example, you can use `windowAvg` to calculate a moving average, which is often used to smooth out the noise in a line chart.

Window functions aren't supported for MySQL versions earlier than 8 and MariaDB versions earlier than 10.2.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
windowAvg
(
     measure 
     ,[ sort_order_field ASC_or_DESC, ... ]
     ,start_index
     ,end_index
     ,[ partition_field, ... ] 
)
```

## Arguments


*measure*   
The aggregated metric that you want to get the average for, for example `sum({Revenue})`.

*sort attribute*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can either specify ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

*start index*   
The start index is a positive integer, indicating *n* rows above the current row. The start index counts the available data points above the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

*end index*   
The end index is a positive integer, indicating *n* rows below the current row. The end index counts the available data points below the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the moving average of `sum(Revenue)`, sorted by `SaleDate`. The calculation includes the three rows above and the two rows below the current row.

```
windowAvg(
    sum(Revenue),
    [SaleDate ASC],
    3,
    2
)
```

The following screenshot shows the results of this moving average example. The sum(Revenue) field is added to the chart to show the difference between the revenue and the moving average of revenue.

![Line chart comparing revenue with the moving average of revenue](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/windowAvg.png)
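The row-window semantics described above can be sketched outside of Quick Sight. The following plain-Python snippet is an illustration only, not Quick Sight code, and the `revenue` values are invented. It averages each row together with up to three rows above and two rows below it, clamping at the edges the same way `windowAvg` counts only the available data points:

```python
def window_avg(values, above, below):
    """Row-based moving average: for each row, average the values from
    `above` rows before it through `below` rows after it. The slice is
    clamped at the edges, so only available data points are averaged."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - above): i + below + 1]
        out.append(sum(window) / len(window))
    return out

revenue = [100, 200, 150, 300, 250, 400]
print(window_avg(revenue, above=3, below=2))
```

Note that the window always includes the current row itself, in addition to the rows above and below it.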


# windowCount


The `windowCount` function calculates the count of the aggregated measure or dimension in a custom window that is partitioned and sorted by specified attributes. Usually, you use custom window functions on a time series, where your visual shows a metric and a date field.

Window functions aren't supported for MySQL versions earlier than 8 and MariaDB versions earlier than 10.2.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
windowCount(
    measure_or_dimension,
    [sort_order_field ASC/DESC, ...],
    start_index,
    end_index,
    [partition_field, ...]
)
```

## Arguments


*measure or dimension*   
The aggregated measure or dimension that you want to count, for example `sum({Revenue})`.

*sort attribute*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can either specify ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

*start index*   
The start index is a positive integer, indicating *n* rows above the current row. The start index counts the available data points above the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

*end index*   
The end index is a nonnegative integer, indicating *n* rows below the current row. The end index counts the available data points below the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the moving count of `sum(Revenue)`, sorted by `SaleDate`. The calculation includes the three rows above and the two rows below the current row.

```
windowCount(
    sum(Revenue),
    [SaleDate ASC],
    3,
    2
)
```
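The start and end indexes count rows, not calendar periods, which matters when a time series has gaps. The following plain-Python sketch (an illustration only, not Quick Sight code, with made-up month values) shows a window of "two rows above" reaching back further than two calendar months when months are missing:

```python
# Months with March and May missing: "2 rows above" counts data points,
# not calendar months, so the window reaches back further than expected.
months = ["2023-01", "2023-02", "2023-04", "2023-06"]
i = months.index("2023-06")
window = months[max(0, i - 2): i + 1]  # 2 rows above + the current row
print(window)  # spans February through June, not April through June
```

This is why the documentation advises adjusting the indexes when your data is sparse.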

# windowMax


The `windowMax` function calculates the maximum of the aggregated measure in a custom window that is partitioned and sorted by specified attributes. Usually, you use custom window functions on a time series, where your visual shows a metric and a date field. You can use `windowMax` to help you identify the maximum of the metric over a period of time.

Window functions aren't supported for MySQL versions earlier than 8 and MariaDB versions earlier than 10.2.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
windowMax(
    measure,
    [sort_order_field ASC/DESC, ...],
    start_index,
    end_index,
    [partition_field, ...]
)
```

## Arguments


*measure*   
The aggregated metric that you want to get the maximum for, for example `sum({Revenue})`.

*sort attribute*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can either specify ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

*start index*   
The start index is a positive integer, indicating *n* rows above the current row. The start index counts the available data points above the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

*end index*   
The end index is a nonnegative integer, indicating *n* rows below the current row. The end index counts the available data points below the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the trailing 12-month maximum of `sum(Revenue)`, sorted by `SaleDate`. The calculation includes the 12 rows above the current row and no rows below it.

```
windowMax(
    sum(Revenue),
    [SaleDate ASC],
    12,
    0
)
```

The following screenshot shows the results of this trailing 12-month example. The sum(Revenue) field is added to the chart to show the difference between the revenue and the trailing 12-month maximum revenue.

![Chart comparing revenue with the trailing 12-month maximum revenue](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/windowMax.png)


# windowMin


The `windowMin` function calculates the minimum of the aggregated measure in a custom window that is partitioned and sorted by specified attributes. Usually, you use custom window functions on a time series, where your visual shows a metric and a date field. You can use `windowMin` to help you identify the minimum of the metric over a period of time.

Window functions aren't supported for MySQL versions earlier than 8 and MariaDB versions earlier than 10.2.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
windowMin(
    measure,
    [sort_order_field ASC/DESC, ...],
    start_index,
    end_index,
    [partition_field, ...]
)
```

## Arguments


*measure*   
The aggregated metric that you want to get the minimum for, for example `sum({Revenue})`.

*sort attribute*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can either specify ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

*start index*   
The start index is a positive integer, indicating *n* rows above the current row. The start index counts the available data points above the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

*end index*   
The end index is a nonnegative integer, indicating *n* rows below the current row. The end index counts the available data points below the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the trailing 12-month minimum of `sum(Revenue)`, sorted by `SaleDate`. The calculation includes the 12 rows above the current row and no rows below it.

```
windowMin(
    sum(Revenue),
    [SaleDate ASC],
    12,
    0
)
```

The following screenshot shows the results of this trailing 12-month example. The sum(Revenue) field is added to the chart to show the difference between the revenue and the trailing 12-month minimum revenue.

![Chart comparing revenue with the trailing 12-month minimum revenue](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/windowMin.png)


# windowSum


The `windowSum` function calculates the sum of the aggregated measure in a custom window that is partitioned and sorted by specified attributes. Usually, you use custom window functions on a time series, where your visual shows a metric and a date field. 

Window functions aren't supported for MySQL versions earlier than 8 and MariaDB versions earlier than 10.2.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
windowSum(
    measure,
    [sort_order_field ASC/DESC, ...],
    start_index,
    end_index,
    [partition_field, ...]
)
```

## Arguments


*measure*   
The aggregated metric that you want to get the sum for, for example `sum({Revenue})`.   
For the MySQL, MariaDB, and Amazon Aurora MySQL-compatible engines, the lookup index is limited to 1. Window functions aren't supported for MySQL versions earlier than 8 and MariaDB versions earlier than 10.2.

*sort attribute*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can either specify ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

*start index*   
The start index is a positive integer, indicating *n* rows above the current row. The start index counts the available data points above the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

*end index*   
The end index is a nonnegative integer, indicating *n* rows below the current row. The end index counts the available data points below the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the moving sum of `sum(Revenue)`, sorted by `SaleDate`. The calculation includes the two rows above and the one row below the current row.

```
windowSum(
    sum(Revenue),
    [SaleDate ASC],
    2,
    1
)
```

The following example shows a trailing 12-month sum. 

```
windowSum(sum(Revenue),[SaleDate ASC],12,0)
```

The following screenshot shows the results of this trailing 12-month sum example. The `sum(Revenue)` field is added to the chart to show the difference between the revenue and the trailing 12-month sum of revenue.

![Chart comparing revenue with the trailing 12-month sum of revenue](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/windowSum.png)
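The trailing 12-month pattern above can be sketched in plain Python. This is an illustration of the row-window semantics only, not Quick Sight code, and the revenue numbers are invented:

```python
def trailing_sum(values, rows_above):
    """Row-based trailing sum: the current row plus up to `rows_above`
    rows before it, mirroring windowSum(sum(Revenue), [SaleDate ASC], 12, 0)."""
    return [sum(values[max(0, i - rows_above): i + 1])
            for i in range(len(values))]

monthly_revenue = [10, 20, 30, 40]
print(trailing_sum(monthly_revenue, rows_above=12))  # -> [10, 30, 60, 100]
```

Because the window is clamped at the start of the series, the first eleven rows of a real 12-month calculation sum fewer than 13 data points.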


# Joining data


You can use the join interface in Amazon Quick Sight to join objects from one or more data sources. By using Amazon Quick Sight to join the data, you can merge disparate data without duplicating the data from different sources. 

## Types of joined datasets


A join is performed between two Quick Sight *logical tables*, where each logical table contains information about how to fetch data. When editing a dataset in Quick Sight, the join diagram at the top half of the page shows each logical table as a rectangular block.

There are two different types of joined datasets in Quick Sight: same-source and cross-source. A dataset is considered same-source when it doesn't have any joins, or when all of the following conditions are met:
+ If any of the logical tables refer to a Quick Sight data source:
  + All of the logical tables in this dataset must refer to the same Quick Sight data source. It isn't enough for two separate Quick Sight data sources to refer to the same underlying database; it must be the exact same Quick Sight data source. For more information about using a single data source, see [Creating a dataset using an existing data source](create-a-data-set-existing.md).
+ If any of the logical tables refer to a Quick Sight dataset that is a parent dataset:
  + The parent dataset must use direct query.
  + The parent dataset must refer to the same Quick Sight data source.

If the above conditions aren't met, the dataset is considered a cross-source join. 

## Facts about joining datasets


Both same-source and cross-source dataset joins have the following limitations.

### What's the maximum number of tables a joined dataset can contain?


All joined datasets can contain up to 32 tables.

### How large can joined data be?


The maximum allowed size of a join is determined by the query mode and query engine that is used. The list below provides information about the different size limits for the tables to be joined. The size limit applies to all secondary tables combined. There are no join size limits for the primary table.
+ **Same-source tables** – When tables originate from a single query data source, Quick Sight imposes no restrictions on the join size. This does not override join size limitations that the source query engine may have in place.
+ **Cross-source datasets** – This type of join contains tables from different data sources that aren't stored in SPICE. For these types of joins, Quick Sight automatically identifies the largest table in the dataset. The combined size of all other secondary tables must be less than 1 GB.
+ **Datasets stored in SPICE** – This type of join contains tables that are all ingested into SPICE. The combined size of all secondary tables in this join cannot exceed 20 GB.

For more information about SPICE dataset size calculations, see [Estimating the size of SPICE datasets](spice.md#spice-capacity-formula).

### Can a joined dataset use direct query?


Same-source datasets support direct query, assuming there are no other restrictions on using direct query. For example, S3 data sources don't support direct query, so a same-source S3 dataset must still use SPICE.

Cross-source datasets must use SPICE.

### Can calculated fields be used in a join?


All joined datasets can use calculated fields, but calculated fields can't be used in any on-clauses.

### Can geographical data be used in a join?


Same-source datasets support geographical data types, but geographical fields can't be used in any on-clauses.

Cross-source datasets don't support geographical data in any form.

For some examples of joining tables across data sources, see the [Joining across data sources on Amazon Quick Sight](https://aws.amazon.com/blogs/big-data/joining-across-data-sources-on-amazon-quicksight/) post on the Amazon Big Data Blog. 

## Creating a join


Use the following procedure to join tables to use in a dataset. Before you begin, import or connect to your data. You can create a join between any of the data sources supported by Amazon Quick Sight, except Internet of Things (IoT) data. For example, you can add comma-separated value (.csv) files, tables, views, SQL queries, or JSON objects in an Amazon S3 bucket.

**To add one or more joins**

1. Open the dataset that you want to work with.

1. (Optional) Before you get started, decide if you want to disable the autogenerated preview, which is based on a sample of your data. To turn it off, choose **Auto-preview** at the top right. It's turned on by default.

1. If you haven't already chosen a query mode, choose **Query mode**. 

   Choose **SPICE** to store your dataset in [SPICE](spice.md), or choose **Direct query** to pull live data every time. If your dataset contains one or more manually uploaded files, your dataset is automatically stored in SPICE.

   If you choose **SPICE**, the data is ingested into Quick Sight. Visuals that use the dataset run queries in SPICE, instead of on the database.

   If you choose **Direct query**, the data isn't ingested into SPICE. Visuals that use the dataset run queries on the database, instead of in SPICE. 

   If you choose **Direct query**, make sure to set unique keys in the join, if applicable, to improve performance when loading visuals.

1. On the data preparation page, choose **Add data**. 

1. On the **Add data** page that opens, choose one of the following options and complete the steps that follow: 
   + Add data from a dataset:

     1. Choose **Dataset**.

     1. Select a dataset from the list.

     1. Choose **Select**.
   + Add data from a data source:

     1. Choose **Data source**.

     1. Select a data source from the list.

     1. Choose **Select**.

     1. Select a table from the list.

     1. Choose **Select**.
   + Create self-joins by adding a table multiple times. A counter appears after the name, for example **Product**, **Product (2)**, and **Product (3)**. Field names in the **Fields** or **Filters** sections include the same counter so that you know which instance of the table a field came from. 
   + Add a new file by choosing **Upload a file**, and then choose the file that you want to join.

1. (Optional) Choose **Use custom SQL** to open the query editor and write a query for a SQL data source.

1. (Optional) After you add data, interact with each table by choosing its menu icon. Rearrange the tables by dragging and dropping them. 

   A join icon with two red dots indicates a join that isn't yet configured. To create the join, choose the first unconfigured join icon. 

1. (Optional) To change an existing join, reopen **Join configuration** by choosing the join icon between two tables. 

   The **Join Configuration** pane opens. On the join interface, specify the join type and the fields to use to join the tables. 

1. At the bottom of the screen, you can see options to set a field in one table equal to a field in another table. 

   1. In the **Join clauses** section, choose the join column for each table. 

     (Optional) If the tables that you selected join on multiple columns, choose **Add a new join clause**. Doing this adds another row to the join clauses, so you can specify the next set of columns to join. Repeat this process until you have identified all of the join columns for the two data objects.

1. In the **Join configuration** pane, choose the kind of join to apply. If the join fields are a unique key for one or both tables, enable the unique key setting. Unique keys only apply to direct queries, not to SPICE data. 

   For more information about joins, see [Join types](#join-types).

1. Choose **Apply** to confirm your choices. 

   To cancel without making changes, choose **Cancel**.

1. The join icon in the workspace changes to show the new relationship.

1. (Optional) In the **Fields** section, you can use each field's menu to do one or more of the following:
   + **Add a hierarchy** to a geospatial field. 
   + **Include** or **Exclude** the field.
   + **Edit name & description** of the field.
   + **Change data type**.
   + **Add a calculation** (a calculated field).
   + **Restrict access to only me**, so only you can see it. This can be helpful when you are adding fields to a dataset that's already in use.

1. (Optional) In the **Filters** section, you can add or edit filters. For more information, see [Filtering data in Amazon Quick Sight](adding-a-filter.md).

## Join types


Amazon Quick Sight supports the following join types:
+ Inner joins
+ Left and right outer joins
+ Full outer joins

Let's take a closer look at what these join types do with your data. For our example data, we're using the following tables, named `safety-rating` and `widget`. 

```
SELECT * FROM safety-rating

rating_id	safety_rating
1		    A+
2		    A
3		    A-
4		    B+
5		    B

SELECT * FROM widget

widget_id	   widget	safety_rating_id
1		    WidgetA		3
2		    WidgetB		1
3		    WidgetC		1
4		    WidgetD		2
5		    WidgetE
6		    WidgetF		5
7		    WidgetG
```

### Inner joins


Use an inner join when you want to see only the data where there is a match between two tables. For example, suppose that you perform an inner join on the **safety-rating** and **widget** tables.

In the following result set, widgets without safety ratings are removed, and safety ratings without associated widgets are removed. Only the rows that match perfectly are included.

```
SELECT * FROM safety-rating
INNER JOIN widget
ON safety_rating.rating_id = widget.safety_rating_id

rating_id    safety_rating    widget_id    widget        safety_rating_id
3	        A-                1        WidgetA        3
1	        A+                2        WidgetB        1
1	        A+                3        WidgetC        1
2	        A                 4        WidgetD        2
5	        B                 6        WidgetF        5
```

### Left and right outer joins


These are also known simply as left joins and right joins. Use a left or right outer join when you want to see all the data from one table, and only the matching rows from the other table. 

In a graphical interface, you can see which table is on the right or the left. In a SQL statement, the first table is considered to be on the left. Therefore, choosing a left outer join as opposed to a right outer join depends only on how the tables are laid out in your query tool.

For example, suppose that you perform a left outer join on `safety-rating` (the left table) and `widget` (the right table). In this case, all `safety-rating` rows are returned, and only matching `widget` rows are returned. You can see blanks in the result set where there is no matching data.

```
SELECT * FROM safety-rating
LEFT OUTER JOIN widget
ON safety_rating.rating_id = widget.safety_rating_id

rating_id    safety_rating    widget_id   widget          safety_rating_id
1	        A+                2        WidgetB   	1
1	        A+                3        WidgetC   	1
2	        A                 4        WidgetD   	2
3	        A-                1        WidgetA   	3
4	        B+
5	        B                 6        WidgetF   	5
```

If you instead use a right outer join, keep the tables in the same order, so that `safety-rating` is on the left and `widget` is on the right. In this case, only matching `safety-rating` rows are returned, and all `widget` rows are returned. You can see blanks in the result set where there is no matching data.

```
SELECT * FROM safety-rating
RIGHT OUTER JOIN widget
ON safety_rating.rating_id = widget.safety_rating_id

rating_id    safety_rating    widget_id   widget          safety_rating_id
3	        A-                1	WidgetA   	 3
1	        A+                2	WidgetB   	 1
1	        A+                3	WidgetC   	 1
2	        A                 4	WidgetD   	 2
                                  5       WidgetE
5	        B                 6	WidgetF   	 5
                                  7       WidgetG
```

### Full outer joins


These are sometimes called just outer joins, but that term can refer to a left outer, right outer, or full outer join. To avoid ambiguity, we use the complete name: full outer join. 

Use a full outer join to see data that matches, plus data from both tables that doesn't match. This type of join includes all rows from both tables. For example, if you perform a full outer join on the `safety-rating` and `widget` tables, all rows are returned. The rows are aligned where they matched, and all extra data is included on separate rows. You can see blanks in the result set where there is no matching data.

```
SELECT * FROM safety-rating
FULL OUTER JOIN widget
ON safety_rating.rating_id = widget.safety_rating_id

rating_id    safety_rating    widget_id   widget         safety_rating_id
1	        A+                2	WidgetB   	1
1	        A+                3	WidgetC   	1
2	        A                 4	WidgetD   	2
3	        A-                1	WidgetA   	3
4	        B+
5	        B                 6	WidgetF   	5
                                  5	WidgetE
                                  7	WidgetG
```
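To make the row-keeping rules concrete, here is a plain-Python emulation of all four join types on the sample tables above. This is only a sketch of the semantics, not how Quick Sight or a database engine executes joins:

```python
# rating_id -> safety_rating, and (widget, safety_rating_id) pairs from the
# sample tables; None stands for the blank safety_rating_id values.
ratings = {1: "A+", 2: "A", 3: "A-", 4: "B+", 5: "B"}
widgets = [("WidgetA", 3), ("WidgetB", 1), ("WidgetC", 1), ("WidgetD", 2),
           ("WidgetE", None), ("WidgetF", 5), ("WidgetG", None)]

# Inner join: only rows with a match on both sides.
inner = [(w, ratings[r]) for w, r in widgets if r in ratings]

# Left outer join (safety-rating on the left): every rating, with its
# matched widgets where they exist, otherwise a blank (None).
left = [(s, w) for rid, s in ratings.items()
        for w in ([w for w, r in widgets if r == rid] or [None])]

# Right outer join: every widget, with its rating or a blank.
right = [(w, ratings.get(r)) for w, r in widgets]

# Full outer join: every widget row, plus any rating with no widget at all.
used = {r for _, r in widgets}
full = right + [(None, s) for rid, s in ratings.items() if rid not in used]

print(len(inner), len(left), len(right), len(full))  # 5 6 7 8
```

The row counts match the SQL result sets shown earlier: 5 rows for the inner join, 6 for the left outer join, 7 for the right outer join, and 8 for the full outer join.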

# Preparing data fields for analysis in Amazon Quick Sight

Before you start analyzing and visualizing your data, you can prepare the fields (columns) in your dataset for analysis. You can edit field names and descriptions, change the data type for fields, set up drill-down hierarchies for fields, and more.

Use the following topics to prepare fields in your dataset.

**Topics**
+ [

# Editing field names and descriptions
](changing-a-field-name.md)
+ [

# Setting fields as dimensions or measures
](setting-dimension-or-measure.md)
+ [

# Changing a field data type
](changing-a-field-data-type.md)
+ [

# Adding drill-downs to visual data in Quick Sight
](adding-drill-downs.md)
+ [

# Selecting fields
](selecting-fields.md)
+ [

# Organizing fields into folders in Amazon Quick Sight
](organizing-fields-folder.md)
+ [

# Mapping and joining fields
](mapping-and-joining-fields.md)

# Editing field names and descriptions


You can change any field name and description from what is provided by the data source. If you change the name of a field used in a calculated field, make sure also to change it in the calculated field function. Otherwise, the function fails.

**To change a field name or description**

1. In the **Fields** pane of the data prep page, choose the three-dot icon on the field that you want to change. Then choose **Edit name & description**.

1. Enter the new name or description that you want to change, and choose **Apply**.

You can also change the name and description of a field on the data prep page. To do this, select the column header of the field that you want to change in the **Dataset** table in that page's lower half. Then make any changes there.

# Setting fields as dimensions or measures


In the **Field list** pane, dimension fields have blue icons and measure fields have green icons. *Dimensions* are text or date fields that can be items, like products, or attributes that are related to measures. You can use dimensions to partition these items or attributes, like sales date for sales figures. *Measures* are numeric values that you use for measurement, comparison, and aggregation. 

In some cases, Quick Sight interprets a field as a measure when you want to use it as a dimension (or the other way around). If so, you can change the setting for that field.

Changing a field's measure or dimension setting changes it for all visuals in the analysis that use that dataset. However, it doesn't change it in the dataset itself.

## Changing a field's dimension or measure setting


Use the following procedure to change a field's dimension or measure setting.

**To change a field's dimension or measure setting**

1. In the **Field list** pane, hover over the field that you want to change.

1. Choose the selector icon to the right of the field name, and then choose **Convert to dimension** or **Convert to measure** as appropriate.

# Changing a field data type


When Quick Sight retrieves data, it assigns each field a data type based on the data in the field. The possible data types are as follows:
+ Date – The date data type is used for date data in a supported format. For information about the date formats Quick Sight supports, see [Data source quotas](data-source-limits.md).
+ Decimal – The decimal data type is used for numeric data that requires one or more decimal places of precision, for example 18.23. The decimal data type supports values with up to four decimal places to the right of the decimal point. Values with a higher scale are truncated to the fourth decimal place in two cases: when the values are displayed in data preparation or analyses, and when they are imported into Quick Sight. For example, 13.00049 is truncated to 13.0004.
+ Geospatial – The geospatial data type is used for geospatial data, for example longitude and latitude, or cities and countries.
+ Integer – The integer data type is used for numeric data that contains only integers, for example 39.
+ String – The string data type is used for nondate alphanumeric data.
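The decimal truncation described above can be illustrated with Python's `decimal` module. This is a sketch of the documented behavior, not Quick Sight code:

```python
from decimal import Decimal, ROUND_DOWN

def truncate_scale_4(value):
    # Truncate (never round) to four decimal places, matching the
    # documented behavior for the decimal data type.
    return Decimal(str(value)).quantize(Decimal("0.0001"), rounding=ROUND_DOWN)

print(truncate_scale_4(13.00049))  # -> 13.0004, not 13.0005
```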

Quick Sight reads a small sample of rows in the column to determine the data type. The data type that occurs most often in the sample is the suggested type. In some cases, a column that contains mostly numbers might include blank values, which Quick Sight treats as strings. In these cases, the string data type can end up as the most frequent type in the sample of rows. You can manually change the data type of the column to integer. Use the following procedures to learn how.

## Changing a field data type during data prep

During data preparation, you can change the data type of any field from the data source. On the **Change data type** menu, you can change calculated fields that don't include aggregations to geospatial types. You can make other changes to the data type of a calculated field by modifying its expression directly. Quick Sight converts the field data according to the data type that you choose. Rows that contain data that is incompatible with that data type are skipped. For example, suppose that you convert the following field from String to Integer.

```
10020
36803
14267a
98457
78216b
```

All records containing alphabetic characters in that field are skipped, as shown following.

```
10020
36803
98457
```
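In plain Python, the skip-on-conversion behavior looks like the following. This is an illustration of the rule, not Quick Sight's implementation:

```python
def cast_column_to_int(values):
    """Keep only the values that parse cleanly as integers; rows with
    incompatible data are skipped rather than converted."""
    kept = []
    for v in values:
        try:
            kept.append(int(v))
        except ValueError:
            pass  # the row is skipped, as in the example above
    return kept

print(cast_column_to_int(["10020", "36803", "14267a", "98457", "78216b"]))
# -> [10020, 36803, 98457]
```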

If you have a database dataset with fields whose data types aren't supported by Quick Sight, use a SQL query during data preparation. Then use `CAST` or `CONVERT` commands (depending on what is supported by the source database) to change the field data types. For more information about adding a SQL query during data preparation, see [Using SQL to customize data](adding-a-SQL-query.md). For more information about how different source data types are interpreted by Quick Sight, see [Supported data types from external data sources](supported-data-types-and-values.md#supported-data-types).

You might have numeric fields that act as dimensions rather than metrics, for example ZIP codes and most ID numbers. In these cases, it's helpful to give them a string data type during data preparation. Doing this lets Quick Sight understand that they are not useful for performing mathematical calculations and can only be aggregated with the `Count` function. For more information about how Quick Sight uses dimensions and measures, see [Setting fields as dimensions or measures](setting-dimension-or-measure.md).

In [SPICE](spice.md), numbers converted from a decimal type to an integer are truncated by default. If you want to round your numbers instead, you can create a calculated field that uses the [`round`](round-function.md) function. To see whether numbers are rounded or truncated before they are ingested into SPICE, check your database engine.
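The difference between truncation and rounding can be seen in plain Python. This is an illustration of the two behaviors only, with a made-up value, not SPICE itself:

```python
import math

value = 7.89
print(math.trunc(value))  # 7 -- truncation, the default behavior described above
print(round(value))       # 8 -- what a round()-based calculated field would yield
```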

**To change a field data type during data prep**

1. From the Quick Sight homepage, choose **Data** at left. In the **Data** tab, choose the dataset that you want, and then choose **Edit dataset**.

1. In the data preview pane, choose the data type icon under the field you want to change.

1. Choose the target data type. Only data types other than the one currently in use are listed.

## Changing a field data type in an analysis

You can use the **Field list** pane, visual field wells, or on-visual editors to change numeric field data types within the context of an analysis. Numeric fields default to displaying as numbers, but you can choose to have them display as currency or as a percentage instead. You can't change the data types for string or date fields.

Changing a field's data type in an analysis changes it for all visuals in the analysis that use that dataset. However, it doesn't change it in the dataset itself.

**Note**  
If you are working in a pivot table visual, applying a table calculation changes the data type of the cell values in some cases. This type of change occurs if the data type doesn't make sense with the applied calculation.   
For example, suppose that you apply the `Rank` function to a numeric field that you modified to use a currency data type. In this case, the cell values display as numbers rather than currency. Similarly, if you apply the `Percent difference` function instead, the cell values display as percentages rather than currency. 

**To change a field's data type**

1. Choose one of the following options:
   + In the **Field list** pane, hover over the numeric field that you want to change. Then choose the selector icon to the right of the field name.
   + On any visual that contains an on-visual editor associated with the numeric field that you want to change, choose that on-visual editor.
   + Expand the **Field wells** pane, and then choose the field well associated with the numeric field that you want to change.

1. Choose **Show as**, and then choose **Number**, **Currency**, or **Percent**.

# Adding drill-downs to visual data in Quick Sight

All visual types except tables and pivot tables offer the ability to create a hierarchy of fields for a visual element. The hierarchy lets you drill down to see data at different levels of the hierarchy. For example, you can associate the country, state, and city fields with the x-axis on a bar chart. Then, you can drill down or up to see data at each of those levels. At each level you drill down to, the data displayed is refined by the value of the field you drilled down on. For example, if you drill down on the state of California, you see data on all of the cities in California.

The field wells that you can use to create drill-downs vary by visual type. Refer to the topic for each visual type to learn more about its drill-down support. 

Drill-down functionality is added automatically for dates when you associate a date field with the drill-down field well of a visual. In this case, you can always drill up and down through the levels of date granularity. Drill-down functionality is also added automatically for geospatial groupings, after you define these in the dataset.

Use the following table to identify the field wells or on-visual editors that support drill-down for each visual type.



| Visual type | Field well or on-visual editor | 
| --- | --- | 
| Bar charts (all horizontal) | Y axis and Group/Color | 
| Bar charts (all vertical) | X axis and Group/Color | 
| Combo charts (all) | X axis and Group/Color | 
| Geospatial charts | Geospatial and Color | 
| Heat map | Rows and Columns | 
| KPIs | Trend Group | 
| Line charts (all) | X axis and Color | 
| Pie chart | Group/Color | 
| Scatter plot | Group/Color | 
| Tree map | Group by | 

**Important**  
Drill-downs are not supported for tables or pivot tables.

## Adding a drill-down


Use the following procedure to add drill-down levels to a visual.

**To add drill-down levels to a visual**

1. On the analysis page, choose the visual that you want to add drill-downs to.

1. Drag a field item into a **Field well**.

1. If your dataset has a defined hierarchy, you can drag the entire hierarchy into the field well as one. An example is geospatial or coordinate data. In this case, you don't need to follow the remaining steps.

   If you don't have a predefined hierarchy, you can create one in your analysis, as described in the remaining steps.

1. Drag a field that you want to use in the drill-down hierarchy to an appropriate field well, depending on the visual type. Make sure that the label for the dragged field says **Add drill-down layer**. Position the dragged field above or below the existing field based on where you want it to be in the hierarchy you're creating. 

1. Continue until you have added all of the levels of hierarchy that you want. To remove a field from the hierarchy, choose the field, and then choose **Remove**.

1. To drill down or up to see data at a different level of the hierarchy, choose an element on the visual (like a line or bar), and then choose **Drill down to <lower level>** or **Drill up to <higher level>**. For example, from the `car-make` level you can drill down to `car-model` to see data at that level. If you drill down to `car-model` from the **Ford** `car-make`, you see only `car-model`s in that car-make.

   After you drill down to the `car-model` level, you can then drill down further to see `make-year` data, or go back up to `car-make`. If you drill down to `make-year` from the bar representing **Ranger**, you see only years for that model of car.

# Selecting fields


When you prepare data, you can select one or more fields to perform an action on them, such as excluding them or adding them to a folder.

To select one or more fields in the data preparation pane, click or tap the field or fields in the **Fields** pane at left. You can then choose the field menu (the three dots) to the right of the field name and choose an action to take. The action is performed on all selected fields.

You can select or deselect all fields at once by choosing either **All** or **None** at the top of the **Fields** pane.

If you edit a dataset and exclude a field that is used in a visual, that visual breaks. You can fix it the next time you open that analysis.

## Searching for fields


If you have a long field list in the **Fields** pane, you can locate a specific field by entering a search term in the **Search fields** box. Any field whose name contains the search term is shown. 

Search is case-insensitive and wildcards are not supported. Choose the cancel icon (**X**) to the right of the search box to return to viewing all fields.

# Organizing fields into folders in Amazon Quick Sight

When preparing your data in Quick Sight, you can use folders to organize your fields for multiple authors across your enterprise. Arranging fields into folders and subfolders can make it easier for authors to find and understand fields in your dataset.

You can create folders while preparing your dataset, or when editing a dataset. For more information about creating a new dataset and preparing it, see [Creating datasets](creating-data-sets.md). For more information about opening an existing dataset for data preparation, see [Editing datasets](edit-a-data-set.md).

While performing an analysis, authors can expand and collapse folders, search for specific fields within folders, and see your descriptions of folders on the folder menu. Folders appear at the top of the **Fields** pane in alphabetical order.

## Creating a folder


Use the following procedure to create a new folder in the **Fields** pane.

**To create a new folder**

1. On the data preparation page, in the **Fields** pane, select the field or fields that you want to place in a folder, open the field menu (the three dots), and choose **Add to folder**. 

   To select more than one field at a time, press the Ctrl key while you select (Command key on Mac).

1. On the **Add to folder** page that appears, choose **Create a new folder** and enter a name for the new folder.

1. Choose **Apply**.

The folder appears at the top of the **Fields** pane with the fields that you chose inside it. Fields inside folders are arranged in alphabetical order.

## Creating a subfolder


To further organize your data fields in the **Fields** pane, you can create subfolders within parent folders. 

**To create a subfolder**

1. On the data preparation page, in the **Fields** pane, select the field menu for a field already in a folder and choose **Move to folder**.

1. On the **Move to folder** page that appears, choose **Create a new folder** and enter a name for the new folder.

1. Choose **Apply**.

The subfolder appears within the parent folder at the top of the list of fields. Subfolders are arranged in alphabetical order.

## Adding fields to an existing folder


Use the following procedure to add fields to an existing folder in the **Fields** pane.

**To add one or more fields to a folder**

1. On the data preparation page, in the **Fields** pane, select the fields that you want to add to a folder. 

   To select more than one field at a time, press the Ctrl key while you select (Command key on Mac).

1. On the field menu, choose **Add to folder**.

1. On the **Add to folder** page that appears, choose a folder for **Existing folder**.

1. Choose **Apply**.

The field or fields are added to the folder.

## Moving fields between folders


Use the following procedure to move fields between folders in the **Fields** pane.

**To move fields between folders**

1. On the data preparation page, in the **Fields** pane, select the fields that you want to move to another folder. 

   To select more than one field at a time, press the Ctrl key while you select (Command key on Mac).

1. On the field menu, choose **Move to folder**.

1. On the **Move to folder** page that appears, choose a folder for **Existing folder**.

1. Choose **Apply**.

## Removing fields from a folder


Use the following procedure to remove fields from a folder in the **Fields** pane. Removing a field from a folder doesn't delete the field.

**To remove fields from a folder**

1. On the data preparation page, in the **Fields** pane, select the fields that you want to remove.

1. On the field menu, choose **Remove from folder**.

The fields that you selected are removed from the folder and placed back in the list of fields in alphabetical order.

## Editing a folder name and adding a folder description


You can edit the name or add a description of a folder to provide context about the data fields inside it. The folder name appears in the **Fields** pane. While performing an analysis, authors can read your folder's description when they select the folder menu in the **Fields** pane.

**To edit a folder name or edit or add a description for a folder**

1. On the data preparation page, in the **Fields** pane, select the folder menu for the folder that you want to edit and choose **Edit name & description**.

1. On the **Edit folder** page that appears, do the following:
   + For **Name**, enter a name for the folder.
   + For **Description**, enter a description of the folder.

1. Choose **Apply**.

## Moving folders


You can move folders and subfolders to new or existing folders in the **Fields** pane. 

**To move a folder**

1. On the data preparation page, in the **Fields** pane, choose **Move folder** on the folder menu.

1. On the **Move folder** page that appears, do one of the following: 
   + Choose **Create a new folder** and enter a name for the folder.
   + For **Existing folder**, choose a folder.

1. Choose **Apply**.

The folder appears within the folder that you chose in the **Fields** pane.

## Removing folders from the fields pane


Use the following procedure to remove a folder from the **Fields** pane.

**To remove a folder**

1. On the data preparation page, in the **Fields** pane, choose **Remove folder** on the folder menu.

1. On the **Remove folder?** page that appears, choose **Remove**.

The folder is removed from the **Fields** pane. Any fields that were in the folder are placed back in the list of fields in alphabetical order. Removing folders doesn't exclude fields from view or delete fields from the dataset.

# Mapping and joining fields


When you use different datasets together in Quick Sight, you can simplify the process of mapping fields or joining tables during the data preparation stage. As part of that stage, you should already be verifying that each field has the correct data type and an appropriate name. If you also know which datasets are going to be used together, you can take a couple of extra steps to make your work easier later on. 

## Mapping fields


Quick Sight can automatically map fields between datasets in the same analysis. The following tips can help make it easier for Quick Sight to automatically map fields between datasets, for example if you are creating a filter action across datasets:
+ Matching field names – Field names must match exactly, with no differences in case, spacing, or punctuation. You can rename fields that describe the same data, so an automatic mapping is accurate.
+ Matching data types – Fields must have the same data type for automatic mapping. You can change the data types while you are preparing the data. This step also gives you the opportunity to discover whether you need to filter out any data that isn't the correct data type.
+ Using calculated fields – You can use calculated fields to create a matching field, and give it the correct name and data type for automatic mapping.
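For example, if one dataset stores a customer ID as a number and another stores it as a string, a calculated field like the following can create a string version in the first dataset. This uses Quick Sight's `toString` function; the `customer_id` field name is hypothetical:

```
toString({customer_id})
```

Give the calculated field exactly the same name as the matching field in the other dataset, so that automatic mapping can match the two fields by name and data type.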

**Note**  
After an automatic mapping exists, you can rename a field without breaking the field mapping. However, if you change the data type, the mapping is broken.

For more information on field mapping for filter actions across datasets, see [Creating and editing custom actions in Amazon Quick Sight](custom-actions.md).

## Joining fields


You can create joins between data from different data sources, including files or databases. The following tips can help make it easier for you to join data from different files or data sources:
+ Similar field names – It is simpler to join fields when you can see what should match; for example, **Order ID** and **order-id** seem as if they should be the same. But if one is a work order and the other is a purchase order, the fields probably contain different data. If possible, make sure that the files and tables that you want to join have field names that make it clear what data they contain. 
+ Matching data types – Fields must have the same data type before you can join on them. Make sure that the files and tables that you want to join have matching data types in the join fields. You can't use a calculated field for a join. Also, you can't join two existing datasets. To create the joined dataset, you access the source data directly.
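Conceptually, a join that you create during data preparation behaves like the following SQL. This is a sketch with hypothetical table and column names; if the types of the join fields don't match in the source, you can fix them with a custom SQL query or by changing the data type during preparation:

```sql
-- Both join columns must share a data type; here the customer ID
-- is cast to an integer on one side so that the types line up.
SELECT o.order_id, o.order_total, c.customer_name
FROM orders AS o
JOIN customers AS c
  ON o.customer_id = CAST(c.customer_id AS INTEGER);
```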

For more information on joining data across data sources, see [Joining data](joining-data.md).

# Filtering data in Amazon Quick Sight

You can use filters to refine the data in a dataset or an analysis. For example, you can create a filter on a region field that excludes data from a particular region in a dataset. You can also add a filter to an analysis, such as a filter on the range of dates that you want to include in any visuals in your analysis.

When you create a filter in a dataset, that filter applies to the entire dataset. Any analyses and subsequent dashboards created from that dataset contain the filter. If someone creates a dataset from your dataset, the filter is also present in the new dataset.

When you create a filter in an analysis, that filter applies only to that analysis and to any dashboards that you publish from it. If someone duplicates your analysis, the filter persists in the new analysis. In analyses, you can scope filters to a single visual, some visuals, all visuals that use the dataset, or all applicable visuals.

Also, when you create filters in an analysis, you can add a filter control to your dashboard. For more information about filter controls, see [Adding filter controls to analysis sheets](filter-controls.md).

Each filter you create applies only to a single field. You can apply filters to both regular and calculated fields.

There are several types of filters you can add to datasets and analyses. For more information about the types of filters you can add, and some of their options, see [Filter types in Amazon Quick Sight](filtering-types.md).

If you create multiple filters, all top-level filters apply together using AND. If you group filters by adding them inside a top-level filter, the filters in the group apply using OR.

Amazon Quick Sight applies all of the enabled filters to the field. For example, suppose that there is one filter of `state = WA` and another filter of `sales >= 500`. Then the dataset or analysis contains only records that meet both of those criteria. If you disable one of these filters, only the other filter applies.
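In SQL terms, the combined filters behave roughly like the following `WHERE` clause. This is a sketch; the grouped `segment` filter is a hypothetical example of a filter group:

```sql
-- Top-level filters combine with AND; filters grouped inside a
-- top-level filter combine with OR.
WHERE state = 'WA'
  AND sales >= 500
  AND (segment = 'Retail' OR segment = 'Online')  -- hypothetical filter group
```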

Take care that multiple filters applied to the same field aren't mutually exclusive.

Use the following sections to learn how to view, add, edit, and delete filters.

**Topics**
+ [

# Viewing existing filters
](viewing-filters-data-prep.md)
+ [

# Adding filters
](add-a-filter-data-prep.md)
+ [

# Cross-sheet filters and controls
](cross-sheet-filters.md)
+ [

# Filter types in Amazon Quick Sight
](filtering-types.md)
+ [

# Adding filter controls to analysis sheets
](filter-controls.md)
+ [

# Editing filters
](edit-a-filter-data-prep.md)
+ [

# Enabling or disabling filters
](disable-a-filter-data-prep.md)
+ [

# Deleting filters
](delete-a-filter-data-prep.md)

# Viewing existing filters


When you edit a dataset or open an analysis, you can view any existing filters that were created. Use the following procedures to learn how.

## Viewing filters in datasets


1. Open the [Quick Sight console](https://quicksight.aws.amazon.com/).

1. From the Quick Sight homepage, choose **Data** at left.

1. In the **Datasets** tab, choose the dataset that you want, and then choose **Edit dataset**.

1. On the data preparation page that opens, choose **Filters** at lower left to expand the **Filters** section.

   Any filters that are applied to the dataset appear here. If a single field has multiple filters, they are grouped together. They display in order of creation date, with the oldest filter on top.

## Viewing filters in analyses


Use the following procedure to view filters in analyses.

**To view a filter in an analysis**

1. From the Quick Sight homepage, choose **Analyses**.

1. On the **Analyses** page, choose the analysis that you want to work with.

1. In the analysis, choose the **Filter** icon to open the **Filters** pane.

   Any filters applied to the analysis appear here.

   The way that a filter is scoped is listed at the bottom of each filter. For more information about scoping filters, see [Adding filters](add-a-filter-data-prep.md).

# Adding filters


You can add filters to a dataset or an analysis. Use the following procedures to learn how.

## Adding filters to datasets


Use the following procedure to add filters to datasets.

**To add a filter to a dataset**

1. Open the [Quick Sight console](https://quicksight.aws.amazon.com/).

1. From the Quick Sight homepage, choose **Data** at left.

1. In the **Datasets** tab, choose the dataset that you want, and then choose **Edit dataset**.

1. On the data preparation page that opens, choose **Add filter** at lower left, and then choose a field that you want to filter.

   The filter is added to the **Filters** pane.

1. Choose the new filter in the pane to configure the filter. Or you can choose the three dots to the right of the new filter and choose **Edit**.

   Depending on the data type of the field, your options for configuring the filter vary. For more information about the types of filters that you can create and their configurations, see [Filter types in Amazon Quick Sight](filtering-types.md).

1. When finished, choose **Apply**.
**Note**  
The data preview shows you the results of your combined filters only as they apply to the first 1,000 rows. If all of the first 1,000 rows are filtered out, then no rows show in the preview. This effect occurs even when rows after the first 1,000 aren't filtered out.

## Adding filters in analyses


Use the following procedure to add filters to analyses.

**To add a filter to an analysis**

1. Open the [Quick Sight console](https://quicksight.aws.amazon.com/).

1. From the Quick Sight homepage, choose **Analyses**.

1. On the **Analyses** page, choose the analysis that you want to work with.

1. In the analysis, choose the **Filter** icon to open the **Filters** pane, and then choose **ADD**.

1. Choose the new filter in the pane to configure it. Or you can choose the three dots to the right of the new filter and choose **Edit**.

1. In the **Edit filter** pane that opens, for **Applied to**, choose one of the following options.
   + **Single visual** – The filter applies to the selected item only.
   + **Single sheet** – The filter applies to a single sheet.
   + **Cross sheet** – The filter applies to multiple sheets in the dataset.

   Depending on the data type of the field, your remaining options for configuring the filter vary. For more information about the types of filters you can create and their configurations, see [Filter types in Amazon Quick Sight](filtering-types.md).

# Cross-sheet filters and controls


Cross-sheet filters and controls are filters that are scoped either to your entire analysis or dashboard, or to multiple sheets within it.

## Filters


**Creating a Cross-Sheet Filter**

1. After you have [added a filter](https://docs.amazonaws.cn/quicksight/latest/user/add-a-filter-data-prep.html#add-a-filter-data-prep-analyses), update the scope of the filter to cross-sheet. By default, the filter applies to all of the sheets in your analysis.

1. If the **Apply cross-datasets** box is checked, the filter is applied to all applicable visuals, from up to 100 different datasets, on all sheets in the filter scope.

1. If you want to customize which sheets the filter is applied to, choose the cross-sheet icon. You can then view the sheets that the filter is currently applied to, or toggle on **Custom select sheets**.

1. When you enable **Custom select sheets**, you can select which sheets to apply the filter to.

1. Follow the steps at [Editing filters in analyses](https://docs.amazonaws.cn/quicksight/latest/user/edit-a-filter-data-prep.html#edit-a-filter-data-prep-analyses). Your changes are applied to the filter on all of the sheets that you have selected. This includes newly added sheets if the filter is scoped to your entire analysis.

**Removing a Cross-Sheet Filter**

**Deleting**

If you have no controls created from these filters, see [Deleting filters in analyses](https://docs.amazonaws.cn/quicksight/latest/user/delete-a-filter-data-prep.html#delete-a-filter-data-prep-analyses).

If you have controls created from these filters, do the following:


1. Follow the instructions at [Deleting filters in analyses](https://docs.amazonaws.cn/quicksight/latest/user/delete-a-filter-data-prep.html#delete-a-filter-data-prep-analyses).

1. If you choose **Delete Filter and Controls**, the controls are deleted from all pages. This might affect the layout of your analysis. Alternatively, you can remove the controls individually. 

**Downscoping**

If you want to remove a cross-sheet filter, you can also do so by changing the filter scope:


1. Follow the instructions at [Editing filters in analyses](https://docs.aws.amazon.com/quicksight/latest/user/edit-a-filter-data-prep.html#edit-a-filter-data-prep-analyses) to get to the filter. 

1. One of the edits you can make is changing the scope. You can switch to **Single sheet** or **Single visual**. You can also remove a sheet from the cross-sheet selection or from the custom sheet selection.

1. If there are controls, a modal warns you that controls will be removed in bulk from any sheets where the filter no longer applies, which can affect your layout. You can also remove the controls individually. For more information, see [Removing a Cross-Sheet Control](#cross-sheet-removing-control).

1. If you add controls to the **Top of all sheets in filter scope** and the filter is scoped to your entire analysis, new sheets include the new control by default.

## Controls


### Creating a Cross-Sheet Control


**New filter control**

1. Create a cross-sheet filter. For more information, see [Filters](#filters).

1. From the three-dot menu, choose **Add control**. When you hover over this option, you see three options:
   + **Top of all sheets in filter scope**
   + **Top of this sheet**
   + **Inside this sheet**

   If you want to add the control inside multiple sheets, you can do so sheet by sheet. Or you can add it to the top of all sheets and then use the **Move to sheet** option on each control. For more information, see [Editing a Cross-Sheet Control](#cross-sheet-controls-editing-control).

**Increasing Scope of Existing Control**

1. Navigate to the existing filter in the analysis.

1. For **Applied to**, change the scope of the filter to **Cross sheet**.

1. If a control has already been created from the filter, a modal appears. If you select the check box in the modal, controls are added in bulk to the top of all sheets in the filter scope. This doesn't change the position of a control that is already on a sheet.

### Editing a Cross-Sheet Control


1. Go to the cross-sheet control and select the three-dot menu if the control is pinned to the top or the edit pencil icon if the control is on the sheet. You will be presented with the following options:
   + **Go to filter** (which directs you to the cross-sheet filter for you to edit or review)
   + **Move to sheet** (which moves the control into the analysis pane)
   + **Reset** 
   + **Refresh** 
   + **Edit** 
   + **Remove** 

1. Choose **Edit**. This brings up the **Format Control** pane on the right side of your analysis.

1. You can then edit your control. Settings in the top section, labeled **Cross-sheet settings**, apply to all instances of the control. Settings outside this section apply only to the specific control that you're editing. For instance, **Relevant value** is not a cross-sheet control setting. 

1. To see the sheets that this control is on, as well as its location (top or sheet) for each sheet, choose the **Sheets** option, which shows the number of sheets (for example, **Sheets(8)**).

### Removing a Cross-Sheet Control


You can remove controls in two places. First, from the control:

1. Go to the cross-sheet control and select the three-dot menu if the control is pinned to the top or the edit pencil icon if the control is on the sheet. You will be presented with the following options:
   + **Go to filter** (which directs you to the cross-sheet filter for you to edit or review)
   + **Move to sheet** (which moves the control into the analysis pane)
   + **Reset** 
   + **Refresh** 
   + **Edit** 
   + **Remove** 

1. Choose **Remove**.

Second, you can remove controls from the filter:

1. Choose the three-dot menu on the cross-sheet filter that the cross-sheet controls are created from. Instead of an option to **Add control**, there is now an option to **Manage control**.

1. Hover over **Manage control**. You will be presented with the following options:
   + **Move inside this sheet** 
   + **Top of this sheet**

   These options apply only to the control on the current sheet, depending on where that control is located. If you don't have controls on all of the sheets within the filter scope, you also get the option to **Add to top of all sheets in filter scope**. This option doesn't move sheet controls to the top of the sheet if you have already added them to the sheet in the analysis. You also get the options to **Remove from this sheet** or **Remove from all sheets**.

# Filter types in Amazon Quick Sight

You can create several different types of filters in Quick Sight. The type of filter that you create depends mostly on the data type of the field that you want to filter.

In datasets, you can create the following types of filters:
+ Text filters
+ Numeric filters
+ Date filters

In analyses, you can create the same types of filters as you can in datasets. You can also create:
+ Group filters with and/or operators
+ Cascading filters
+ Nested filters

Use the following sections to learn more about each type of filter you can create and some of their options.

**Topics**
+ [

# Adding text filters
](add-a-text-filter-data-prep.md)
+ [

# Adding nested filters
](add-a-nested-filter-data-prep.md)
+ [

# Adding numeric filters
](add-a-numeric-filter-data-prep.md)
+ [

# Adding date filters
](add-a-date-filter2.md)
+ [

# Adding filter conditions (group filters) with AND and OR operators
](add-a-compound-filter.md)
+ [

# Creating cascading filters
](use-a-cascading-filter.md)

# Adding text filters

When you add a filter using a text field, you can create the following types of text filters:
+ **Filter list** (Analyses only) – This option creates a filter that you can use to select one or more field values to include or exclude from all the available values in the field. For more information about creating this type of text filter, see [Filtering text field values by a list (analyses only)](#text-filter-list).
+ **Custom filter list** – With this option, you can enter one or more field values to filter on, and whether you want to include or exclude records that contain those values. The values that you enter must match the actual field values exactly for the filter to be applied to a given record. For more information about creating this type of text filter, see [Filtering text field values by a custom list](#add-text-custom-filter-list-data-prep).
+ **Custom filter** – With this option, you enter a single value that the field value must match in some way. You can specify that the field value must equal, not equal, start with, end with, contain, or not contain the value that you specify. If you choose an equal comparison, the specified value and actual field value must match exactly in order for the filter to be applied to a given record. For more information about creating this type of text filter, see [Filtering a single text field value](#add-text-filter-custom-list-data-prep).
+ **Top and bottom filter** (Analyses only) – You can use this option to show the top or bottom *n* values of one field, ranked by the values in another field. For example, you might show the top five salespeople based on revenue. You can also use a parameter to allow dashboard users to dynamically choose how many top or bottom ranking values to show. For more information about creating top and bottom filters, see [Filtering a text field by a top or bottom value (analyses only)](#add-text-filter-top-and-bottom).
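The custom filter comparisons above behave roughly like the following SQL predicates. This is an illustrative sketch; the `product` field and the `Widget` value are hypothetical:

```sql
-- Rough SQL equivalents of the Custom filter conditions:
WHERE product = 'Widget'       -- Equals
WHERE product <> 'Widget'      -- Does not equal
WHERE product LIKE 'Wid%'      -- Starts with
WHERE product LIKE '%get'      -- Ends with
WHERE product LIKE '%idge%'    -- Contains
```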

## Filtering text field values by a list (analyses only)

In analyses, you can filter a text field by selecting values to include or exclude from a list of all values in the field.

**To filter a text field by including and excluding values**

1. Create a new filter using a text field. For more information about creating filters, see [Adding filters](add-a-filter-data-prep.md).

1. In the **Filters** pane, choose the new filter to expand it.

1. For **Filter type**, choose **Filter list**.

1. For **Filter condition**, choose **Include** or **Exclude**.

1. Choose the field values that you want to filter on. To do this, select the check box in front of each value.

   If there are too many values to choose from, enter a search term into the box above the checklist and choose **Search**. Search terms are case-insensitive and wildcards aren't supported. Any field value that contains the search term is returned. For example, searching on L returns al, AL, la, and LA.

   The values display alphabetically in the control, unless there are more than 1,000 distinct values. Then the control displays a search box instead. Each time that you search for the value that you want to use, it starts a new query. If the results contain more than 1,000 values, you can scroll through the values with pagination.

1. When finished, choose **Apply**.

## Filtering text field values by a custom list

You can specify one or more field values to filter on, and whether you want to include or exclude records that contain those values. The specified value and actual field value must match exactly for the filter to be applied to a given record.

**To filter text field values by a custom list**

1. Create a new filter using a text field. For more information about creating filters, see [Adding filters](add-a-filter-data-prep.md).

1. In the **Filters** pane, choose the new filter to expand it.

1. For **Filter type**, choose **Custom filter list**.

1. For **Filter condition**, choose **Include** or **Exclude**.

1. For **List**, enter a value in the text box. The value must match an existing field value exactly.

1. (Optional) To add additional values, enter them in the text box, one per line.

1. For **Null options**, choose **Exclude nulls**, **Include nulls**, or **Nulls only**.

1. When finished, choose **Apply**.

## Filtering a single text field value

With the **Custom filter** filter type, you specify a single value that the field value must equal or not equal, or must match partially. If you choose an equal comparison, the specified value and actual field value must match exactly for the filter to be applied to a given record.

**To filter a text field by a single value**

1. Create a new filter using a text field. For more information about creating filters, see [Adding filters](add-a-filter-data-prep.md).

1. In the **Filters** pane, choose the new filter to expand it.

1. For **Filter type**, choose **Custom filter**.

1. For **Filter condition**, choose one of the following:
   + **Equals** – When you choose this option, the values included or excluded in the field must match the value that you enter exactly.
   + **Does not equal** – When you choose this option, the values included or excluded in the field must not match the value that you enter.
   + **Starts with** – When you choose this option, the values included or excluded in the field must start with the value that you enter.
   + **Ends with** – When you choose this option, the values included or excluded in the field must end with the value that you enter.
   + **Contains** – When you choose this option, the values included or excluded in the field must contain the whole value that you enter.
   + **Does not contain** – When you choose this option, the values included or excluded in the field must not contain the value that you enter.
**Note**  
Comparison types are case-sensitive.

1. Do one of the following:
   + For **Value**, enter a literal value.
   + Select **Use parameters** to use an existing parameter, and then choose a parameter from the list.

     For parameters to appear in this list, create your parameters first. Usually, you create a parameter, add a control for it, and then add a filter for it. For more information, see [Parameters in Amazon Quick](parameters-in-quicksight.md).

     The values display alphabetically in the control, unless there are more than 1,000 distinct values. Then the control displays a search box instead. Each time that you search for the value that you want to use, it starts a new query. If the results contain more than 1,000 values, you can scroll through the values with pagination.

1. For **Null options**, choose **Exclude nulls**, **Include nulls**, or **Nulls only**.

1. When finished, choose **Apply**.
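For automation, the single-value comparison above has a counterpart in the QuickSight analysis `Definition` API: a `CategoryFilter` with a `CustomFilterConfiguration`. This is a sketch under stated assumptions: the IDs, dataset identifier, and column name are placeholders, and you set either a literal `CategoryValue` or a `ParameterName`, not both:

```python
# Sketch: the console's "Custom filter" type as a CategoryFilter with
# CustomFilterConfiguration. Names and values are placeholders.
custom_filter = {
    "CategoryFilter": {
        "FilterId": "filter-customer-name",
        "Column": {"DataSetIdentifier": "sales", "ColumnName": "Customer"},
        "Configuration": {
            "CustomFilterConfiguration": {
                # One of: EQUALS, DOES_NOT_EQUAL, STARTS_WITH, ENDS_WITH,
                # CONTAINS, DOES_NOT_CONTAIN (comparisons are case-sensitive)
                "MatchOperator": "STARTS_WITH",
                "CategoryValue": "Acme",          # the single literal value
                # "ParameterName": "CustomerParam",  # alternative: use a parameter
                "NullOption": "NON_NULLS_ONLY",   # Exclude nulls
            }
        },
    }
}
```

`NullOption` maps to the console's null choices: `NON_NULLS_ONLY` (exclude nulls), `ALL_VALUES` (include nulls), or `NULLS_ONLY`.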

## Filtering a text field by a top or bottom value (analyses only)

You can use a **Top and bottom filter** to show the top or bottom *n* values of one field ranked by the values in another field. For example, you might show the top five salespeople based on revenue. You can also use a parameter to allow dashboard users to dynamically choose how many top or bottom ranking values to show.

**To create a top and bottom text filter**

1. Create a new filter using a text field. For more information about creating filters, see [Adding filters](add-a-filter-data-prep.md).

1. In the **Filters** pane, choose the new filter to expand it.

1. For **Filter type**, choose **Top and bottom filter**.

1. Choose **Top** or **Bottom**.

1. For **Show top** integer (or **Show bottom** integer), do one of the following:
   + Enter the number of top or bottom items to show.
   + To use a parameter for the number of top or bottom items to show, select **Use parameters**. Then choose an existing integer parameter. 

     For example, let's say that you want to show the top three salespersons by default, but you want the dashboard viewer to be able to choose whether to show 1–10 top salespersons. In this case, take the following actions:
     + Create an integer parameter with a default value. 
     + To link the number of displayed items to a parameter control, create a control for the integer parameter. Then make the control a slider with a step size of 1, a minimum value of 1, and a maximum value of 10. 
     + To make the control work, link it to a filter by creating a top and bottom filter on `Salesperson` by `Weighted Revenue`, enabling **Use parameters**, and choosing your integer parameter. 

1. For **By**, choose a field to base the ranking on. If you want to show the top five salespeople per revenue, choose the revenue field. You can also set the aggregate that you want to perform on the field.

1. (Optional) Choose **Tie breaker** and then choose another field to add one or more aggregations as tie breakers. This is useful, in the case of this example, when there are more than five results returned for the top five salespeople per revenue. This situation can happen if multiple salespeople have the same revenue amount. 

   To remove a tie breaker, use the delete icon.

1. When finished, choose **Apply**.
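The top-and-bottom procedure above can also be sketched as a `TopBottomFilter` in the QuickSight analysis `Definition` API. The shape below is an assumption-labeled sketch: IDs, dataset identifier, and column names are placeholders, and a second sort configuration would play the tie-breaker role:

```python
# Sketch: "top five salespeople by revenue" as a TopBottomFilter.
# All identifiers are placeholders.
top_bottom_filter = {
    "TopBottomFilter": {
        "FilterId": "filter-top-5-salespeople",
        "Column": {"DataSetIdentifier": "sales", "ColumnName": "Salesperson"},
        "Limit": 5,                    # the "Show top" number
        # "ParameterName": "TopN",     # alternative: drive n from an integer parameter
        "AggregationSortConfigurations": [
            {   # the "By" field and aggregate that rank the values
                "Column": {"DataSetIdentifier": "sales", "ColumnName": "Revenue"},
                "SortDirection": "DESCENDING",   # DESCENDING for Top, ASCENDING for Bottom
                "AggregationFunction": {
                    "NumericalAggregationFunction": {"SimpleNumericalAggregation": "SUM"}
                },
            },
            # a second configuration entry here acts as the tie breaker
        ],
    }
}
```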

# Adding nested filters

Nested filters are advanced filters that can be added to a Quick analysis. A nested filter filters a field using a subset of data defined by another field in the same dataset. This allows authors to show additional contextual data without filtering out data points that don't meet an initial condition.

Nested filters function similarly to a correlated subquery in SQL or a market basket analysis. For example, say you want to perform a market basket analysis on your sales data. You can use nested filters to find the sales quantity by product for customers who have or have not purchased a specific product. You can also use nested filters to identify groups of customers that did not purchase a selected product or who only purchased a specific list of products.

Nested filters can only be added at the analysis level. You can't add a nested filter to a dataset.

Use the procedure below to add a nested filter to a Quick analysis.

1. Open the [Quick console](https://quicksight.aws.amazon.com/).

1. Choose **Analyses**, and then choose the analysis that you want to add a nested filter to.

1. Create a new filter on the text field that you want to filter on. For more information about creating a filter, see [Adding filters in analyses](add-a-filter-data-prep.md#add-a-filter-data-prep-analyses).

1. After you create the new filter, locate the new filter in the **Filters** pane. Choose the ellipsis (three dots) next to the new filter, and then choose **Edit filter**. Alternatively, choose the filter entity in the **Filters** pane to open the **Edit filter** pane.

1. In the **Edit filter** pane, open the **Filter type** dropdown menu, navigate to the **Advanced filter** section, and then choose **Nested filter**.

1. For **Qualifying condition**, choose **Include** or **Exclude**. The *qualifying condition* lets you run a "not in the set" query on the data in your analysis. In the sales example above, the qualifying condition determines whether the filter returns a list of customers who bought the specific product or a list of customers who did not.

1. For **Nested field**, choose the text field that you want to filter data with. The nested field cannot be the same as the primary field selected in step 3. Category fields are the only supported field type for the inner filter.

1. For **Nested filter type**, choose the filter type that you want. The filter type that you choose determines the final configuration steps for the nested filter. Available filter types and information about their configuration can be found in the list below.
   + [Filter list](https://docs.amazonaws.cn/quicksuite/latest/userguide/text-filter-list)
   + [Custom filter list](https://docs.amazonaws.cn/quicksuite/latest/userguide/add-text-custom-filter-list-data-prep)
   + [Custom filter](https://docs.amazonaws.cn/quicksuite/latest/userguide/add-text-filter-custom-list-data-prep)

# Adding numeric filters

Fields with decimal or int data types are considered numeric fields. You create filters on numeric fields by specifying a comparison type, for example **Greater than** or **Between**, and a comparison value or values as appropriate to the comparison type. Comparison values must be positive integers and can't contain commas.

You can use the following comparison types in numeric filters:
+ Equals
+ Does not equal
+ Greater than
+ Greater than or equal to
+ Less than
+ Less than or equal to
+ Between

**Note**  
To use a top and bottom filter for numeric data (analyses only), first change the field from a measure to a dimension. Doing this converts the data to text. Then you can use a text filter. For more information, see [Adding text filters](add-a-text-filter-data-prep.md).

In analyses, for datasets based on database queries, you can also optionally apply an aggregate function to the comparison value or values, for example **Sum** or **Average**. 

You can use the following aggregate functions in numeric filters:
+ Average
+ Count
+ Count distinct
+ Max
+ Median
+ Min
+ Percentile
+ Standard deviation
+ Standard deviation - population
+ Sum
+ Variance
+ Variance - population

## Creating numeric filters


Use the following procedure to create a numeric field filter.

**To create a numeric field filter**

1. Create a new filter using a numeric field. For more information about creating filters, see [Adding filters](add-a-filter-data-prep.md).

1. In the **Filters** pane, choose the new filter to expand it.

1. (Optional) For **Aggregation**, choose an aggregation. No aggregation is applied by default. This option is available only when creating numeric filters in an analysis.

1. For **Filter condition**, choose a comparison type.

1. Do one of the following:
   + If you chose a comparison type other than **Between**, enter a comparison value.

     If you chose a comparison type of **Between**, enter the beginning of the value range in **Minimum value** and the end of the value range in **Maximum value**.
   + (Analyses only) To use an existing parameter, enable **Use parameters**, then choose your parameter from the list.

     For parameters to appear in this list, create your parameters first. Usually, you create a parameter, add a control for it, and then add a filter for it. For more information, see [Parameters in Amazon Quick](parameters-in-quicksight.md). The values display alphabetically in the control, unless there are more than 1,000 distinct values. Then the control displays a search box instead. Each time you search for the value that you want to use, it initiates a new query. If the results contain more than 1,000 values, you can scroll through the values with pagination. 

1. (Analyses only) For **Null options**, choose **Exclude nulls**, **Include nulls**, or **Nulls only**.

1. When finished, choose **Apply**.
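The numeric-filter steps above map to a `NumericRangeFilter` (for **Between**) or a `NumericEqualityFilter` in the QuickSight analysis `Definition` API. The following is a minimal sketch, assuming placeholder IDs, dataset identifier, and column names; the optional `AggregationFunction` corresponds to the console's **Aggregation** choice:

```python
# Sketch: a "Between 1000 and 5000" numeric filter as a NumericRangeFilter.
# All identifiers and values are placeholders.
numeric_filter = {
    "NumericRangeFilter": {
        "FilterId": "filter-revenue-range",
        "Column": {"DataSetIdentifier": "sales", "ColumnName": "Revenue"},
        "RangeMinimum": {"StaticValue": 1000},   # "Minimum value"
        "RangeMaximum": {"StaticValue": 5000},   # "Maximum value"
        "IncludeMinimum": True,
        "IncludeMaximum": True,
        "NullOption": "NON_NULLS_ONLY",          # Exclude nulls
        # Optional, analyses only: apply an aggregate to the comparison
        # "AggregationFunction": {
        #     "NumericalAggregationFunction": {"SimpleNumericalAggregation": "SUM"}
        # },
    }
}
```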

# Adding date filters

You create filters on date fields by selecting the filter conditions and date values that you want to use. There are three filter types for dates:
+ **Range** – A series of dates based on a time range and comparison type. You can filter records based on whether the date field value is before or after a specified date, or within a date range. You enter date values in the format MM/DD/YYYY. You can use the following comparison types:
  + **Between** – Between a start date and an end date
  + **After** – After a specified date
  + **Before** – Before a specified date
  + **Equals** – On a specified date

  For each comparison type, you can alternatively choose a rolling date relative to a period or dataset value.
+ **Relative** (analyses only) – A series of date and time elements based on the current date. You can filter records based on the current date and your selected unit of measure (UOM). Date filter units include years, quarters, months, weeks, days, hours, and minutes. You can exclude the current period, and you can use Next *N* filters (similar to Last *N*) with an anchor date. You can use the following comparison types:
  + **Previous** – The previous UOM, for example, the previous year.
  + **This** – This UOM, which includes all dates and times that fall within the selected UOM, even if they occur in the future.
  + **To date** *or* **up to now** – UOM to date, or UOM up to now. The displayed phrase adapts to the UOM that you choose. However, in all cases this option filters out data that is not between the beginning of the current UOM and the current moment.
  + **Last *n*** – The last specified number of the given UOM, which includes all of this UOM and all of the last *n*−1 UOMs. For example, let's say today is May 10, 2017. You choose to use *years* as your UOM, and set Last *n* years to 3. The filtered data includes data for all of 2017, plus all of 2016 and 2015. If you have any data for future dates of the current year (2017 in this example), those records are included in your dataset.
+ **Top and bottom** (analyses only) – A number of date entries ranked by another field. You can show the top or bottom *n* for the type of date or time UOM you choose, based on values in another field. For example, you can choose to show the top 5 sales days based on revenue.

Comparisons are applied inclusive of the specified date. For example, if you apply the filter `Before 1/1/16`, the returned records include all rows with date values through 1/1/16 23:59:59. If you don't want to include the specified date, clear the **Include this date** option. To omit a time range, use the **Exclude the last N periods** option to specify the number and type of time periods (minutes, days, and so on) to filter out.

You can also choose to include or exclude nulls, or exclusively show rows that contain nulls in this field. If you pass in a null date parameter (one without a default value), it doesn't filter the data until you provide a value.

**Note**  
If a column or attribute has no time zone information, then the client query engine sets the default interpretation of that date-time data. For example, suppose that a column contains a timestamp, rather than a timestamptz, and you are in a different time zone than the data's origin. In this case, the engine can render the timestamp differently than you expect. Amazon Quick and [SPICE](spice.md) both use Coordinated Universal Time (UTC). 

Use the following sections to learn how to create date filters in datasets and analyses.

## Creating date filters in datasets


Use the following procedure to create a range filter for a date field in a dataset.

**To create a range filter for a date field in a dataset**

1. Create a new filter using a date field. For more information about creating filters, see [Adding filters](add-a-filter-data-prep.md).

1. In the **Filters** pane, choose the new filter to expand it.

1. For **Condition**, choose a comparison type: **Between**, **After**, or **Before**.

   To use **Between** as a comparison, choose **Start date** and **End date** and choose dates from the date picker controls that appear.

   You can choose if you want to include either or both the start and end dates in the range by selecting **Include start date** or **Include end date**.

   To use **Before** or **After** comparisons, enter a date, or choose the date field to bring up the date picker control and choose a date instead. You can choose to include the date that you chose, to exclude the last *N* time periods, and to specify how to handle nulls. 

1. For **Time granularity**, choose **Day**, **Hour**, **Minute**, or **Second**.

1. When finished, choose **Apply**.

## Creating date filters in analyses


You can create date filters in analyses as described following.

### Creating range date filters in analyses


Use the following procedure to create a range filter for a date field in an analysis.

**To create a range filter for a date field in an analysis**

1. Create a new filter using a date field. For more information about creating filters, see [Adding filters](add-a-filter-data-prep.md).

1. In the **Filters** pane, choose the new filter to expand it.

1. For **Filter type**, choose **Date & time range**.

1. For **Condition**, choose a comparison type: **Between**, **After**, **Before**, or **Equals**.

   To use **Between** as a comparison, choose **Start date** and **End date** and choose dates from the date picker controls that appear.

   You can choose to include either or both the start and end dates in the range by selecting **Include start date** or **Include end date**.

   To use a **Before**, **After**, or **Equals** comparison, enter a date, or choose the date field to bring up the date picker control and choose a date instead. You can choose to include the date that you chose, to exclude the last *N* time periods, and to specify how to handle nulls. 

   To **Set a rolling date** for your comparison, choose **Set a rolling date**.

   In the **Set a rolling date** pane that opens, choose **Relative date**, and then choose whether to set the date to **Today** or **Yesterday**, or specify the **Filter condition** (start of or end of), **Range** (this, previous, or next), and **Period** (year, quarter, month, week, or day).

1. For **Time granularity**, choose **Day**, **Hour**, **Minute**, or **Second**.

1. (Optional) If you are filtering by using an existing parameter, instead of specific dates, choose **Use parameters**, then choose your parameter or parameters from the list. To use **Before**, **After**, or **Equals** comparisons, choose one date parameter. You can include this date in the range.

   To use **Between**, enter both the start date and end date parameters separately. You can include the start date, the end date, or both in the range. 

   To use parameters in a filter, create them first. Usually, you create a parameter, add a control for it, and then add a filter for it. For more information, see [Parameters in Amazon Quick](parameters-in-quicksight.md).

1. For **Null options**, choose **Exclude nulls**, **Include nulls**, or **Nulls only**.

1. When finished, choose **Apply**.

### Creating relative date filters in analyses


Use the following procedure to create a relative filter for a date field in an analysis.

**To create a relative filter for a date field in an analysis**

1. Create a new filter using a date field. For more information about creating filters, see [Adding filters](add-a-filter-data-prep.md).

1. In the **Filters** pane, choose the new filter to expand it.

1. For **Filter type**, choose **Relative dates**.

1. For **Time granularity**, choose a granularity of time that you want to filter by (days, hours, minutes).

1. For **Period**, choose a unit of time (years, quarters, months, weeks, days).

1. For **Range**, choose how you want the filter to relate to the time frame. For example, if you choose to report on months, your options are previous month, this month, month to date, last N months, and next N months.

   If you choose Last N or Next N years, quarters, months, weeks, or days, enter a number for **Number of**. For example, last 3 years, next 5 quarters, last 5 days.

1. For **Null options**, choose **Exclude nulls**, **Include nulls**, or **Nulls only**.

1. For **Set dates relative to**, choose one of the following options:
   + **Current date time** – If you choose this option, you can set it to **Exclude last**, and then specify the number and type of time periods.
   + **Date and time from a parameter** – If you choose this option, you can select an existing datetime parameter.

1. (Optional) If you are filtering by using an existing parameter, instead of specific dates, enable **Use parameters**, then choose your parameter or parameters from the list. 

   To use parameters in a filter, create them first. Usually, you create a parameter, add a control for it, and then add a filter for it. For more information, see [Parameters in Amazon Quick](parameters-in-quicksight.md).

1. When finished, choose **Apply**.
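The relative-date procedure above can be sketched as a `RelativeDatesFilter` in the QuickSight analysis `Definition` API. Treat the shape below as an assumption-labeled sketch: the IDs, dataset identifier, and column name are placeholders, and `AnchorDateConfiguration` corresponds to the **Set dates relative to** step (current date time, or a datetime parameter):

```python
# Sketch: a "Last 3 years" relative date filter as a RelativeDatesFilter.
# All identifiers are placeholders.
relative_filter = {
    "RelativeDatesFilter": {
        "FilterId": "filter-last-3-years",
        "Column": {"DataSetIdentifier": "sales", "ColumnName": "OrderDate"},
        "AnchorDateConfiguration": {"AnchorOption": "NOW"},  # current date time
        "TimeGranularity": "YEAR",        # the Period (years, quarters, ...)
        "RelativeDateType": "LAST",       # e.g. PREVIOUS, THIS, LAST, NEXT
        "RelativeDateValue": 3,           # the n in "Last n years"
        "NullOption": "NON_NULLS_ONLY",   # Exclude nulls
    }
}
```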

### Creating top and bottom date filters in analyses


Use the following procedure to create a top and bottom filter for a date field in an analysis.

**To create a top and bottom filter for a date field in an analysis**

1. Create a new filter using a date field. For more information about creating filters, see [Adding filters](add-a-filter-data-prep.md).

1. In the **Filters** pane, choose the new filter to expand it.

1. For **Filter type**, choose **Top and bottom**.

1. Select **Top** or **Bottom**.

1. For **Show**, enter the number of top or bottom items you want to show, and choose a unit of time (years, quarters, months, weeks, days, hours, minutes). 

1. For **By**, choose a field to base the ranking on.

1. (Optional) Add another field as a tie breaker, if the field for **By** has duplicates. Choose **Tie breaker**, and then choose another field. To remove a tie breaker, use the delete icon.

1. (Optional) If you are filtering by using an existing parameter, instead of specific dates, select **Use parameters**, then choose your parameter or parameters from the list.

   To use a parameter for **Top and bottom**, choose an integer parameter for the number of top or bottom items to show. 

   To use parameters in a filter, create them first. Usually, you create a parameter, add a control for it, and then add a filter for it. For more information, see [Parameters in Amazon Quick](parameters-in-quicksight.md).

1. When finished, choose **Apply**.

# Adding filter conditions (group filters) with AND and OR operators

In analyses, when you add multiple filters to a visual, Quick uses the AND operator to combine them. You can also add filter conditions to a single filter with the OR operator. This is called a compound filter, or filter group.

To add multiple filters using the OR operator, create a filter group. Filter grouping is available for all types of filters in analyses. 

When you filter on multiple measures (green fields), you can apply the filter conditions to an aggregate of that field. Filters in a group can contain either aggregated or nonaggregated fields, but not both. 

**To create a filter group**

1. Create a new filter in an analysis. For more information about creating filters, see [Adding filters](add-a-filter-data-prep.md).

1. In the **Filters** pane, choose the new filter to expand it.

1. In the expanded filter, choose **Add filter condition** at the bottom, and then choose a field to filter on. 

1. Choose the conditions to filter on. 

   The data type of the field that you selected determines the options available here. For example, if you chose a numeric field, you can specify the aggregation, filter condition, and values. If you chose a text field, you can choose the filter type, filter condition, and values. And if you chose a date field, you can specify the filter type, condition, and time granularity. For more information about these options, see [Filter types in Amazon Quick](filtering-types.md).

1. (Optional) To add more filter conditions to the filter group, choose **Add filter condition** again at the bottom.

1.  (Optional) To remove a filter from the filter group, choose the trash-can icon near the field name. 

1. When finished, choose **Apply**.

   The filters appear as a group in the **Filters** pane.

# Creating cascading filters

The idea behind cascading any action, such as a filter, is that choices at the higher levels of a hierarchy affect the lower levels. The term *cascading* comes from the way that a waterfall flows from one tier to the next. 

To set up cascading filters, you need a trigger point where the filter is activated, and target points where the filter is applied. In Quick, the trigger and target points are included in visuals.

To create a cascading filter, you set up an action, not a filter. This is because you need to define how the cascading filter is activated, which fields are involved, and which visuals are filtered when someone activates it. For more information, including step-by-step instructions, see [Using custom actions for filtering and navigating](quicksight-actions.md).

There are two other ways to activate a filter across multiple visuals:
+ **For a filter that is activated from a widget on a dashboard** – The widget is called a *sheet control*, which is a custom menu that you can add to the top of your analysis or dashboard. The most common sheet control is a drop-down list, which displays a list of options to choose from when you open it. To add one of these to your analysis, create a parameter, add a control to the parameter, and then add a filter that uses the parameter. For more information, see [Setting up parameters in Amazon Quick](parameters-set-up.md), [Using a control with a parameter in Amazon Quick](parameters-controls.md), and [Adding filter controls to analysis sheets](filter-controls.md).
+ **For a filter that always applies to multiple visuals** – This is a regular filter, except that you set its scope to apply to multiple (or all) visuals. This type of filter doesn't really cascade, because there is no trigger point; it always filters all the visuals that it's configured to filter. To add this type of filter to your analysis, create or edit a filter and then choose its scope: **Single visual**, **Single sheet**, or **Cross sheets**. Note the option to **Apply cross-datasets**. If this box is selected, the filter applies to all applicable visuals from different datasets on all sheets in the filter scope. For more information, see [Filters](cross-sheet-filters.md#filters). 
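In the QuickSight analysis `Definition` API, the scope and cross-dataset behavior described above live on the `FilterGroup` that wraps each filter. The sketch below is an assumption-labeled example with placeholder IDs, dataset identifier, and column names:

```python
# Sketch: a FilterGroup whose scope is all sheets ("Cross sheets") and
# whose filter applies across datasets. All identifiers are placeholders.
filter_group = {
    "FilterGroupId": "fg-region",
    "Filters": [
        {
            "CategoryFilter": {
                "FilterId": "filter-region",
                "Column": {"DataSetIdentifier": "sales", "ColumnName": "Region"},
                "Configuration": {
                    "FilterListConfiguration": {
                        "MatchOperator": "CONTAINS",   # include the listed values
                        "CategoryValues": ["East"],
                    }
                },
            }
        }
    ],
    # "Cross sheets" scope; use SelectedSheets instead to limit the scope
    "ScopeConfiguration": {"AllSheets": {}},
    "CrossDataset": "ALL_DATASETS",   # the "Apply cross-datasets" checkbox
    "Status": "ENABLED",
}
```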

# Adding filter controls to analysis sheets

When you're designing an analysis, you can add a filter to the analysis sheet near the visuals that you want to filter. It appears in the sheet as a control that dashboard viewers can use when you publish the analysis as a dashboard. The control uses the analysis theme settings so it looks like it's part of the sheet.

Filter controls share some settings with their filters. They apply to one, some, or all of the objects on the same sheet.

Use the following sections to add and customize filter controls in an analysis. To learn how to add cross-sheet controls, see [Controls](cross-sheet-filters.md#cross-sheet-controls).

**Topics**
+ [Adding filter controls](#filter-controls-add)
+ [Pinning filter controls to the top of a sheet](#filter-controls-pin)
+ [Customizing filter controls](#filter-controls-customize)
+ [Cascading filter controls](#cascading-controls)

## Adding filter controls


Use the following procedure to add a filter control.

**To add a filter control**

1. Open the [Quick console](https://quicksight.aws.amazon.com/).

1. From the Quick homepage, choose **Analyses**, and then choose the analysis that you want to work with.

1. In the analysis, choose **Filter**.

1. If you don't already have some filters available, create one. For more information about creating filters, see [Adding filters](add-a-filter-data-prep.md).

1. In the **Filters** pane, choose the three dots to the right of the filter that you want to add a control for, and choose **Add to sheet**.

   The filter control is added to the sheet, usually at the bottom. You can resize it or drag it to different positions on the sheet. You can also customize how it appears and how dashboard viewers can interact with it. For more information about customizing filter controls, see the following sections.

## Pinning filter controls to the top of a sheet


Use the following procedure to pin filter controls to the top of a sheet.

**To pin a control to the top of a sheet**

1. On the filter control that you want to move, choose the three dots next to the pencil icon and choose **Pin to top**.

   The filter is pinned to the top of the sheet and is collapsed. You can click it to expand it.

1. (Optional) To unpin the control, expand it and hover over it at the top of the sheet until three dots appear. Choose the three dots, and then choose **Move to sheet**.

## Customizing filter controls


Depending on the data type of the field and the type of filter, filter controls have different settings available. You can customize how they appear in the sheet and how dashboard viewers can interact with them. 

**To customize a filter control**

1. Choose the filter control in the sheet.

1. On the filter control, choose the pencil icon.

   If the filter control is pinned to the top of the sheet, expand it and hover your cursor over it until the three dots appear. Choose the three dots, and then choose **Edit**.

1. In the **Format control** pane that opens, do the following:

   1. For **Display name**, enter a name for the filter control.

   1. (Optional) To hide the display name from the filter control, clear the check box for **Show title**.

   1. For **Title font size**, choose the title font size that you want to use. The options range from extra small to extra large. The default setting is medium.

The remaining steps depend on the type of field the control is referencing. For options by filter type, see the following sections.

### Date filters


If your filter control is from a date filter, use the following procedure to customize the remaining options.

**To customize further options for a date filter**

1. In the **Format control** pane, for **Style**, choose one of the following options:
   + **Date picker – range** – Displays a set of two fields to define a time range. You can enter a date or time, or you can choose a date from the calendar control. You can also customize how you want the dates to appear in the control by entering a date token for **Date format**. For more information, see [Customizing date formats in Quick](format-visual-date-controls.md).
   + **Date picker – relative** – Displays settings like the time period, its relation to the current date and time, and the option to exclude time periods. You can also customize how you want the dates to appear in the control by entering a date token for **Date format**. For more information, see [Customizing date formats in Quick](format-visual-date-controls.md).
   + **Text field** – Displays a box where you can enter the top or bottom *N* date.

     Helper text is included in the text field control by default, but you can choose to remove it by clearing the **Show helper text in control** option.

   By default, Quick visuals are reloaded whenever a change is made to a control. For Calendar and Relative date picker controls, authors can add an **Apply** button to a control that delays visual reload until the user chooses **Apply**. This allows users to make multiple changes at a time without additional queries. This setting can be configured with the **Show an apply button** checkbox in the **Control options** section of the **Format control** pane.

1. When finished, choose **Apply**.

### Text filters


If your filter control is from a text filter, for example dimensions, categories, or labels, use the following procedure to customize the remaining options.

**To customize further options for a text filter**

1. In the **Format control** pane, for **Style**, choose one of the following options:
   + **Dropdown** – Displays a dropdown list with buttons that you can use to select a single value.

     When you select this option, you can choose the following options for **Values**:
     + **Filter** – Displays all the values that are available in the filter.
     + **Specific values** – Enables you to enter the values to display, one entry per line.

     You can also choose to **Hide Select all option from the control values**. This removes the option to select or clear the selection of all values in the filter control.
   + **Dropdown - multiselect** – Displays a dropdown list with boxes that you can use to select multiple values. 

     When you select this option, you can choose the following options for **Values**:
     + **Filter** – Displays all the values that are available in the filter.
     + **Specific values** – Enables you to enter the values to display, one entry per line.

     By default, Quick visuals are reloaded whenever a change is made to a control. For Multiselect dropdown controls, authors can add an **Apply** button to a control that delays visual reload until the user chooses **Apply**. This allows users to make multiple changes at a time without additional queries. This setting can be configured with the **Show an apply button** checkbox in the **Control options** section of the **Format control** pane.
   + **List** – Displays a list with buttons that you can use to select a single value.

     When you select this option, you can choose the following options for **Values**:
     + **Filter** – Displays all the values that are available in the filter.
     + **Specific values** – Enables you to enter the values to display, one entry per line.

     You can also choose the following:
     + **Hide search bar when control is on sheet** – Hides the search bar in the filter control, so users can't search for specific values.
     + **Hide Select all option from the control values** – Removes the option to select or clear the selection of all values in the filter control.
   + **List - multiselect** – Displays a list with boxes that you can use to select multiple values. 

     When you select this option, you can choose the following options for **Values**:
     + **Filter** – Displays all the values that are available in the filter.
     + **Specific values** – Enables you to enter the values to display, one entry per line.

     You can also choose the following:
     + **Hide search bar when control is on sheet** – Hides the search bar in the filter control, so users can't search for specific values.
     + **Hide Select all option from the control values** – Removes the option to select or clear the selection of all values in the filter control.
   + **Text field** – Displays a text box where you can enter a single entry. Text fields support up to 79,950 characters.

     When you select this option, you can choose the following:
     + **Show helper text in control** – Shows helper text in the text field. Clear this option to remove the helper text.
   + **Text field - multiline** – Displays a text box where you can enter multiple entries. Multiline text fields support up to 79,950 characters across all entries.

     When you select this option, you can choose the following:
     + For **Separate values by**, choose how you want to separate the values that you enter into the filter control. You can separate values by a line break, comma, pipe (|), or semicolon.
     + **Show helper text in control** – Shows helper text in the text field. Clear this option to remove the helper text.

1. When finished, choose **Apply**.
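In the analysis-definition API, the multiselect dropdown described above is a `Dropdown` control entry whose `Type` is `MULTI_SELECT`, and the **Specific values** option corresponds to a `SelectableValues` list. A minimal sketch follows; the IDs, title, and values are illustrative assumptions.

```python
def dropdown_control(control_id, source_filter_id, title,
                     select_type="MULTI_SELECT", specific_values=None):
    """Sketch of a dropdown filter control entry.

    select_type is "MULTI_SELECT" or "SINGLE_SELECT"; specific_values,
    if given, mirrors the console's "Specific values" option
    (one entry per line).
    """
    body = {
        "FilterControlId": control_id,        # hypothetical ID
        "SourceFilterId": source_filter_id,   # hypothetical ID
        "Title": title,
        "Type": select_type,
    }
    if specific_values is not None:
        body["SelectableValues"] = {"Values": list(specific_values)}
    return {"Dropdown": body}

region_control = dropdown_control("ctl-2", "filter-2", "Region",
                                  specific_values=["EMEA", "APAC"])
```

Omitting `specific_values` corresponds to the **Filter** choice, where the control lists all values available in the filter.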

### Numeric filters


If your filter control is based on a numeric filter, use the following procedure to customize the remaining options.

**To customize further options for a numeric filter**

1. In the **Format control** pane, for **Style**, choose one of the following options:
   + **Dropdown** – Displays a list where you can select a single value.

     When you select this option, you can choose the following options for **Values**:
     + **Filter** – Displays all the values that are available in the filter.
     + **Specific values** – Enables you to enter the values to display, one entry per line.

     You can also choose to **Hide Select all option from the control values**. This removes the option to select or clear the selection of all values in the filter control.
   + **List** – Displays a list with buttons that you can use to select a single value.

     When you select this option, you can choose the following options for **Values**:
     + **Filter** – Displays all the values that are available in the filter.
     + **Specific values** – Enables you to enter the values to display, one entry per line.

     You can also choose the following:
     + **Hide search bar when control is on sheet** – Hides the search bar in the filter control, so users can't search for specific values.
     + **Hide Select all option from the control values** – Removes the option to select or clear the selection of all values in the filter control.
   + **Slider** – Displays a horizontal bar with a toggle that you can slide to change the value. If you have a ranged filter for values between a minimum and a maximum, the slider provides a toggle for each number. For sliders, you can specify the following options:
     + **Minimum value** – Displays the smaller value at the left of the slider.
     + **Maximum value** – Displays the larger value at the right of the slider.
     + **Step size** – Enables you to set the number of increments that the bar is divided into.
   + **Text box** – Displays a box where you can enter the value. When you select this option, you can choose the following:
     + **Show helper text in control** – Shows helper text in the text box. Clear this option to remove the helper text.

1. When finished, choose **Apply**.
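The slider options above (**Minimum value**, **Maximum value**, **Step size**) map directly onto fields of a `Slider` control entry in the analysis-definition API. The sketch below builds one for a ranged filter; the IDs, title, and numbers are illustrative assumptions.

```python
def slider_control(control_id, source_filter_id, title,
                   minimum, maximum, step, slider_type="RANGE"):
    """Sketch of a slider filter control entry.

    slider_type "RANGE" gives a toggle at each end of a ranged filter;
    "SINGLE_POINT" gives one toggle.
    """
    return {
        "Slider": {
            "FilterControlId": control_id,        # hypothetical ID
            "SourceFilterId": source_filter_id,   # hypothetical ID
            "Title": title,
            "Type": slider_type,
            "MinimumValue": float(minimum),   # left end of the bar
            "MaximumValue": float(maximum),   # right end of the bar
            "StepSize": float(step),          # increment granularity
        }
    }

sales_slider = slider_control("ctl-3", "filter-3", "Sales", 0, 10000, 100)
```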

## Cascading filter controls


You can limit the values displayed in a control so that it shows only values that are valid for the selections made in other controls. This is called a cascading control.

**When creating cascading controls, the following limitations apply:**

1. Cascading controls must be tied to dataset columns from the same dataset.

1. The child control must be a dropdown or list control.

1. For parameter controls, the child control must be linked to a dataset column.

1. For filter controls, the child control must be linked to a filter (instead of showing only specific values).

1. The parent control must be one of the following:

   1. A string, integer, or numeric parameter control.

   1. A string filter control (excluding top-bottom filters).

   1. A non-aggregated numeric filter control.

   1. A date filter control (excluding top-bottom filters).

**To create a cascading control**

1. Choose **Show relevant values only.** Note that this option might not be available for all filter control types.

1. In the **Show relevant values only** pane that opens, choose one or more controls from the available list.

1. Choose a field to match the value to.

1. Choose **Update**.
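In the analysis-definition API, the **Show relevant values only** relationship is expressed as a cascading control configuration on the child control, naming the parent control and the dataset column to match. The sketch below builds that fragment; the control ID, dataset identifier, and column name are illustrative assumptions.

```python
def cascading_config(parent_control_id, dataset_identifier, column_name):
    """Sketch of the cascading configuration attached to a child control.

    The child control then lists only values that are valid under the
    parent control's current selection.
    """
    return {
        "CascadingControlConfiguration": {
            "SourceControls": [
                {
                    "SourceSheetControlId": parent_control_id,  # hypothetical
                    "ColumnToMatch": {
                        "DataSetIdentifier": dataset_identifier,
                        "ColumnName": column_name,
                    },
                }
            ]
        }
    }

cascade = cascading_config("region-ctl", "sales", "Region")
```

This fragment would be merged into a dropdown or list control entry, matching the limitation above that the child control must be a dropdown or list.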

# Editing filters


You can edit filters at any time in a dataset or analysis.

You can't change the field that a filter applies to. To apply a filter to a different field, create a new filter instead.

Use the following procedures to learn how to edit filters.

## Editing filters in datasets


Use the following procedure to edit filters in datasets.

**To edit a filter in a dataset**

1. Open the [Quick console](https://quicksight.aws.amazon.com/).

1. From the Quick homepage, choose **Data** at left.

1. Under the **Datasets** tab, choose the dataset that you want, and then choose **Edit dataset**.

1. On the data preparation page that opens, choose **Filters** at lower left.

1. Choose the filter that you want to edit.

1. When finished editing, choose **Apply**.
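Under the hood, a dataset filter is stored as a `FilterOperation` transform on a logical table, and editing the filter amounts to rewriting its condition expression. The Python sketch below shows that shape; the table alias and expressions are illustrative assumptions.

```python
def set_dataset_filter(logical_table, condition_expression):
    """Replace any existing FilterOperation on a logical table entry.

    Mirrors editing a filter on the data preparation page; the resulting
    LogicalTableMap entry is what the quicksight update_data_set API
    accepts.
    """
    transforms = [
        t for t in logical_table.get("DataTransforms", [])
        if "FilterOperation" not in t
    ]
    transforms.append(
        {"FilterOperation": {"ConditionExpression": condition_expression}}
    )
    logical_table["DataTransforms"] = transforms
    return logical_table

table = {"Alias": "sales", "DataTransforms": [
    {"FilterOperation": {"ConditionExpression": "Region = 'EMEA'"}},
]}
table = set_dataset_filter(table, "Region = 'APAC'")
```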

## Editing filters in analyses


Use the following procedure to edit filters in analyses.

**To edit a filter in an analysis**

1. Open the [Quick console](https://quicksight.aws.amazon.com/).

1. From the Quick homepage, choose **Analyses** at left.

1. On the **Analyses** page, choose the analysis that you want to work with.

1. In the analysis, choose the **Filter** icon to open the **Filters** pane.

1. Choose the filter that you want to edit.

1. When finished editing, choose **Apply**.

# Enabling or disabling filters


You can use the filter menu to enable or disable a filter in a dataset or an analysis. When you create a filter, it's enabled by default. Disabling a filter removes the filter from the field, but it doesn't delete the filter from the dataset or analysis. Disabled filters are grayed out in the filters pane. If you want to re-apply the filter to the field, you can simply enable it.

Use the following procedures to learn how to enable or disable filters.

## Disabling filters in datasets


Use the following procedure to disable filters in datasets.

**To disable a filter in a dataset**

1. From the Quick homepage, choose **Data** at left.

1. Under the **Datasets** tab, choose the dataset that you want, and then choose **Edit dataset**.

1. On the data preparation page that opens, choose **Filters** at lower left.

1. In the **Filters** pane at left, choose the three dots to the right of the filter that you want to disable, and then choose **Disable**. To enable a filter that was disabled, choose **Enable**.

## Disabling filters in analyses


Use the following procedure to disable filters in analyses.

**To disable a filter in an analysis**

1. Open the [Quick console](https://quicksight.aws.amazon.com/).

1. From the Quick homepage, choose **Analyses**.

1. On the **Analyses** page, choose the analysis that you want to work with.

1. In the analysis, choose the **Filter** icon to open the **Filters** pane.

1. In the **Filters** pane that opens, choose the three dots to the right of the filter that you want to disable, and then choose **Disable**. To enable a filter that was disabled, choose **Enable**.

# Deleting filters


You can delete filters at any time in a dataset or analysis. Use the following procedures to learn how.

## Deleting filters in datasets


Use the following procedure to delete filters in datasets.

**To delete a filter in a dataset**

1. Open the [Quick console](https://quicksight.aws.amazon.com/).

1. From the Quick homepage, choose **Data**.

1. Under the **Datasets** tab, choose the dataset that you want, and then choose **Edit dataset**.

1. On the data preparation page that opens, choose **Filters** at lower left.

1. Choose the filter that you want to delete, and then choose **Delete filter**.

## Deleting filters in analyses


Use the following procedure to delete filters in analyses.

**To delete a filter in an analysis**

1. Open the [Quick console](https://quicksight.aws.amazon.com/).

1. From the Quick homepage, choose **Analyses**.

1. On the **Analyses** page, choose the analysis that you want to work with.

1. In the analysis, choose the **Filter** icon to open the **Filters** pane.

1. Choose the filter that you want to delete, and then choose **Delete filter**.
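Programmatically, deleting a filter from an analysis corresponds to removing its filter group from the analysis definition's `FilterGroups` list. A minimal sketch, assuming the definition dict shape returned by `describe_analysis_definition` and accepted by `update_analysis` (the group IDs below are hypothetical):

```python
def delete_filter_group(definition, filter_group_id):
    """Remove one filter group from an analysis definition, in place.

    Mirrors choosing "Delete filter" in the console.
    """
    groups = definition.get("FilterGroups", [])
    definition["FilterGroups"] = [
        g for g in groups if g.get("FilterGroupId") != filter_group_id
    ]
    return definition

definition = {"FilterGroups": [{"FilterGroupId": "fg-1"},
                               {"FilterGroupId": "fg-2"}]}
delete_filter_group(definition, "fg-1")
```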

# Previewing tables in a dataset


You can preview each individual data table within a dataset. When you choose a data table to preview, a read-only preview of the table appears in a new tab in the data preview section. You can have multiple table preview tabs open at once.

You can preview only the tables that you have access to in a dataset. If a table doesn't appear in the top half of the data preparation space, you can't preview it.

The **Dataset** tab reflects all of your transformations, such as new columns or filters. Table preview tabs don't show any of these transformations.

**To preview a data table**

1. On the Quick homepage, choose **Data** at left.

1. Under the **Datasets** tab, choose the dataset that you want, and then choose **Edit dataset**.

1. Choose the data table that you want to preview, choose the down arrow to open the menu, and choose **Show table preview**.