

# Adding calculations


Create calculated fields to transform your data by using one or more of the following: 
+ [Operators](arithmetic-and-comparison-operators.md)
+ [Functions](functions.md)
+ Fields that contain data
+ Other calculated fields

You can add calculated fields to a dataset during data preparation or from the analysis page. When you add a calculated field to a dataset during data preparation, it's available to all analyses that use that dataset. When you add a calculated field to a dataset in an analysis, it's available only in that analysis. For more information about adding calculated fields, see the following topics.

**Topics**
+ [Adding calculated fields](adding-a-calculated-field-analysis.md)
+ [Order of evaluation in Amazon Quick Sight](order-of-evaluation-quicksight.md)
+ [Using level-aware calculations in Quick Sight](level-aware-calculations.md)
+ [Calculated field function and operator reference for Amazon Quick Sight](calculated-field-reference.md)

# Adding calculated fields


Create calculated fields to transform your data by using one or more of the following: 
+ [Operators](arithmetic-and-comparison-operators.md)
+ [Functions](functions.md)
+ Aggregate functions (you can only add these to an analysis)
+ Fields that contain data
+ Other calculated fields

You can add calculated fields to a dataset during data preparation or from the analysis page. When you add a calculated field to a dataset during data preparation, it's available to all analyses that use that dataset. When you add a calculated field to a dataset in an analysis, it's available only in that analysis. 

Analyses support both single-row operations and aggregate operations. Single-row operations are those that supply a (potentially) different result for every row. Aggregate operations supply results that are always the same for entire sets of rows. For example, if you use a simple string function with no conditions, it changes every row. If you use an aggregate function, it applies to all the rows in a group. If you ask for the total sales amount for the US, the same number applies to the entire set. If you ask for data on a particular state, the total sales amount changes to reflect your new grouping. It still provides one result for the entire set.
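To make the distinction concrete, here is a sketch of each kind of expression (the field names `{Customer Name}` and `{Sales Amount}` are illustrative, not from a specific dataset):

```
/* Single-row operation: evaluated once per row */
toUpper({Customer Name})

/* Aggregate operation: one result per group in the visual */
sum({Sales Amount})
```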

By creating the aggregated calculated field within the analysis, you can then drill down into the data. The value of that aggregated field is recalculated appropriately for each level. This type of aggregation isn't possible during dataset preparation.

For example, let's say that you want to figure out the percentage of profit for each country, region, and state. You can add a calculated field to your analysis, `(sum(salesAmount - cost)) / sum(salesAmount)`. This field is then calculated for each country, region, and state, at the time your analyst drills down into the geography.

**Topics**
+ [Adding calculated fields to an analysis](#using-the-calculated-field-editor-analysis)
+ [Adding calculated fields to a dataset](#using-the-calculated-field-editor)
+ [Handling decimal values in calculated fields](#handling-decimal-fields)

## Adding calculated fields to an analysis


When you add a dataset to an analysis, every calculated field that exists in the dataset is added to the analysis. You can add additional calculated fields at the analysis level to create calculated fields that are available only in that analysis.

**To add a calculated field to an analysis**

1. Open the [Quick Sight console](https://quicksight.aws.amazon.com/).

1. Open the analysis that you want to change.

1. In the **Data** pane, choose **Add** at top left, and then choose **+ CALCULATED FIELD**.

1. In the calculation editor that opens, do the following:

   1. Enter a name for the calculated field.

   1. Enter a formula using fields from your dataset, functions, and operators.

1. When finished, choose **Save**.

For more information about how to create formulas using the available functions in Quick Sight, see [Calculated field function and operator reference for Amazon Quick Sight](calculated-field-reference.md).

## Adding calculated fields to a dataset


Amazon Quick Sight authors can generate calculated fields during the data preparation phase of a dataset's creation. When you create a calculated field for a dataset, the field becomes a new column in the dataset. All analyses that use the dataset inherit the dataset's calculated fields.

If the calculated field operates at the row level and the dataset is stored in SPICE, Quick Sight computes and materializes the result in SPICE. If the calculated field relies on an aggregation function, Quick Sight retains the formula and performs the calculation when the analysis is generated. This type of calculated field is called an unmaterialized calculated field.
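For example, under the behavior described above, the first expression below would be computed and stored in SPICE, while the second would remain unmaterialized (the field names `{salesAmount}` and `{cost}` are illustrative):

```
/* Row-level: materialized in SPICE during ingestion */
{salesAmount} - {cost}

/* Aggregate: formula retained, evaluated when the analysis runs */
sum({salesAmount}) - sum({cost})
```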

**To add or edit a calculated field for a dataset**

1. Open the dataset that you want to work with. For more information, see [Editing datasets](edit-a-data-set.md).

1. On the data prep page, do one of the following:
   + To create a new field, choose **Add calculated field** at left.
   + To edit an existing calculated field, choose it from **Calculated fields** at left, then choose **Edit** from the context (right-click) menu.

1. In the calculation editor, enter a descriptive name in **Add title** to name the new calculated field. This name appears in the field list in the dataset, so it should look similar to the other fields. For this example, we name the field `Total Sales This Year`.

1. (Optional) Add a comment, for example to explain what the expression does, by enclosing the comment text between `/*` and `*/`.

   ```
   /* Calculates sales per year for this year*/
   ```

1. Identify the metrics, functions, and other items to use. For this example, we need to identify the following:
   + The metric to use
   + Functions: `ifelse` and `datediff`

   We want to build a statement like "If the sale happened during this year, show the total sales, and otherwise show 0."

   To add the `ifelse` function, open the **Functions** list. Choose **All** to collapse the list of all functions. Then you can see the function groups: **Aggregate**, **Conditional**, **Date**, and so on. 

   Choose **Conditional**, and then double-click on `ifelse` to add it to the workspace. 

   ```
   ifelse()
   ```

1. Place your cursor inside the parentheses in the workspace, and add three blank lines.

   ```
   ifelse(
                                               
                                               
                                               
   )
   ```

1. With your cursor on the first blank line, find the `dateDiff` function. It's listed under **Dates** in the **Functions** list. You can also find it by entering **date** for **Search functions**. The search returns all functions that have *`date`* as part of their name. It doesn't return all functions listed under **Dates**; for example, the `now` function is missing from the search results.

   Double-click on `dateDiff` to add it to the first blank line of the `ifelse` statement. 

   ```
   ifelse(
   dateDiff()                                            
                                               
                                               
   )
   ```

   Add the parameters that `dateDiff` uses. Place your cursor inside the `dateDiff` parentheses to begin to add `date1`, `date2`, and `period`:

   1. For `date1`: The first parameter is the field that has the date in it. Find it under **Fields**, and add it to the workspace by double-clicking it or entering its name. 

   1. For `date2`: Add a comma, and then choose `truncDate()` from **Functions**. Inside its parentheses, add the period and date, like this: **truncDate( "YYYY", now() )**

   1. For `period`: Add a comma after `date2` and enter **YYYY**. This is the period for the year. To see a list of all the supported periods, find `dateDiff` in the **Functions** list, and open the documentation by choosing **Learn more**. If you're already viewing the documentation, as you are now, see [dateDiff](dateDiff-function.md).

   Add a few spaces for readability, if you like. Your expression should look like the following.

   ```
   ifelse(
      dateDiff( {Date}, truncDate( "YYYY", now() ) ,"YYYY" )                                       
                                               
                                               
   )
   ```

1. Specify the return value. For our example, the first parameter in `ifelse` needs to return a value of `TRUE` or `FALSE`. Because we want the current year, and we're comparing it to this year, we specify that the `dateDiff` statement should return `0`. The `if` part of the `ifelse` evaluates as true for rows where there is no difference between the year of the sale and the current year.

   ```
      dateDiff( {Date}, truncDate( "YYYY", now() ) ,"YYYY" ) = 0 
   ```

   To create a field for `TotalSales` for last year, you can change `0` to `1`.

   Another way to do the same thing is to use `addDateTime` instead of `truncDate`. Then for each previous year, you change the first parameter for `addDateTime` to represent each year. For this, you use `-1` for last year, `-2` for the year before that, and so on. If you use `addDateTime`, you leave the `dateDiff` function `= 0` for each year.

   ```
   dateDiff( {Date}, addDateTime(-1, "YYYY", now() ) ,"YYYY" ) = 0 /* Last year */
   ```

1. Move your cursor to the first blank line, just under `dateDiff`. Add a comma. 

   For the `then` part of the `ifelse` statement, we need to choose the measure (metric) that contains the sales amount, `TotalSales`.

   To choose a field, open the **Fields** list and double-click a field to add it to the workspace. Or you can enter the name. Add curly braces `{ }` around names that contain spaces. It's likely that your metric has a different name. You can tell which field is a measure (metric) by the number sign (**#**) in front of it.

   Your expression should look like the following now.

   ```
   ifelse(
      dateDiff( {Date}, truncDate( "YYYY", now() ) ,"YYYY" ) = 0
      ,{TotalSales}                            
                                              
   )
   ```

1. Add an `else` clause. The `ifelse` function doesn't require one, but we want to add it. For reporting purposes, you usually don't want to have any null values, because sometimes rows with nulls are omitted. 

   We set the `else` part of the `ifelse` to `0`. The result is that this field is `0` for rows that contain sales from previous years.

   To do this, on the blank line add a comma and then a `0`. If you added the comment at the beginning, your finished `ifelse` expression should look like the following.

   ```
   /* Calculates sales per year for this year*/
   ifelse(
      dateDiff( {Date}, truncDate( "YYYY", now() ) ,"YYYY" ) = 0
      ,{TotalSales}                            
      ,0                                         
   )
   ```

1. Save your work by choosing **Save** at upper right. 

   If there are errors in your expression, the editor displays an error message at the bottom. Check your expression for a red squiggly line, then hover your cursor over that line to see what the error message is. Common errors include missing punctuation, missing parameters, misspellings, and invalid data types.

   To avoid making any changes, choose **Cancel**.

**To add a parameter value to a calculated field**

You can reference parameters in calculated fields. By adding a parameter to your expression, you add the current value of that parameter.

1. To add a parameter, open the **Parameters** list, and select the parameter whose value you want to include. 

1. (Optional) To manually add a parameter to the expression, enter the name of the parameter. Then enclose it in curly braces `{}`, and prefix it with a `$`, for example `${parameterName}`.
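As a sketch, a calculated field that compares a metric against a user-controlled parameter might look like the following (the parameter name `SalesTarget` and the field name `{salesAmount}` are hypothetical):

```
/* Uses the current value of the SalesTarget parameter in the comparison */
ifelse({salesAmount} > ${SalesTarget}, "Above target", "Below target")
```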

You can change the data type of any field in your dataset, including the types of calculated fields. You can only choose data types that match the data that's in the field.

**To change the data type of a calculated field**
+ For **Calculated fields** (at left), choose the field that you want to change, then choose **Change data type** from the context (right-click) menu.

Unlike the other fields in the dataset, calculated fields can't be disabled. Instead, delete them. 

**To delete a calculated field**
+ For **Calculated fields** (at left), choose the field that you want to change, then choose **Delete** from the context (right-click) menu.

## Handling decimal values in calculated fields


When your dataset uses Direct Query mode, the calculation of the decimal data type is determined by the behavior of the source engine that the dataset originates from. In some cases, Quick Sight applies special handling to determine the data type of a calculation's output.

When your dataset uses SPICE query mode and a calculated field is materialized, the data type of the result depends on the specific functions and operators used and on the data types of the inputs. The following tables show the expected behavior for some numeric calculated fields.

**Unary operators**

The following table shows which data type is output based on the operator you use and the data type of the value that you input. For example, if you input an integer to an `abs` calculation, the output value's data type is integer.


[See the AWS documentation website for more details](http://docs.amazonaws.cn/en_us/quick/latest/userguide/adding-a-calculated-field-analysis.html)

**Binary operators**

The following tables show which data type is output based on the data types of the two values that you input. For example, for an arithmetic operator, if you provide two integer data types, the result of the calculation is output as an integer.

For basic operators (+, -, *):


|  | **Integer** | **Decimal-fixed** | **Decimal-float** | 
| --- | --- | --- | --- | 
|  **Integer**  |  Integer  |  Decimal-fixed  |  Decimal-float  | 
|  **Decimal-fixed**  |  Decimal-fixed  |  Decimal-fixed  |  Decimal-float  | 
|  **Decimal-float**  |  Decimal-float  |  Decimal-float  |  Decimal-float  | 

For division operators (/):


|  | **Integer** | **Decimal-fixed** | **Decimal-float** | 
| --- | --- | --- | --- | 
|  **Integer**  |  Decimal-float  |  Decimal-float  |  Decimal-float  | 
|  **Decimal-fixed**  |  Decimal-float  |  Decimal-fixed  |  Decimal-float  | 
|  **Decimal-float**  |  Decimal-float  |  Decimal-float  |  Decimal-float  | 

For exponential and mod operators (^, %):


|  | **Integer** | **Decimal-fixed** | **Decimal-float** | 
| --- | --- | --- | --- | 
|  **Integer**  |  Decimal-float  |  Decimal-float  |  Decimal-float  | 
|  **Decimal-fixed**  |  Decimal-float  |  Decimal-float  |  Decimal-float  | 
|  **Decimal-float**  |  Decimal-float  |  Decimal-float  |  Decimal-float  | 

# Order of evaluation in Amazon Quick Sight

When you open or update an analysis, before displaying it Amazon Quick Sight evaluates everything that is configured in the analysis in a specific sequence. Amazon Quick Sight translates the configuration into a query that a database engine can run. The query returns the data in a similar way whether you connect to a database, a software as a service (SaaS) source, or the Amazon Quick Sight analytics engine ([SPICE](spice.md)). 

If you understand the order that the configuration is evaluated in, you know the sequence that dictates when a specific filter or calculation is applied to your data.

The following illustration shows the order of evaluation. The column on the left shows the order of evaluation when neither a level-aware calculation window (LAC-W) function nor a level-aware calculation aggregate (LAC-A) function is involved. The second column shows the order of evaluation for analyses that contain calculated fields that compute LAC-W expressions at the prefilter (`PRE_FILTER`) level. The third column shows the order of evaluation for analyses that contain calculated fields that compute LAC-W expressions at the preaggregate (`PRE_AGG`) level. The last column shows the order of evaluation for analyses that contain calculated fields that compute LAC-A expressions. Following the illustration, there is a more detailed explanation of the order of evaluation. For more information about level-aware calculations, see [Using level-aware calculations in Quick Sight](level-aware-calculations.md).

![Order of evaluation in Amazon Quick Sight](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/order-of-evaluation2.png)


The following list shows the sequence in which Amazon Quick Sight applies the configuration in your analysis. Anything that's set up in your dataset happens outside your analysis, for example calculations at the dataset level, filters, and security settings. These all apply to the underlying data. The following list only covers what happens inside the analysis. 

1. **LAC-W prefilter level**: Evaluates the data at the original table cardinality, before analysis filters

   1. **Simple calculations**: Calculations at the scalar level, without any aggregations or window calculations. For example, `date_metric/60`, `parseDate(date, 'yyyy/MM/dd')`, `ifelse(metric > 0, metric, 0)`, `split(string_column, '|', 0)`.

   1. **LAC-W function PRE_FILTER**: If any LAC-W PRE_FILTER expression is involved in the visual, Amazon Quick Sight first computes the window function at the original table level, before any filters. If the LAC-W PRE_FILTER expression is used in filters, it is applied at this point. For example, `maxOver(Population, [State, County], PRE_FILTER) > 1000`.

1. **LAC-W PRE_AGG level**: Evaluates the data at the original table cardinality, before aggregations

   1. **Filters added during analysis**: Filters created for unaggregated fields in the visuals are applied at this point, similar to WHERE clauses. For example, `year > 2020`.

   1. **LAC-W function PRE_AGG**: If any LAC-W PRE_AGG expression is involved in the visual, Amazon Quick Sight computes the window function before any aggregation is applied. If the LAC-W PRE_AGG expression is used in filters, it is applied at this point. For example, `maxOver(Population, [State, County], PRE_AGG) > 1000`.

   1. **Top/bottom N filters**: Filters that are configured on dimensions to display the top or bottom N items.

   1. **Top/bottom N filters**: Filters that are configured on dimensions to display top/bottom N items.

1. **LAC-A level**: Evaluates aggregations at a customized level, before visual aggregations

   1. **Custom-level aggregations**: If any LAC-A expression is involved in the visual, it is calculated at this point. Based on the table after the filters mentioned above, Amazon Quick Sight computes the aggregation, grouped by the dimensions that are specified in the calculated fields. For example, `max(Sales, [Region])`.

1. **Visual level**: Evaluates aggregations at the visual level, and post-aggregation table calculations, with the remaining configurations applied in the visuals

   1. **Visual-level aggregations**: Visual aggregations are always applied, except in tables with no dimensions. With this setting, aggregations based on the fields in the field wells are calculated, grouped by the dimensions that are put into the visuals. If any filter is built on top of aggregations, it is applied at this point, similar to HAVING clauses. For example, `min(distance) > 100`.

   1. **Table calculations**: If any post-aggregation table calculation (these take an aggregated expression as an operand) is referenced in the visual, it is calculated at this point. Amazon Quick Sight performs window calculations after visual aggregations. Similarly, filters built on such calculations are applied at this point.

   1. **Other category calculations**: This type of calculation only exists in line/bar/pie/donut charts. For more information, see [Display limits](working-with-visual-types.md#display-limits).

   1. **Totals and subtotals**: Totals and subtotals are calculated in donut charts (totals only), tables (totals only), and pivot tables, if requested.

# Using level-aware calculations in Quick Sight


|  | 
| --- |
|    Applies to: Enterprise Edition and Standard Edition  | 

With *level-aware calculations* (LAC), you can specify the level of granularity at which to compute window functions or aggregate functions. There are two types of LAC functions: level-aware calculation aggregate (LAC-A) functions, and level-aware calculation window (LAC-W) functions.

**Topics**
+ [LAC-A functions](#level-aware-calculations-aggregate)
+ [LAC-W functions](#level-aware-calculations-window)

## Level-aware calculation - aggregate (LAC-A) functions

With LAC-A functions, you can specify at what level to group the computation. By adding one argument to an existing aggregate function, such as `sum()`, `max()`, or `count()`, you can define any group-by level that you want for the aggregation. The level added can be any dimension, independent of the dimensions added to the visual. For example:

```
sum(measure,[group_field_A])
```

To use LAC-A functions, type them directly in the calculation editor, adding the intended aggregation levels as the second argument, in brackets. Following is an example of an aggregate function and a LAC-A function, for comparison.
+ Aggregate function: `sum({sales})`
+ LAC-A function: `sum({sales}, [{Country},{Product}])`

The LAC-A results are computed at the level specified in the brackets `[ ]` and can be used as an operand of an aggregate function. The group-by level of that aggregate function is the visual level, determined by the **Group by** fields added to the field wells of the visual. 

In addition to creating a static LAC group key in the brackets `[ ]`, you can make it adapt dynamically to the visual's group-by fields by putting the `$visualDimensions` parameter in the brackets. This is a system-provided parameter, in contrast to user-defined parameters. The `[${visualDimensions}]` parameter represents the fields added to the **Group by** field well in the current visual. The following examples show how to dynamically add group keys to, or remove group keys from, the visual dimensions.
+ LAC-A with a dynamically added group key: `sum({sales}, [${visualDimensions},{Country},{Products}])`

  This calculates, before the visual-level aggregation is calculated, the sum of sales, grouping by `Country`, `Products`, and any other fields in the **Group by** field well. 
+ LAC-A with a dynamically removed group key: `sum({sales}, [${visualDimensions},!{Country},!{Products}])` 

  This calculates, before the visual-level aggregation is calculated, the sum of sales, grouping by the fields in the visual's **Group by** field well, except `Country` and `Products`. 

You can specify added group keys or removed group keys in one LAC expression, but not both.

LAC-A functions are supported for the following aggregate functions:
+ [avg](avg-function.md)
+ [count](count-function.md)
+ [distinct_count](distinct_count-function.md)
+ [max](max-function.md)
+ [median](median-function.md)
+ [min](min-function.md)
+ [percentile](percentile-function.md)
+ [percentileCont](percentileCont-function.md)
+ [percentileDisc (percentile)](percentileDisc-function.md)
+ [stdev](stdev-function.md)
+ [stdevp](stdevp-function.md)
+ [sum](sum-function.md)
+ [var](var-function.md)
+ [varp](varp-function.md)

### LAC-A examples


You can do the following with LAC-A functions:
+ Run calculations that are independent of the levels in the visual. For example, if you have the following calculation, the sales numbers are aggregated only at the country level, but not across other dimensions (Region or Product) in the visual.

  ```
  sum({Sales},[{Country}])
  ```
+ Run calculations for the dimensions that are not in the visual. For example, if you have the following function, you can calculate the average total country sales by region.

  ```
  sum({Sales},[{Country}])
  ```

  Though Country is not included in the visual, the LAC-A function first aggregates the sales at the Country level, and then the visual-level calculation generates the average number for each region. If the LAC-A function isn't used to specify the level, the average sales are calculated at the lowest granular level (the base level of the dataset) for each region (shown in the sales column).
+ Use LAC-A combined with other aggregate functions and LAC-W functions. There are two ways you can nest LAC-A functions with other functions.
  + You can write a nested syntax when you create a calculation. For example, the LAC-A function can be nested with a LAC-W function to calculate the total sales by country of each product's average price:

    ```
    sum(avgOver({Sales},[{Product}],PRE_AGG),[{Country}])
    ```
  + When adding a LAC-A function into a visual, the calculation can be further nested with visual-level aggregate functions that you selected in the fields well. For more information about changing the aggregation of fields in the visual, see [Changing or adding aggregation to a field by using a field well](changing-field-aggregation.md#change-field-aggregation-field-wells).
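The average-by-region example earlier can also be written with the visual-level average as an explicit outer aggregation, which follows the valid `Aggregation(LAC-A())` nesting pattern (a sketch; you can also configure the outer average through the field well):

```
/* Average, across regions, of per-country sales totals */
avg(sum({Sales}, [{Country}]))
```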

### LAC-A limitations


The following limitations apply to LAC-A functions:
+ LAC-A functions are supported for all additive and non-additive aggregate functions, such as `sum()`, `count()`, and `percentile()`. LAC-A functions are not supported for conditional aggregate functions that end with "if", such as `sumif()` and `countif()`, nor for period aggregate functions that start with "periodToDate", such as `periodToDateSum()` and `periodToDateMax()`.
+ Row-level and column-level totals are not currently supported for LAC-A functions in tables and pivot tables. When you add row-level or column-level totals to the chart, the total number shows as blank. Other non-LAC dimensions are not impacted.
+ Nested LAC-A functions are not currently supported. A limited capability of LAC-A functions nested with regular aggregate functions and LAC-W functions is supported.

  For example, the following functions are valid:
  + `Aggregation(LAC-A())`. For example: `max(sum({sales}, [{country}]))`
  + `LAC-A(LAC-W())`. For example: `sum(sumOver({Sales},[{Product}],PRE_AGG), [{Country}])`

  The following functions are not valid:
  + `LAC-A(Aggregation())`. For example: `sum(max({sales}), [{country}])`
  + `LAC-A(LAC-A())`. For example: `sum(max({sales}, [{country}]),[category])`
  + `LAC-W(LAC-A())`. For example: `sumOver(sum({Sales},[{Product}]),[{Country}],PRE_AGG)`

## Level-aware calculation - window (LAC-W) functions

With LAC-W functions, you can specify the window or partition over which to compute the calculation. LAC-W functions are a group of window functions, such as `sumOver()`, `maxOver()`, and `denseRank()`, that you can run at the prefilter or preaggregate level. For example: `sumOver(measure,[partition_field_A],pre_agg)`.

LAC-W functions used to be called level aware aggregations (LAA).

LAC-W functions help you to answer the following types of questions:
+ How many of my customers made only 1 purchase order? Or 10? Or 50? We want the visual to use the count as a dimension rather than a metric in the visual.
+ What are the total sales per market segment for customers whose lifetime spend is greater than $100,000? The visual should only show the market segment and the total sales for each.
+ How much is the contribution of each industry to the entire company's profit (percent of total)? We want to be able to filter the visual to show some of the industries, and how they contribute to the total sales for the displayed industries. However, we also want to see each industry's percent of total sales for the entire company (including industries that are filtered out). 
+ What are the total sales in each category as compared to the industry average? The industry average should include all of the categories, even after filtering.
+ How are my customers grouped into cumulative spending ranges? We want to use the grouping as a dimension rather than a metric. 
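For example, the first question above can be sketched with a PRE_AGG window count that is then used as a dimension (the field names are illustrative):

```
/* Orders per customer, computed before visual aggregation;
   add this field as a dimension to count customers per order count */
countOver({Order ID}, [{Customer ID}], PRE_AGG)
```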

For more complex questions, you can inject a calculation or filter before Quick Sight gets to a specific point in its evaluation of your settings. To directly influence your results, you add a calculation level keyword to a table calculation. For more information on how Quick Sight evaluates queries, see [Order of evaluation in Amazon Quick Sight](order-of-evaluation-quicksight.md).

The following calculation levels are supported for LAC-W functions:
+ **`PRE_FILTER`** – Before applying filters from the analysis, Quick Sight evaluates prefilter calculations. Then it applies any filters that are configured on these prefilter calculations.
+ **`PRE_AGG`** – Before computing display-level aggregations, Quick Sight performs preaggregate calculations. Then it applies any filters that are configured on these preaggregate calculations. This work happens before applying top and bottom *N* filters.

You can use the `PRE_FILTER` or `PRE_AGG` keyword as a parameter in the following table calculation functions. When you specify a calculation level, you use an unaggregated measure in the function. For example, you can use `countOver({ORDER ID}, [{Customer ID}], PRE_AGG)`. By using `PRE_AGG`, you specify that the `countOver` executes at the preaggregate level. 
+ [avgOver](avgOver-function.md)
+ [countOver](countOver-function.md)
+ [denseRank](denseRank-function.md)
+ [distinctCountOver](distinctCountOver-function.md)
+ [minOver](minOver-function.md)
+ [maxOver](maxOver-function.md)
+ [percentileRank](percentileRank-function.md)
+ [rank](rank-function.md)
+ [stdevOver](stdevOver-function.md)
+ [stdevpOver](stdevpOver-function.md)
+ [sumOver](sumOver-function.md)
+ [varOver](varOver-function.md)
+ [varpOver](varpOver-function.md)

By default, the first parameter for each function must be an aggregated measure. If you use either `PRE_FILTER` or `PRE_AGG`, you use a nonaggregated measure for the first parameter. 
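To illustrate the difference, the same window sum can be written both ways (the field and partition names are illustrative):

```
/* Default: aggregated measure as the first parameter */
sumOver(sum({Sales}), [{Segment}])

/* With PRE_AGG: nonaggregated measure as the first parameter */
sumOver({Sales}, [{Segment}], PRE_AGG)
```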

For LAC-W functions, the visual aggregation defaults to `MIN` to eliminate duplicates. To change the aggregation, open the field's context (right-click) menu, and then choose a different aggregation.

For examples of when and how to use LAC-W functions in real life scenarios, see the following post in the Amazon Big Data Blog: [Create advanced insights using Level Aware Aggregations in Amazon QuickSight.](https://aws.amazon.com/jp/blogs/big-data/create-advanced-insights-using-level-aware-aggregations-in-amazon-quicksight/) 

# Calculated field function and operator reference for Amazon Quick Sight

You can add calculated fields to a dataset during data preparation or from the analysis page. When you add a calculated field to a dataset during data preparation, it's available to all analyses that use that dataset. When you add a calculated field to a dataset in an analysis, it's available only in that analysis. 

You can create calculated fields to transform your data by using the following functions and operators.

**Topics**
+ [Operators](arithmetic-and-comparison-operators.md)
+ [Functions by category](functions-by-category.md)
+ [Functions](functions.md)
+ [Aggregate functions](calculated-field-aggregations.md)
+ [Table calculation functions](table-calculation-functions.md)

# Operators


You can use the following operators in calculated fields. Quick Sight uses the standard order of operations: parentheses, exponents, multiplication, division, addition, subtraction (PEMDAS). Equal (=) and not equal (<>) comparisons are case-sensitive. 
+ Addition (+)
+ Subtraction (−)
+ Multiplication (*)
+ Division (/)
+ Modulo (%) – See also `mod()` in the following list.
+ Power (^) – See also `exp()` in the following list.
+ Equal (=)
+ Not equal (<>)
+ Greater than (>)
+ Greater than or equal to (>=)
+ Less than (<)
+ Less than or equal to (<=)
+ AND
+ OR
+ NOT

Amazon Quick Sight supports applying the following mathematical functions to an expression.
+ `[https://docs.amazonaws.cn/quicksight/latest/user/mod-function.html](https://docs.amazonaws.cn/quicksight/latest/user/mod-function.html)(number, divisor)` – Finds the remainder after dividing a number by a divisor.
+ `[https://docs.amazonaws.cn/quicksight/latest/user/log-function.html](https://docs.amazonaws.cn/quicksight/latest/user/log-function.html)(expression) `– Returns the base 10 logarithm of a given expression. 
+ `[https://docs.amazonaws.cn/quicksight/latest/user/ln-function.html](https://docs.amazonaws.cn/quicksight/latest/user/ln-function.html)(expression) `– Returns the natural logarithm of a given expression. 
+ `[https://docs.amazonaws.cn/quicksight/latest/user/abs-function.html](https://docs.amazonaws.cn/quicksight/latest/user/abs-function.html)(expression) `– Returns the absolute value of a given expression. 
+ `[https://docs.amazonaws.cn/quicksight/latest/user/sqrt-function.html](https://docs.amazonaws.cn/quicksight/latest/user/sqrt-function.html)(expression) `– Returns the square root of a given expression. 
+ `[https://docs.amazonaws.cn/quicksight/latest/user/exp-function.html](https://docs.amazonaws.cn/quicksight/latest/user/exp-function.html)(expression) `– Returns the base of natural log *e* raised to the power of a given expression. 
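
For example, the following expression (a sketch assuming a numeric field named `Quantity`, as in the earlier arithmetic example) returns the remainder after dividing `Quantity` by 12:

```
mod(Quantity, 12)
```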

To make lengthy calculations easier to read, you can use parentheses to clarify groupings and precedence in calculations. In the following statement, you don't need parentheses. The multiplication is processed first, and then the result is added to five, returning a value of 26. However, parentheses make the statement easier to read and thus maintain.

```
5 + (7 * 3)
```

Because parentheses come first in the order of operations, you can use them to change the order in which other operators are applied. For example, in the following statement the addition is processed first, and then the result is multiplied by three, returning a value of 36.

```
(5 + 7) * 3
```

## Example: Arithmetic operators

The following example uses multiple arithmetic operators to determine a sales total after discount.

```
(Quantity * Amount) - Discount
```

## Example: (/) Division

The following example uses division to divide 3 by 2. A value of 1.5 is returned. Amazon Quick Sight uses floating-point division.

```
3/2
```

## Example: (=) equal

Using = performs a case-sensitive comparison of values. Rows where the comparison is TRUE are included in the result set. 

In the following example, rows where the `Region` field is **South** are included in the results. If the `Region` is **south**, these rows are excluded.

```
Region = 'South'
```

In the following example, the comparison evaluates to FALSE. 

```
Region = 'south'
```

The following example shows a comparison that converts `Region` to all uppercase (**SOUTH**), and compares it to **SOUTH**. This returns rows where the region is **south**, **South**, or **SOUTH**.

```
toUpper(Region) = 'SOUTH'
```

## Example: (<>) not equal

The not equal symbol <> means *less than or greater than*. 

So, if we say **x<>1**, then we are saying *if x is less than 1 OR if x is greater than 1*. Both < and > are evaluated together. In other words, *if x is any value except 1*. Or, *x is not equal to 1*. 

**Note**  
Use <>, not !=.

The following example compares `Status Code` to a numeric value. This returns rows where the `Status Code` is not equal to **1**.

```
statusCode <> 1
```

The following example compares multiple `statusCode` values. In this case, active records have `activeFlag = 1`. This example returns rows where one of the following applies:
+ For active records, show rows where the status isn't 1 or 2
+ For inactive records, show rows where the status is 99 or -1

```
( activeFlag = 1 AND (statusCode <> 1 AND statusCode <> 2) )
OR
( activeFlag = 0 AND (statusCode = 99 OR statusCode = -1) )
```

## Example: (^) Power

The power symbol `^` means *to the power of*. You can use the power operator with any numeric field, with any valid exponent. 

The following example is a simple expression of 2 to the power of 4, or (2 * 2 * 2 * 2). This returns a value of 16.

```
2^4
```

The following example computes the square root of the revenue field.

```
revenue^0.5
```

## Example: AND, OR, and NOT
Use the AND, OR, and NOT operators to refine your selection criteria. These operators are helpful when you need to show relationships between different comparisons.

The following example uses AND, OR, and NOT to compare multiple expressions. It uses conditional operators to tag customers who are NOT in Washington or Oregon, and who made more than 10 orders, with a special promotion. If no values are returned, the value 'n/a' is used.

```
ifelse(( (NOT (State = 'WA' OR State = 'OR')) AND Orders > 10), 'Special Promotion XYZ', 'n/a')
```

## Example: Creating comparison lists like "in" or "not in"

This example uses operators to create a comparison to find values that exist, or don't exist, in a specified list of values.

The following example compares `promoCode` to a specified list of values. This example returns rows where the `promoCode` is in the list **(1, 2, 3)**.

```
promoCode    = 1
OR promoCode = 2
OR promoCode = 3
```

The following example compares `promoCode` to a specified list of values. This example returns rows where the `promoCode` is NOT in the list **(1, 2, 3)**.

```
NOT(promoCode = 1
OR promoCode  = 2
OR promoCode  = 3
)
```

Another way to express this is to provide a list where the `promoCode` is not equal to any items in the list.

```
promoCode     <> 1
AND promoCode <> 2
AND promoCode <> 3
```

## Example: Creating a "between" comparison

This example uses comparison operators to create a comparison showing values that exist between one value and another.

The following example examines `OrderDate` and returns rows where the `OrderDate` is between the first day and last day of 2016. In this case, we want the first and last day included, so we use "or equal to" on the comparison operators. 

```
OrderDate >= "1/1/2016" AND OrderDate <= "12/31/2016"
```

# Functions by category


In this section, you can find a list of the functions available in Amazon Quick Sight, sorted by category.

**Topics**
+ [Aggregate functions](#aggregate-functions)
+ [Conditional functions](#conditional-functions)
+ [Date functions](#date-functions)
+ [Numeric functions](#numeric-functions)
+ [Mathematical functions](#mathematical-functions)
+ [String functions](#string-functions)
+ [Table calculations](#table-calculations)

## Aggregate functions


The aggregate functions for calculated fields in Amazon Quick Sight include the following. These are available only during analysis and visualization. Each of these functions returns values grouped by the chosen dimension or dimensions. For each aggregation, there is also a conditional aggregation. These perform the same type of aggregation, based on a condition. 
+ [avg](https://docs.amazonaws.cn/quicksight/latest/user/avg-function.html) averages the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.
+ [avgIf](https://docs.amazonaws.cn/quicksight/latest/user/avgIf-function.html) calculates the average based on a conditional statement.
+ [count](https://docs.amazonaws.cn/quicksight/latest/user/count-function.html) calculates the number of values in a dimension or measure, grouped by the chosen dimension or dimensions. 
+ [countIf](https://docs.amazonaws.cn/quicksight/latest/user/countIf-function.html) calculates the count based on a conditional statement.
+ [distinct_count](https://docs.amazonaws.cn/quicksight/latest/user/distinct_count-function.html) calculates the number of distinct values in a dimension or measure, grouped by the chosen dimension or dimensions. 
+ [distinct_countIf](https://docs.amazonaws.cn/quicksight/latest/user/distinct_countIf-function.html) calculates the distinct count based on a conditional statement.
+ [max](https://docs.amazonaws.cn/quicksight/latest/user/max-function.html) returns the maximum value of the specified measure, grouped by the chosen dimension or dimensions.
+ [maxIf](https://docs.amazonaws.cn/quicksight/latest/user/maxIf-function.html) calculates the maximum based on a conditional statement.
+ [median](https://docs.amazonaws.cn/quicksight/latest/user/median-function.html) returns the median value of the specified measure, grouped by the chosen dimension or dimensions.
+ [medianIf](https://docs.amazonaws.cn/quicksight/latest/user/medianIf-function.html) calculates the median based on a conditional statement.
+ [min](https://docs.amazonaws.cn/quicksight/latest/user/min-function.html) returns the minimum value of the specified measure, grouped by the chosen dimension or dimensions.
+ [minIf](https://docs.amazonaws.cn/quicksight/latest/user/minIf-function.html) calculates the minimum based on a conditional statement.
+ [percentile](https://docs.amazonaws.cn/quicksight/latest/user/percentile-function.html) (alias of `percentileDisc`) computes the *n*th percentile of the specified measure, grouped by the chosen dimension or dimensions.
+ [percentileCont](https://docs.amazonaws.cn/quicksight/latest/user/percentileCont-function.html) calculates the *n*th percentile based on a continuous distribution of the numbers of the specified measure, grouped by the chosen dimension or dimensions. 
+ [percentileDisc (percentile)](https://docs.amazonaws.cn/quicksight/latest/user/percentileDisc-function.html) calculates the *n*th percentile based on the actual numbers of the specified measure, grouped by the chosen dimension or dimensions. 
+ [periodToDateAvg](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateAvg-function.html) averages the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time. 
+ [periodToDateCount](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateCount-function.html) calculates the number of values in a dimension or measure for a given time granularity (for instance, a quarter) up to a point in time, including duplicates.
+ [periodToDateMax](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateMax-function.html) returns the maximum value of the specified measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDateMedian](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateMedian-function.html) returns the median value of the specified measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDateMin](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateMin-function.html) returns the minimum value of the specified measure or date for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDatePercentile](https://docs.amazonaws.cn/quicksight/latest/user/periodToDatePercentile-function.html) calculates the percentile based on the actual numbers in the measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDatePercentileCont](https://docs.amazonaws.cn/quicksight/latest/user/periodToDatePercentileCont-function.html) calculates the percentile based on a continuous distribution of the numbers in the measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDateStDev](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateStDev-function.html) calculates the sample standard deviation of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDateStDevP](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateStDevP-function.html) calculates the population standard deviation of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDateSum](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateSum-function.html) adds the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDateVar](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateVar-function.html) calculates the sample variance of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [periodToDateVarP](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateVarP-function.html) calculates the population variance of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time.
+ [stdev](https://docs.amazonaws.cn/quicksight/latest/user/stdev-function.html) calculates the standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a sample.
+ [stdevIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevIf-function.html) calculates the sample standard deviation based on a conditional statement.
+ [stdevp](https://docs.amazonaws.cn/quicksight/latest/user/stdevp-function.html) calculates the standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a biased population.
+ [stdevpIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevpIf-function.html) calculates the population standard deviation based on a conditional statement.
+ [var](https://docs.amazonaws.cn/quicksight/latest/user/var-function.html) calculates the variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a sample.
+ [varIf](https://docs.amazonaws.cn/quicksight/latest/user/varIf-function.html) calculates the sample variance based on a conditional statement.
+ [varp](https://docs.amazonaws.cn/quicksight/latest/user/varp-function.html) calculates the variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a biased population.
+ [varpIf](https://docs.amazonaws.cn/quicksight/latest/user/varpIf-function.html) calculates the population variance based on a conditional statement.
+ [sum](https://docs.amazonaws.cn/quicksight/latest/user/sum-function.html) adds the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.
+ [sumIf](https://docs.amazonaws.cn/quicksight/latest/user/sumIf-function.html) calculates the sum based on a conditional statement.
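
As a quick illustration, the following conditional-aggregation expression (a sketch assuming a hypothetical numeric measure named `Sales`, together with the `Region` field used in earlier examples) adds up sales only for rows in the South region, grouped by the dimensions in the visual:

```
sumIf(Sales, Region = 'South')
```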

## Conditional functions


The conditional functions for calculated fields in Amazon Quick Sight include the following:
+ [coalesce](https://docs.amazonaws.cn/quicksight/latest/user/coalesce-function.html) returns the value of the first argument that is not null.
+ [ifelse](https://docs.amazonaws.cn/quicksight/latest/user/ifelse-function.html) evaluates a set of *if*, *then* expression pairings, and returns the value of the *then* argument for the first *if* argument that evaluates to true.
+ [in](https://docs.amazonaws.cn/quicksight/latest/user/in-function.html) evaluates an expression to see if it is in a given list of values.
+ [isNotNull](https://docs.amazonaws.cn/quicksight/latest/user/isNotNull-function.html) evaluates an expression to see if it is not null.
+ [isNull](https://docs.amazonaws.cn/quicksight/latest/user/isNull-function.html) evaluates an expression to see if it is null. If the expression is null, `isNull` returns true, and otherwise it returns false.
+ [notIn](https://docs.amazonaws.cn/quicksight/latest/user/notIn-function.html) evaluates an expression to see if it is not in a given list of values.
+ [nullIf](https://docs.amazonaws.cn/quicksight/latest/user/nullIf-function.html) compares two expressions. If they are equal, the function returns null. If they are not equal, the function returns the first expression.
+ [switch](https://docs.amazonaws.cn/quicksight/latest/user/switch-function.html) returns an expression that matches the first label equal to the condition expression.
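
For instance, the following expression (a sketch using the `Region` field from earlier examples) replaces null region values with a label:

```
ifelse(isNull(Region), 'Unknown', Region)
```

Because `coalesce` returns the first non-null argument, `coalesce(Region, 'Unknown')` is a more compact way to express the same idea.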

## Date functions


The date functions for calculated fields in Amazon Quick Sight include the following:
+ [addDateTime](https://docs.amazonaws.cn/quicksight/latest/user/addDateTime-function.html) adds or subtracts a unit of time to or from the date or time provided.
+ [addWorkDays](https://docs.amazonaws.cn/quicksight/latest/user/addWorkDays-function.html) adds or subtracts the given number of work days to or from the date or time provided.
+ [dateDiff](https://docs.amazonaws.cn/quicksight/latest/user/dateDiff-function.html) returns the difference in days between two date fields. 
+ [epochDate](https://docs.amazonaws.cn/quicksight/latest/user/epochDate-function.html) converts an epoch date into a standard date. 
+ [extract](https://docs.amazonaws.cn/quicksight/latest/user/extract-function.html) returns a specified portion of a date value. 
+ [formatDate](https://docs.amazonaws.cn/quicksight/latest/user/formatDate-function.html) formats a date using a pattern you specify. 
+ [isWorkDay](https://docs.amazonaws.cn/quicksight/latest/user/isWorkDay-function.html) returns TRUE if a given date-time value is a work or business day.
+ [netWorkDays](https://docs.amazonaws.cn/quicksight/latest/user/netWorkDays-function.html) returns the number of working days between the two date values provided.
+ [now](https://docs.amazonaws.cn/quicksight/latest/user/now-function.html) returns the current date and time, using either settings for a database, or UTC for file and Salesforce. 
+ [truncDate](https://docs.amazonaws.cn/quicksight/latest/user/truncDate-function.html) returns a date value that represents a specified portion of a date. 
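
For example, the following expression (a sketch using the `OrderDate` field from earlier examples) returns the number of days between each order date and the current date and time:

```
dateDiff(OrderDate, now())
```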

## Numeric functions


The numeric functions for calculated fields in Amazon Quick Sight include the following:
+ [ceil](https://docs.amazonaws.cn/quicksight/latest/user/ceil-function.html) rounds a decimal value to the next highest integer. 
+ [decimalToInt](https://docs.amazonaws.cn/quicksight/latest/user/decimalToInt-function.html) converts a decimal value to an integer. 
+ [floor](https://docs.amazonaws.cn/quicksight/latest/user/floor-function.html) decrements a decimal value to the next lowest integer. 
+ [intToDecimal](https://docs.amazonaws.cn/quicksight/latest/user/intToDecimal-function.html) converts an integer value to a decimal. 
+ [round](https://docs.amazonaws.cn/quicksight/latest/user/round-function.html) rounds a decimal value to the closest integer or, if scale is specified, to the closest decimal place. 
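
For example, the following expression (a sketch assuming the numeric fields `Amount` and `Quantity` from the earlier arithmetic example) computes a unit price rounded to two decimal places:

```
round(Amount / Quantity, 2)
```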

## Mathematical functions


The mathematical functions for calculated fields in Amazon Quick Sight include the following: 
+ [mod](https://docs.amazonaws.cn/quicksight/latest/user/mod-function.html)`(number, divisor)` – Finds the remainder after dividing a number by a divisor.
+ [log](https://docs.amazonaws.cn/quicksight/latest/user/log-function.html)`(expression)` – Returns the base 10 logarithm of a given expression. 
+ [ln](https://docs.amazonaws.cn/quicksight/latest/user/ln-function.html)`(expression)` – Returns the natural logarithm of a given expression. 
+ [abs](https://docs.amazonaws.cn/quicksight/latest/user/abs-function.html)`(expression)` – Returns the absolute value of a given expression. 
+ [sqrt](https://docs.amazonaws.cn/quicksight/latest/user/sqrt-function.html)`(expression)` – Returns the square root of a given expression. 
+ [exp](https://docs.amazonaws.cn/quicksight/latest/user/exp-function.html)`(expression)` – Returns the base of natural log *e* raised to the power of a given expression. 

## String functions


The string (text) functions for calculated fields in Amazon Quick Sight include the following:
+ [concat](https://docs.amazonaws.cn/quicksight/latest/user/concat-function.html) concatenates two or more strings. 
+ [contains](https://docs.amazonaws.cn/quicksight/latest/user/contains-function.html) checks if an expression contains a substring. 
+ [endsWith](https://docs.amazonaws.cn/quicksight/latest/user/endsWith-function.html) checks if the expression ends with the substring specified.
+ [left](https://docs.amazonaws.cn/quicksight/latest/user/left-function.html) returns the specified number of leftmost characters from a string. 
+ [locate](https://docs.amazonaws.cn/quicksight/latest/user/locate-function.html) locates a substring within another string, and returns the number of characters before the substring. 
+ [ltrim](https://docs.amazonaws.cn/quicksight/latest/user/ltrim-function.html) removes preceding blank space from a string. 
+ [parseDate](https://docs.amazonaws.cn/quicksight/latest/user/parseDate-function.html) parses a string to determine if it contains a date value, and returns the date if found. 
+ [parseDecimal](https://docs.amazonaws.cn/quicksight/latest/user/parseDecimal-function.html) parses a string to determine if it contains a decimal value. 
+ [parseInt](https://docs.amazonaws.cn/quicksight/latest/user/parseInt-function.html) parses a string to determine if it contains an integer value.
+ [parseJson](https://docs.amazonaws.cn/quicksight/latest/user/parseJson-function.html) parses values from a native JSON or from a JSON object in a text field.
+ [replace](https://docs.amazonaws.cn/quicksight/latest/user/replace-function.html) replaces part of a string with a new string. 
+ [right](https://docs.amazonaws.cn/quicksight/latest/user/right-function.html) returns the specified number of rightmost characters from a string.
+ [rtrim](https://docs.amazonaws.cn/quicksight/latest/user/rtrim-function.html) removes following blank space from a string.
+ [split](https://docs.amazonaws.cn/quicksight/latest/user/split-function.html) splits a string into an array of substrings, based on a delimiter that you choose, and returns the item specified by the position. 
+ [startsWith](https://docs.amazonaws.cn/quicksight/latest/user/startsWith-function.html) checks if the expression starts with the substring specified.
+ [strlen](https://docs.amazonaws.cn/quicksight/latest/user/strlen-function.html) returns the number of characters in a string.
+ [substring](https://docs.amazonaws.cn/quicksight/latest/user/substring-function.html) returns the specified number of characters in a string, starting at the specified location. 
+ [toLower](https://docs.amazonaws.cn/quicksight/latest/user/toLower-function.html) formats a string in all lowercase.
+ [toString](https://docs.amazonaws.cn/quicksight/latest/user/toString-function.html) formats the input expression as a string.
+ [toUpper](https://docs.amazonaws.cn/quicksight/latest/user/toUpper-function.html) formats a string in all uppercase.
+ [trim](https://docs.amazonaws.cn/quicksight/latest/user/trim-function.html) removes both preceding and following blank space from a string.
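
For example, the following expression (a sketch using the `State` and `Region` fields from earlier examples) builds an uppercase label such as **WA - SOUTH**:

```
toUpper(concat(State, ' - ', Region))
```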

## Table calculations


Table calculations form a group of functions that provide context in an analysis. They provide support for enriched aggregated analysis. By using these calculations, you can address common business scenarios such as calculating percentage of total, running sum, difference, common baseline, and rank. 

When you are analyzing data in a specific visual, you can apply table calculations to the current set of data to discover how dimensions influence measures or each other. Visualized data is your result set based on your current dataset, with all the filters, field selections, and customizations applied. To see exactly what this result set is, you can export your visual to a file. A table calculation function performs operations on the data to reveal relationships between fields. 

**Lookup-based functions**
+ [difference](https://docs.amazonaws.cn/quicksight/latest/user/difference-function.html) calculates the difference between a measure based on one set of partitions and sorts, and a measure based on another. 
+ [lag](https://docs.amazonaws.cn/quicksight/latest/user/lag-function.html) calculates the lag (previous) value for a measure. 
+ [lead](https://docs.amazonaws.cn/quicksight/latest/user/lead-function.html) calculates the lead (following) value for a measure. 
+ [percentDifference](https://docs.amazonaws.cn/quicksight/latest/user/percentDifference-function.html) calculates the percentage difference between the current value and a comparison value.
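
For example, the following expression is a sketch of `lag`, assuming a hypothetical `Sales` measure and the `OrderDate` and `State` fields from earlier examples; see the `lag` topic for the exact argument order. It returns the previous value of total sales when rows are sorted by order date, partitioned by state:

```
lag(sum(Sales), [OrderDate ASC], 1, [State])
```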

**Over functions**
+ [avgOver](https://docs.amazonaws.cn/quicksight/latest/user/avgOver-function.html) calculates the average of a measure over one or more dimensions.
+ [countOver](https://docs.amazonaws.cn/quicksight/latest/user/countOver-function.html) calculates the count of a field over one or more dimensions.
+ [distinctCountOver](https://docs.amazonaws.cn/quicksight/latest/user/distinctCountOver-function.html) calculates the distinct count of the operand partitioned by the specified attributes at a specified level. 
+ [maxOver](https://docs.amazonaws.cn/quicksight/latest/user/maxOver-function.html) calculates the maximum of a measure over one or more dimensions. 
+ [minOver](https://docs.amazonaws.cn/quicksight/latest/user/minOver-function.html) calculates the minimum of a measure over one or more dimensions. 
+ [percentileOver](https://docs.amazonaws.cn/quicksight/latest/user/percentileOver-function.html) (alias of `percentileDiscOver`) calculates the *n*th percentile of a measure partitioned by a list of dimensions. 
+ [percentileContOver](https://docs.amazonaws.cn/quicksight/latest/user/percentileContOver-function.html) calculates the *n*th percentile based on a continuous distribution of the numbers of a measure partitioned by a list of dimensions.
+ [percentileDiscOver](https://docs.amazonaws.cn/quicksight/latest/user/percentileDiscOver-function.html) calculates the *n*th percentile based on the actual numbers of a measure partitioned by a list of dimensions. 
+ [percentOfTotal](https://docs.amazonaws.cn/quicksight/latest/user/percentOfTotal-function.html) calculates the percentage that a measure contributes to the total. 
+ [periodOverPeriodDifference](https://docs.amazonaws.cn/quicksight/latest/user/periodOverPeriodDifference-function.html) calculates the difference of a measure over two different time periods as specified by period granularity and offset.
+ [periodOverPeriodLastValue](https://docs.amazonaws.cn/quicksight/latest/user/periodOverPeriodLastValue-function.html) calculates the last (previous) value of a measure from a previous time period as specified by period granularity and offset.
+ [periodOverPeriodPercentDifference](https://docs.amazonaws.cn/quicksight/latest/user/periodOverPeriodPercentDifference-function.html) calculates the percent difference of a measure over two different time periods as specified by period granularity and offset.
+ [periodToDateAvgOverTime](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateAvgOverTime-function.html) calculates the average of a measure for a given time granularity (for instance, a quarter) up to a point in time. 
+ [periodToDateCountOverTime](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateCountOverTime-function.html) calculates the count of a dimension or measure for a given time granularity (for instance, a quarter) up to a point in time. 
+ [periodToDateMaxOverTime](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateMaxOverTime-function.html) calculates the maximum of a measure or date for a given time granularity (for instance, a quarter) up to a point in time. 
+ [periodToDateMinOverTime](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateMinOverTime-function.html) calculates the minimum of a measure or date for a given time granularity (for instance, a quarter) up to a point in time. 
+ [https://docs.amazonaws.cn/quicksight/latest/user/periodToDateSumOverTime-function.html](https://docs.amazonaws.cn/quicksight/latest/user/periodToDateSumOverTime-function.html) calculates the sum of a measure for a given time granularity (for instance, a quarter) up to a point in time. 
+ [https://docs.amazonaws.cn/quicksight/latest/user/sumOver-function.html](https://docs.amazonaws.cn/quicksight/latest/user/sumOver-function.html) calculates the sum of a measure over one or more dimensions. 
+ [https://docs.amazonaws.cn/quicksight/latest/user/stdevOver-function.html](https://docs.amazonaws.cn/quicksight/latest/user/stdevOver-function.html) calculates the standard deviation of the specified measure, partitioned by the chosen attribute or attributes, based on a sample.
+ [https://docs.amazonaws.cn/quicksight/latest/user/stdevpOver-function.html](https://docs.amazonaws.cn/quicksight/latest/user/stdevpOver-function.html) calculates the standard deviation of the specified measure, partitioned by the chosen attribute or attributes, based on a biased population.
+ [https://docs.amazonaws.cn/quicksight/latest/user/varOver-function.html](https://docs.amazonaws.cn/quicksight/latest/user/varOver-function.html) calculates the variance of the specified measure, partitioned by the chosen attribute or attributes, based on a sample. 
+ [https://docs.amazonaws.cn/quicksight/latest/user/varpOver-function.html](https://docs.amazonaws.cn/quicksight/latest/user/varpOver-function.html) calculates the variance of the specified measure, partitioned by the chosen attribute or attributes, based on a biased population. 
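These functions share a common shape: an aggregated measure, followed by a list of partition dimensions. As an illustrative sketch (field names are hypothetical), the following computes each row's share of its region's total sales, because `sumOver` returns the same regional total for every row in a `Region` partition:

```
sum(Sales) / sumOver(sum(Sales), [Region])
```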

**Ranking functions**
+ [https://docs.amazonaws.cn/quicksight/latest/user/rank-function.html](https://docs.amazonaws.cn/quicksight/latest/user/rank-function.html) calculates the rank of a measure or a dimension.
+ [https://docs.amazonaws.cn/quicksight/latest/user/denseRank-function.html](https://docs.amazonaws.cn/quicksight/latest/user/denseRank-function.html) calculates the rank of a measure or a dimension, ignoring duplicates.
+ [https://docs.amazonaws.cn/quicksight/latest/user/percentileRank-function.html](https://docs.amazonaws.cn/quicksight/latest/user/percentileRank-function.html) calculates the rank of a measure or a dimension, based on percentile.
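As an illustrative sketch (field name hypothetical), the following ranks the dimension values in a visual by their total sales, highest first:

```
rank([sum(Sales) DESC])
```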

**Running functions**
+ [https://docs.amazonaws.cn/quicksight/latest/user/runningAvg-function.html](https://docs.amazonaws.cn/quicksight/latest/user/runningAvg-function.html) calculates a running average for a measure.
+ [https://docs.amazonaws.cn/quicksight/latest/user/runningCount-function.html](https://docs.amazonaws.cn/quicksight/latest/user/runningCount-function.html) calculates a running count for a measure.
+ [https://docs.amazonaws.cn/quicksight/latest/user/runningMax-function.html](https://docs.amazonaws.cn/quicksight/latest/user/runningMax-function.html) calculates a running maximum for a measure.
+ [https://docs.amazonaws.cn/quicksight/latest/user/runningMin-function.html](https://docs.amazonaws.cn/quicksight/latest/user/runningMin-function.html) calculates a running minimum for a measure.
+ [https://docs.amazonaws.cn/quicksight/latest/user/runningSum-function.html](https://docs.amazonaws.cn/quicksight/latest/user/runningSum-function.html) calculates a running sum for a measure. 
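For example (field names hypothetical), the following produces a running total of sales, accumulating row by row in ascending date order:

```
runningSum(sum(Sales), [Date ASC])
```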

**Window functions**
+ [https://docs.amazonaws.cn/quicksight/latest/user/firstValue-function.html](https://docs.amazonaws.cn/quicksight/latest/user/firstValue-function.html) calculates the first value of the aggregated measure or dimension partitioned and sorted by specified attributes. 
+ [https://docs.amazonaws.cn/quicksight/latest/user/lastValue-function.html](https://docs.amazonaws.cn/quicksight/latest/user/lastValue-function.html) calculates the last value of the aggregated measure or dimension partitioned and sorted by specified attributes. 
+ [https://docs.amazonaws.cn/quicksight/latest/user/windowAvg-function.html](https://docs.amazonaws.cn/quicksight/latest/user/windowAvg-function.html) calculates the average of the aggregated measure in a custom window that is partitioned and sorted by specified attributes.
+ [https://docs.amazonaws.cn/quicksight/latest/user/windowCount-function.html](https://docs.amazonaws.cn/quicksight/latest/user/windowCount-function.html) calculates the count of the aggregated measure in a custom window that is partitioned and sorted by specified attributes.
+ [https://docs.amazonaws.cn/quicksight/latest/user/windowMax-function.html](https://docs.amazonaws.cn/quicksight/latest/user/windowMax-function.html) calculates the maximum of the aggregated measure in a custom window that is partitioned and sorted by specified attributes.
+ [https://docs.amazonaws.cn/quicksight/latest/user/windowMin-function.html](https://docs.amazonaws.cn/quicksight/latest/user/windowMin-function.html) calculates the minimum of the aggregated measure in a custom window that is partitioned and sorted by specified attributes.
+ [https://docs.amazonaws.cn/quicksight/latest/user/windowSum-function.html](https://docs.amazonaws.cn/quicksight/latest/user/windowSum-function.html) calculates the sum of the aggregated measure in a custom window that is partitioned and sorted by specified attributes.
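As an illustrative sketch (field names hypothetical), the following computes a trailing three-period moving average of sales. For each row, it averages the current row and the two rows before it, sorted by date:

```
windowAvg(sum(Sales), [Date ASC], 2, 0)
```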

# Functions


In this section, you can find a list of functions available in Amazon Quick Sight. To view a list of functions sorted by category, with brief definitions, see [Functions by category](https://docs.amazonaws.cn/quicksight/latest/user/functions-by-category.html).

**Topics**
+ [

# addDateTime
](addDateTime-function.md)
+ [

# addWorkDays
](addWorkDays-function.md)
+ [

# Abs
](abs-function.md)
+ [

# Ceil
](ceil-function.md)
+ [

# Coalesce
](coalesce-function.md)
+ [

# Concat
](concat-function.md)
+ [

# contains
](contains-function.md)
+ [

# decimalToInt
](decimalToInt-function.md)
+ [

# dateDiff
](dateDiff-function.md)
+ [

# endsWith
](endsWith-function.md)
+ [

# epochDate
](epochDate-function.md)
+ [

# Exp
](exp-function.md)
+ [

# Extract
](extract-function.md)
+ [

# Floor
](floor-function.md)
+ [

# formatDate
](formatDate-function.md)
+ [

# Ifelse
](ifelse-function.md)
+ [

# in
](in-function.md)
+ [

# intToDecimal
](intToDecimal-function.md)
+ [

# isNotNull
](isNotNull-function.md)
+ [

# isNull
](isNull-function.md)
+ [

# isWorkDay
](isWorkDay-function.md)
+ [

# Left
](left-function.md)
+ [

# Locate
](locate-function.md)
+ [

# Log
](log-function.md)
+ [

# Ln
](ln-function.md)
+ [

# Ltrim
](ltrim-function.md)
+ [

# Mod
](mod-function.md)
+ [

# netWorkDays
](netWorkDays-function.md)
+ [

# Now
](now-function.md)
+ [

# notIn
](notIn-function.md)
+ [

# nullIf
](nullIf-function.md)
+ [

# parseDate
](parseDate-function.md)
+ [

# parseDecimal
](parseDecimal-function.md)
+ [

# parseInt
](parseInt-function.md)
+ [

# parseJson
](parseJson-function.md)
+ [

# Replace
](replace-function.md)
+ [

# Right
](right-function.md)
+ [

# Round
](round-function.md)
+ [

# Rtrim
](rtrim-function.md)
+ [

# Split
](split-function.md)
+ [

# Sqrt
](sqrt-function.md)
+ [

# startsWith
](startsWith-function.md)
+ [

# Strlen
](strlen-function.md)
+ [

# Substring
](substring-function.md)
+ [

# switch
](switch-function.md)
+ [

# toLower
](toLower-function.md)
+ [

# toString
](toString-function.md)
+ [

# toUpper
](toUpper-function.md)
+ [

# trim
](trim-function.md)
+ [

# truncDate
](truncDate-function.md)

# addDateTime


`addDateTime` adds or subtracts a unit of time from a datetime value. For example, `addDateTime(2,'YYYY',parseDate('02-JUL-2018', 'dd-MMM-yyyy') )` returns `02-JUL-2020`. You can use this function to perform date math on your date and time data. 

## Syntax


```
addDateTime(amount, period, datetime)
```

## Arguments


 *amount*   
A positive or negative integer value that represents the amount of time that you want to add or subtract from the provided datetime field. 

 *period*   
The unit of time that you want to add to or subtract from the provided datetime field, enclosed in quotes. Valid periods are as follows:   
+ YYYY: This adds or subtracts years. 
+ Q: This adds or subtracts quarters. 
+ MM: This adds or subtracts months. 
+ DD: This adds or subtracts days. 
+ WK: This adds or subtracts weeks. The week starts on Sunday in Amazon Quick Sight. 
+ HH: This adds or subtracts hours. 
+ MI: This adds or subtracts minutes. 
+ SS: This adds or subtracts seconds.
+ MS: This adds or subtracts milliseconds.

 *datetime*   
The date or time that you want to perform date math on. 

## Return type


Datetime

## Example


Let's say you have a field called `purchase_date` that has the following values.

```
2018 May 13 13:24
2017 Jan 31 23:06
2016 Dec 28 06:45
```

Using the following calculations, `addDateTime` modifies the values as shown following.

```
addDateTime(-2, 'YYYY', purchaseDate)

2016 May 13 13:24
2015 Jan 31 23:06
2014 Dec 28 06:45


addDateTime(4, 'DD', purchaseDate)

2018 May 17 13:24
2017 Feb 4 23:06
2017 Jan 1 06:45


addDateTime(20, 'MI', purchaseDate)

2018 May 13 13:44
2017 Jan 31 23:26
2016 Dec 28 07:05
```

# addWorkDays


`addWorkDays` adds or subtracts a designated number of work days to or from a given date value. The function returns the date of the work day that falls the designated number of work days after (or before) the given input date. 

## Syntax


```
addWorkDays(initDate, numWorkDays)
```

## Arguments


*initDate*  
A valid non-NULL date that acts as the start date for the calculation.   
+ **Dataset field** – Any `date` field from the dataset that you are adding this function to.
+ **Date function** – Any date output from another `date` function, for example `parseDate`, `epochDate`, `addDateTime`, and so on.  
**Example**  

  ```
  addWorkDays(epochDate(1659484800), numWorkDays)
  ```
+ **Calculated fields** – Any Quick Sight calculated field that returns a `date` value.  
**Example**  

  ```
  calcFieldStartDate = addDateTime(10, 'DD', startDate)
  addWorkDays(calcFieldStartDate, numWorkDays)
  ```
+ **Parameters** – Any Quick Sight `datetime` parameter.  
**Example**  

  ```
  addWorkDays($paramStartDate, numWorkDays)
  ```
+ Any combination of the above stated argument values.

 *numWorkDays*   
A non-NULL integer that represents the number of work days to add to (positive) or subtract from (negative) *initDate*.   
+ **Literal** – An integer literal directly typed in the expression editor.  
**Example**  

  ```
  addWorkDays(initDate, 10)
  ```
+ **Dataset field** – Any integer field from the dataset.   
**Example**  

  ```
  addWorkDays(initDate, numWorkDays)
  ```
+ **Scalar function or calculation** – Any scalar Quick Sight function or calculation that returns an integer output, for example `decimalToInt`, `abs`, and so on.  
**Example**  

  ```
  addWorkDays(initDate, decimalToInt(sqrt (abs(numWorkDays)) ) )
  ```
+ **Calculated field** – Any Quick Sight calculated field that returns an integer value.  
**Example**  

  ```
  someOtherIntegerCalcField = (num_days * 2) + 12
  addWorkDays(initDate, someOtherIntegerCalcField)
  ```
+ **Parameter** – Any Quick Sight integer parameter.  
**Example**  

  ```
  addWorkDays(initDate, $param_numWorkDays)
  ```
+ Any combination of the above stated argument values.

## Return type


Date 

## Output values


Expected output values include:
+ A date after *initDate* (when *numWorkDays* is positive)
+ A date before *initDate* (when *numWorkDays* is negative)
+ NULL when one or both of the arguments get a null value from the `dataset field`.

## Input errors


Disallowed argument values cause errors, as shown in the following examples.
+ Using a literal NULL as an argument in the expression is disallowed.  
**Example**  

  ```
  addWorkDays(NULL, numWorkDays) 
  ```  
**Example**  

  ```
  Error
  At least one of the arguments in this function does not have correct type. 
  Correct the expression and choose Create again.
  ```
+ Using a string literal as an argument, or any other data type other than a date, in the expression is disallowed. In the following example, the string **"2022-08-10"** looks like a date, but it is actually a string. To use it, you would have to use a function that converts to a date data type.  
**Example**  

  ```
  addWorkDays("2022-08-10", 10)
  ```  
**Example**  

  ```
  Error
  Expression addWorkDays("2022-08-10", numWorkDays) for function addWorkDays has 
  incorrect argument type addWorkDays(String, Number). 
  Function syntax expects Date, Integer.
  ```

## Example


A positive `numWorkDays` argument yields a date after the input date. A negative `numWorkDays` argument yields a date before the input date. A zero value yields the input date itself, whether or not it falls on a work day or a weekend.

The `addWorkDays` function operates at `DAY` granularity. Accuracy isn't preserved at any granularity finer or coarser than the `DAY` level.

```
addWorkDays(initDate, numWorkDays)
```

Let’s assume there is a field named `employmentStartDate` with the following values: 

```
2022-08-10
2022-08-06
2022-08-07
```

Using this field and the following calculations, `addWorkDays` returns the modified values shown below:

```
addWorkDays(employmentStartDate, 7)

2022-08-19 
2022-08-16 
2022-08-16 

addWorkDays(employmentStartDate, -5)

2022-08-02 
2022-08-01 
2022-08-03 

addWorkDays(employmentStartDate, 0)

2022-08-10 
2022-08-06 
2022-08-07
```

The following example calculates the total pro-rated bonus to be paid to each employee for 2 years based on how many days each employee has actually worked.

```
last_day_of_work = addWorkDays(employment_start_date, 730)
total_days_worked = netWorkDays(employment_start_date, last_day_of_work)
total_bonus = total_days_worked * bonus_per_day
```

![addWorkDays function example](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/addWorkDays-function-example.png)


# Abs


`abs` returns the absolute value of a given expression. 

## Syntax


```
abs(expression)
```

## Arguments


 *expression*   
The expression must be numeric. It can be a field name, a literal value, or another function. 
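For example (field names hypothetical), the following returns the size of the gap between actual and forecast sales, regardless of which value is larger:

```
abs(salesAmount - forecastAmount)
```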

# Ceil


`ceil` rounds a decimal value to the next highest integer. For example, `ceil(29.02)` returns `30`.

## Syntax


```
ceil(decimal)
```

## Arguments


 *decimal*   
A field that uses the decimal data type, a literal value like **17.62**, or a call to another function that outputs a decimal.

## Return type


Integer

## Example


The following example rounds a decimal field to the next highest integer.

```
ceil(salesAmount)
```

The following are the given field values.

```
20.13
892.03
57.54
```

For these field values, the following values are returned.

```
21
893
58
```

# Coalesce


`coalesce` returns the value of the first argument that is not null. When a non-null value is found, the remaining arguments in the list are not evaluated. If all arguments are null, the result is null. 0-length strings are valid values and are not considered equivalent to null.

## Syntax


```
coalesce(expression1, expression2 [, expression3, ...])
```

## Arguments


`coalesce` takes two or more expressions as arguments. All of the expressions must have the same data type or be able to be implicitly cast to the same data type.

 *expression*   
The expression can be numeric, datetime, or string. It can be a field name, a literal value, or another function. 

## Return type


`coalesce` returns a value of the same data type as the input arguments.

## Example


The following example returns a customer's billing address if it exists, their street address if there is no billing address, or "No address listed" if neither address is available.

```
coalesce(billingAddress, streetAddress, 'No address listed')
```

# Concat

`concat` concatenates two or more strings.

## Syntax


```
concat(expression1, expression2 [, expression3 ...])
```

## Arguments


`concat` takes two or more string expressions as arguments. 

 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

## Return type


String

## Examples


The following example concatenates three string fields and adds appropriate spacing.

```
concat(salutation, ' ', firstName, ' ', lastName)
```

The following are the given field values.

```
salutation     firstName          lastName
-------------------------------------------------------
Ms.            Li                  Juan
Dr.            Ana Carolina        Silva
Mr.            Nikhil              Jayashankar
```

For these field values, the following values are returned.

```
Ms. Li Juan
Dr. Ana Carolina Silva
Mr. Nikhil Jayashankar
```

The following example concatenates two string literals.

```
concat('Hello', 'world')
```

The following value is returned.

```
Helloworld
```

# contains

`contains` evaluates if the substring that you specify exists within an expression. If the expression contains the substring, `contains` returns true; otherwise, it returns false.

## Syntax


```
contains(expression, substring, string-comparison-mode)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

 *substring*   
The set of characters to check against the *expression*. The substring can occur one or more times in the *expression*.

 *string-comparison-mode*   
(Optional) Specifies the string comparison mode to use:  
+ `CASE_SENSITIVE` – String comparisons are case-sensitive. 
+ `CASE_INSENSITIVE` – String comparisons are case-insensitive.
If this argument is blank, the value defaults to `CASE_SENSITIVE`.

## Return type


Boolean

## Examples


### Default case sensitive example


The following case sensitive example evaluates if `state_nm` contains **New**.

```
contains(state_nm, "New")
```

The following are the given field values.

```
New York
new york
```

For these field values, the following values are returned.

```
true
false
```

### Case insensitive example


The following case insensitive example evaluates if `state_nm` contains **new**.

```
contains(state_nm, "new", CASE_INSENSITIVE)
```

The following are the given field values.

```
New York
new york
```

For these field values, the following values are returned.

```
true
true
```

### Example with conditional statements


The `contains` function can be used as the conditional statement within the following If functions: [avgIf](https://docs.amazonaws.cn/quicksight/latest/user/avgIf-function.html), [minIf](https://docs.amazonaws.cn/quicksight/latest/user/minIf-function.html), [distinct_countIf](https://docs.amazonaws.cn/quicksight/latest/user/distinct_countIf-function.html), [countIf](https://docs.amazonaws.cn/quicksight/latest/user/countIf-function.html), [maxIf](https://docs.amazonaws.cn/quicksight/latest/user/maxIf-function.html), [medianIf](https://docs.amazonaws.cn/quicksight/latest/user/medianIf-function.html), [stdevIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevIf-function.html), [stdevpIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevpIf-function.html), [sumIf](https://docs.amazonaws.cn/quicksight/latest/user/sumIf-function.html), [varIf](https://docs.amazonaws.cn/quicksight/latest/user/varIf-function.html), and [varpIf](https://docs.amazonaws.cn/quicksight/latest/user/varpIf-function.html). 

The following example sums `Sales` only if `state_nm` contains **New**.

```
sumIf(Sales,contains(state_nm, "New"))
```

### Does NOT contain example


The conditional `NOT` operator can be used to evaluate if the expression does not contain the specified substring. 

```
NOT(contains(state_nm, "New"))
```

### Example using numeric values


Numeric values can be used in the expression or substring arguments by applying the `toString` function.

```
contains(state_nm, toString(5) )
```

# decimalToInt


`decimalToInt` converts a decimal value to the integer data type by stripping off the decimal point and any numbers after it. `decimalToInt` does not round up. For example, `decimalToInt(29.99)` returns `29`.

## Syntax


```
decimalToInt(decimal)
```

## Arguments


 *decimal*   
A field that uses the decimal data type, a literal value like **17.62**, or a call to another function that outputs a decimal.

## Return type


Integer

## Example


The following example converts a decimal field to an integer.

```
decimalToInt(salesAmount)
```

The following are the given field values.

```
 20.13
892.03
 57.54
```

For these field values, the following values are returned.

```
 20
892
 57
```

# dateDiff


`dateDiff` returns the difference in days between two date fields. If you include a value for the period, `dateDiff` returns the difference in the period interval, rather than in days.

## Syntax


```
dateDiff(date1, date2,[period])
```

## Arguments


`dateDiff` takes two dates as arguments. Specifying a period is optional.

 *date 1*   
The first date in the comparison. A date field or a call to another function that outputs a date. 

 *date 2*   
The second date in the comparison. A date field or a call to another function that outputs a date. 

 *period*   
The period of difference that you want returned, enclosed in quotes. Valid periods are as follows:  
+ YYYY: This returns the difference in years.
+ Q: This returns the difference in quarters. 
+ MM: This returns the difference in months.
+ DD: This returns the difference in days.
+ WK: This returns the difference in weeks. The week starts on Sunday in Amazon Quick Sight.
+ HH: This returns the difference in hours.
+ MI: This returns the difference in minutes.
+ SS: This returns the difference in seconds.
+ MS: This returns the difference in milliseconds.

## Return type


Integer

## Example


The following example returns the difference between two dates.

```
dateDiff(orderDate, shipDate, "MM")
```

The following are the given field values.

```
orderDate          shipDate
=============================
01/01/18            03/05/18
09/13/17            10/20/17
```

For these field values, the following values are returned.

```
2
1
```

# endsWith

`endsWith` evaluates if the expression ends with a substring that you specify. If the expression ends with the substring, `endsWith` returns true; otherwise, it returns false.

## Syntax


```
endsWith(expression, substring, string-comparison-mode)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

 *substring*   
The set of characters to check against the *expression*. The substring can occur one or more times in the *expression*.

 *string-comparison-mode*   
(Optional) Specifies the string comparison mode to use:  
+ `CASE_SENSITIVE` – String comparisons are case-sensitive. 
+ `CASE_INSENSITIVE` – String comparisons are case-insensitive.
If this argument is blank, the value defaults to `CASE_SENSITIVE`.

## Return type


Boolean

## Examples


### Default case sensitive example


The following case sensitive example evaluates if `state_nm` ends with **"York"**.

```
endsWith(state_nm, "York")
```

The following are the given field values.

```
New York
new york
```

For these field values, the following values are returned.

```
true
false
```

### Case insensitive example


The following case insensitive example evaluates if `state_nm` ends with **"york"**.

```
endsWith(state_nm, "york", CASE_INSENSITIVE)
```

The following are the given field values.

```
New York
new york
```

For these field values, the following values are returned.

```
true
true
```

### Example with conditional statements


The `endsWith` function can be used as the conditional statement within the following If functions: [avgIf](https://docs.amazonaws.cn/quicksight/latest/user/avgIf-function.html), [minIf](https://docs.amazonaws.cn/quicksight/latest/user/minIf-function.html), [distinct_countIf](https://docs.amazonaws.cn/quicksight/latest/user/distinct_countIf-function.html), [countIf](https://docs.amazonaws.cn/quicksight/latest/user/countIf-function.html), [maxIf](https://docs.amazonaws.cn/quicksight/latest/user/maxIf-function.html), [medianIf](https://docs.amazonaws.cn/quicksight/latest/user/medianIf-function.html), [stdevIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevIf-function.html), [stdevpIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevpIf-function.html), [sumIf](https://docs.amazonaws.cn/quicksight/latest/user/sumIf-function.html), [varIf](https://docs.amazonaws.cn/quicksight/latest/user/varIf-function.html), and [varpIf](https://docs.amazonaws.cn/quicksight/latest/user/varpIf-function.html). 

The following example sums `Sales` only if `state_nm` ends with **"York"**.

```
sumIf(Sales,endsWith(state_nm, "York"))
```

### Does NOT end with example


The conditional `NOT` operator can be used to evaluate if the expression does not end with the specified substring. 

```
NOT(endsWith(state_nm, "York"))
```

### Example using numeric values


Numeric values can be used in the expression or substring arguments by applying the `toString` function.

```
endsWith(state_nm, toString(5) )
```

# epochDate


`epochDate` converts an epoch date into a standard date in the format yyyy-MM-dd**T**kk:mm:ss.SSS**Z**, using the format pattern syntax specified in [Class DateTimeFormat](http://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html) in the Joda project documentation. An example is `2015-10-15T19:11:51.003Z`. 

`epochDate` is supported for use with analyses based on datasets stored in SPICE.

## Syntax


```
epochDate(epochdate)
```

## Arguments


 *epochdate*   
An epoch date, which is an integer representation of a date as the number of seconds since 00:00:00 UTC on January 1, 1970.   
*epochdate* must be an integer. It can be the name of a field that uses the integer data type, a literal integer value, or a call to another function that outputs an integer. If the integer value is longer than 10 digits, the digits after the 10th place are discarded.

## Return type


Date

## Example


The following example converts an epoch date to a standard date.

```
epochDate(3100768000)
```

The following value is returned.

```
2068-04-04T12:26:40.000Z
```

# Exp


`exp` returns the base of natural log e raised to the power of a given expression. 

## Syntax


```
exp(expression)
```

## Arguments


 *expression*   
The expression must be numeric. It can be a field name, a literal value, or another function. 
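For example (field name hypothetical), the following converts a field that stores the natural log of revenue back to its original scale, because `exp` inverts `ln`:

```
exp(logRevenue)
```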

# Extract


`extract` returns a specified portion of a date value. Requesting a time-related portion of a date that doesn't contain time information returns 0.

## Syntax


```
extract(period, date)
```

## Arguments


 *period*   
The period that you want extracted from the date value. Valid periods are as follows:  
+ YYYY: This returns the year portion of the date.
+ Q: This returns the quarter that the date belongs to (1–4). 
+ MM: This returns the month portion of the date.
+ DD: This returns the day portion of the date.
+ WD: This returns the day of the week as an integer, with Sunday as 1.
+ HH: This returns the hour portion of the date.
+ MI: This returns the minute portion of the date.
+ SS: This returns the second portion of the date.
+ MS: This returns the millisecond portion of the date.
**Note**  
Extracting milliseconds is not supported in Presto databases below version 0.216.

 *date*   
A date field or a call to another function that outputs a date.

## Return type


Integer

## Example


The following example extracts the day from a date value.

```
extract('DD', orderDate)
```

The following are the given field values.

```
orderDate
=========
01/01/14  
09/13/16
```

For these field values, the following values are returned.

```
01
13
```

# Floor


`floor` decrements a decimal value to the next lowest integer. For example, `floor(29.08)` returns `29`.

## Syntax


```
floor(decimal)
```

## Arguments


 *decimal*   
A field that uses the decimal data type, a literal value like **17.62**, or a call to another function that outputs a decimal.

## Return type


Integer

## Example


The following example decrements a decimal field to the next lowest integer.

```
floor(salesAmount)
```

The following are the given field values.

```
20.13
892.03
57.54
```

For these field values, the following values are returned.

```
20
892
57
```

# formatDate


`formatDate` formats a date using a pattern you specify. When you are preparing data, you can use `formatDate` to reformat the date. To reformat a date in an analysis, you choose the format option from the context menu on the date field.

## Syntax


```
formatDate(date, ['format'])
```

## Arguments


 *date*   
A date field or a call to another function that outputs a date.

 *format*   
(Optional) A string containing the format pattern to apply. This argument accepts the format patterns specified in [Supported date formats](https://docs.amazonaws.cn/quicksight/latest/user/supported-date-formats.html).  
If you don't specify a format, this string defaults to yyyy-MM-dd**T**kk:mm:ss.SSS.

## Return type


String

## Example


The following example formats a UTC date.

```
formatDate(orderDate, 'dd-MMM-yyyy')
```

The following are the given field values.

```
order date      
=========
2012-12-14T00:00:00.000Z  
2013-12-29T00:00:00.000Z
2012-11-15T00:00:00.000Z
```

For these field values, the following values are returned.

```
14-Dec-2012
29-Dec-2013
15-Nov-2012
```

## Example


If the date contains single quotes or apostrophes, for example `yyyyMMdd'T'HHmmss`, you can handle this date format by using one of the following methods.
+ Enclose the entire date in double quotes, as shown in the following example:

  ```
  formatDate({myDateField}, "yyyyMMdd'T'HHmmss")
  ```
+ Escape the single quotes or apostrophes by adding a backslash ( `\` ) to the left of them, as shown in the following example: 

  ```
  formatDate({myDateField}, 'yyyyMMdd\'T\'HHmmss')
  ```

# Ifelse


`ifelse` evaluates a set of *if*, *then* expression pairings, and returns the value of the *then* argument for the first *if* argument that evaluates to true. If none of the *if* arguments evaluate to true, then the value of the *else* argument is returned.

## Syntax


```
ifelse(if-expression-1, then-expression-1 [, if-expression-n, then-expression-n ...], else-expression)
```

## Arguments


`ifelse` requires one or more *if*, *then* expression pairings, and exactly one expression for the *else* argument. 

 *if-expression*   
The expression to be evaluated as true or not. It can be a field name like **address1**, a literal value like **'Unknown'**, or another function like `toString(salesAmount)`. An example is `isNotNull(FieldName)`.   
If you use multiple AND and OR operators in the `if` argument, enclose statements in parentheses to identify processing order. For example, the following `if` argument returns records with a month of 1, 2, or 5 and a year of 2000.  

```
ifelse((month = 5 OR month < 3) AND year = 2000, 'yes', 'no')
```
The next `if` argument uses the same operators, but returns records with a month of 5 and any year, or with a month of 1 or 2 and a year of 2000.  

```
ifelse(month = 5 OR (month < 3 AND year = 2000), 'yes', 'no')
```

 *then-expression*   
The expression to return if its *if* argument is evaluated as true. It can be a field name like **address1**, a literal value like **'Unknown'**, or a call to another function. The expression must have the same data type as the other `then` arguments and the `else` argument. 

 *else-expression*   
The expression to return if none of the *if* arguments evaluate as true. It can be a field name like **address1**, a literal value like **'Unknown'**, or another function like `toString(salesAmount)`. The expression must have the same data type as all of the `then` arguments. 

## Return type


`ifelse` returns a value of the same data type as the values in *then-expression*. All data returned by the *then* and *else* expressions must be of the same data type or be converted to the same data type. 
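
As a sketch of this data-type requirement, assuming a numeric field named `salesAmount`, a cast such as `toString` keeps the *then* and *else* arguments in the same data type:

```
ifelse(salesAmount > 100, toString(salesAmount), 'low')
```

Without the `toString` call, the *then* argument would be numeric while the *else* argument is a string, and the expression would not validate.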

## Examples


The following example generates a column of aliases for field `country`.

```
ifelse(country = "United States", "US", country = "China", "CN", country = "India", "IN", "Others") 
```

For use cases like this, where each value in a field is evaluated against a list of literals and the result for the first matching value is returned, the [switch](https://docs.amazonaws.cn/quicksight/latest/user/switch-function.html) function is recommended to simplify your work. The previous example can be rewritten as the following statement using `switch`:

```
switch(country,"United States","US","China","CN","India","IN","Others")
```

The following example categorizes sales per customer into human-readable levels.

```
ifelse(salesPerCustomer < 1000, "VERY_LOW", salesPerCustomer < 10000, "LOW", salesPerCustomer < 100000, "MEDIUM", "HIGH")
```

The following example uses AND, OR, and NOT to compare multiple expressions with conditional operators. It tags customers who are NOT in Washington or Oregon, and who made more than 10 orders, with a special promotion. If no values match, the value `'n/a'` is used.

```
ifelse(( (NOT (State = 'WA' OR State =  'OR')) AND Orders > 10),  'Special Promotion XYZ',  'n/a')
```

The following example uses only OR to generate a new column that contains the name of the continent that corresponds to each `country`.

```
ifelse(country = "United States" OR country = "Canada", "North America", country = "China" OR country = "India" OR country = "Japan", "Asia", "Others")
```

The previous example can be simplified as shown in the next example. The following example uses `ifelse` and [in](https://docs.amazonaws.cn/quicksight/latest/user/in-function.html) to create a value in a new column for any row where the tested value is in a literal list. You could use `ifelse` with [notIn](https://docs.amazonaws.cn/quicksight/latest/user/notIn-function.html) as well.

```
ifelse(in(country,["United States", "Canada"]), "North America", in(country,["China","Japan","India"]),"Asia","Others")
```

Authors can save a literal list in a multivalue parameter and use it in the [in](https://docs.amazonaws.cn/quicksight/latest/user/in-function.html) or [notIn](https://docs.amazonaws.cn/quicksight/latest/user/notIn-function.html) functions. The following example is equivalent to the previous one, except that the literal lists are stored in two multivalue parameters. 

```
ifelse(in(country,${NorthAmericaCountryParam}), "North America", in(country,${AsiaCountryParam}),"Asia", "Others") 
```

The following example assigns a group to a sales record based on the sales total. The structure of each `if-then` phrase mimics the behavior of *between*, a keyword that doesn't currently work in calculated field expressions. For example, the result of the comparison `salesTotal >= 0 AND salesTotal < 500` returns the same values as the SQL comparison `salesTotal between 0 and 499`.

```
ifelse(salesTotal >= 0 AND salesTotal < 500, 'Group 1', salesTotal >= 500 AND salesTotal < 1000, 'Group 2', 'Group 3')
```

The following example tests for a NULL value by using `coalesce` to return the first non-NULL value. Instead of having to remember what a NULL in a date field means, you can return a readable description. If the disconnect date is NULL, the example returns the suspend date, unless both are NULL. In that case, `coalesce(DiscoDate, SuspendDate, '12/31/2491')` returns `'12/31/2491'`. The return value must match the data type of the other arguments. This date might seem like an unusual value, but a date in the 25th century reasonably simulates the "end of time," defined as the highest date in a data mart. 

```
ifelse (  (coalesce(DiscoDate, SuspendDate, '12/31/2491') = '12/31/2491'),  'Active subscriber', 'Inactive subscriber')
```

The following shows a more complex example in a more readable format, to show that you don't need to compress all of your code into one long line. This example provides multiple comparisons of the value of a survey result. It handles potential NULL values for this field and categorizes two acceptable ranges. It also labels one range that needs more testing and another that's not valid (out of range). For all remaining values, it applies the `else` condition and labels the row as needing a retest three years after the date on that row. 

```
ifelse
( 
    isNull({SurveyResult}), 'Untested',  
    {SurveyResult}=1, 'Range 1', 
    {SurveyResult}=2, 'Range 2', 
    {SurveyResult}=3, 'Need more testing',
    {SurveyResult}=99, 'Out of Range',
    concat  
    (
        'Retest by ', 
        toString    
        (
           addDateTime(3, "YYYY", {Date}) 
        )
    )
)
```

The following example assigns a "manually" created region name to a group of states. It also uses spacing and comments, wrapped in `/* */`, to make it easier to maintain the code. 

```
ifelse 
(    /* NE REGION*/
     locate('New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire',{State}) > 0,
    'Northeast',

     /* SE REGION*/
     locate('Georgia, Alabama, South Carolina, Louisiana',{State}) > 0,
    'Southeast',

    'Other Region'
)
```

The logic for the region tagging breaks down as follows:

1. We list the states that we want for each region, enclosing each list in quotation marks to make each list a string, as follows: 
   + `'New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire'`
   + `'Georgia, Alabama, South Carolina, Louisiana'`
   + You can add more sets, or use countries, cities, provinces, or What3Words if you want. 

1. We ask if the value for `State` (for each row) is found in the list, by using the `locate` function to return a nonzero value if the state is found in the list, as follows.

   ```
   locate('New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire',{State}) 
   
   and
   
   locate('Georgia, Alabama, South Carolina, Louisiana',{State})
   ```

1. The `locate` function returns a number instead of a `TRUE` or `FALSE`, but `ifelse` requires the `TRUE`/`FALSE` Boolean value. To get around this, we can compare the result of `locate` to a number. If the state is in the list, the return value is greater than zero.

   1. Ask if the state is present.

      ```
      locate('New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire',{State}) > 0
      ```

   1. If it's present, label the row with the specific region, in this case the Northeast region.

      ```
      /*The if expression:*/     locate('New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire',{State}) > 0,
      /*The then expression:*/   'Northeast',
      ```

1. Because we have states that aren't in a list, and because `ifelse` requires a single `else` expression, we provide `'Other Region'` as the label for the leftover states. 

   ```
   /*The if expression:*/     locate('New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire',{State}) > 0,
   /*The then expression:*/   'Northeast',
   /*The else expression:*/   'Other Region'
   ```

1. We wrap all that in the `ifelse( )` function to get the final version. The following example leaves out the Southeast region states that were in the original. You can add them back in place of the *`<insert more regions here>`* tag. 

   If you want to add more regions, you can construct more copies of those two lines and alter the list of states to suit your purpose. You can change the region name to something that suits you, and change the field name from `State` to anything that you need. 

   ```
   ifelse 
   (
   /*The if expression:*/     locate('New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire',{State}) > 0,
   /*The then expression:*/   'Northeast',
   
   /*<insert more regions here>*/
   
   /*The else expression:*/   'Other Region'
   )
   ```
**Note**  
There are other ways to do the initial comparison for the if expression. For example, suppose that you pose the question "What states are not missing from this list?" rather than "Which states are on the list?" If you do, you might phrase it differently. You might compare the locate statement to zero to find values that are missing from the list, and then use the NOT operator to classify them as "not missing," as follows.  

   ```
   /*The if expression:*/      NOT (locate('New York, New Jersey, Connecticut, Vermont, Maine, Rhode Island, New Hampshire',{State}) = 0),
   ```
Both versions are correct. The version that you choose should make the most sense to you and your team, so you can maintain it easily. If all the options seem equal, choose the simplest.

# in

`in` evaluates whether an expression exists within a literal list. If the list contains the expression, `in` returns true; otherwise, it returns false. `in` is case sensitive for string inputs.

`in` accepts two kinds of literal lists: a manually entered list or a [multivalue parameter](https://docs.amazonaws.cn/quicksight/latest/user/parameters-in-quicksight.html).

## Syntax


Using a manually entered list:

```
in(expression, [literal-1, ...])  
```

Using a multivalue parameter:

```
in(expression, $multivalue_parameter)
```

## Arguments


 *expression*   
The expression to be compared with the elements in the literal list. It can be a field name like `address`, a literal value like **'Unknown'**, a single-value parameter, or a call to another scalar function, provided that the function is not an aggregate function or a table calculation.

 *literal list*   
(Required) This can be a manually entered list or a multivalue parameter. This argument accepts up to 5,000 elements. However, in a direct query to a third-party data source, for example Oracle or Teradata, the limit can be smaller.  
+ ***manually entered list*** – One or more literal values in a list to be compared with the expression. The list should be enclosed in square brackets. All the literals to compare must have the same datatype as the expression. 
+ ***multivalue parameter*** – A pre-defined multivalue parameter passed in as a literal list. The multivalue parameter must have the same datatype as the expression. 


## Return type


Boolean: TRUE/FALSE

## Example with a static list


The following example evaluates the `origin_state_name` field for values in a list of strings. When comparing string inputs, `in` supports only case-sensitive comparison.

```
in(origin_state_name,["Georgia", "Ohio", "Texas"])
```

The following are the given field values.

```
"Washington"
"ohio"
"Texas"
```

For these field values the following values are returned.

```
false
false
true
```

Only the third return value is true, because "Texas" is the only field value that is in the list.

The following example evaluates the `fl_date` field for values in a list of strings. To match the type, `toString` casts the date type to the string type.

```
in(toString(fl_date),["2015-05-14","2015-05-15","2015-05-16"])
```

![\[An image of the results of the function example, shown in table form.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/in-function-example-manual-list.png)


Literals and NULL values are supported in the expression argument to be compared with the literals in the list. Both of the following examples generate a new column of TRUE values. 

```
in("Washington",["Washington","Ohio"])
```

```
in(NULL,[NULL,"Ohio"])
```

## Example with a multivalue parameter


Let's say an author creates a [multivalue parameter](https://docs.amazonaws.cn/quicksight/latest/user/parameters-in-quicksight.html) that contains a list of all the state names. Then the author adds a control to allow the reader to select values from the list.

Next, the reader selects three values ("Georgia", "Ohio", and "Texas") from the parameter's drop-down list control. In this case, the following expression is equivalent to the first example, where those three state names are passed as the literal list to be compared with the `origin_state_name` field. 

```
in(origin_state_name, ${stateNameMultivalueParameter})
```

## Example with `ifelse`


`in` can be nested in other functions as a Boolean value. For example, authors can evaluate any expression against a list and return the value that they want by using `in` with `ifelse`. The following example evaluates whether the `dest_state_name` of a flight is in a particular list of US states and returns a different category of state based on the comparison.

```
ifelse(in(dest_state_name,["Washington", "Oregon","California"]), "WestCoastUSState", "Other US State")
```

![\[An image of the results of the function example, shown in table form.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/in-function-with-ifelse.png)


# intToDecimal


`intToDecimal` converts an integer value to the decimal data type.

## Syntax


```
intToDecimal(integer)
```

## Arguments


 *integer*   
A field that uses the integer data type, a literal value like **14**, or a call to another function that outputs an integer.

## Return type


Decimal(Fixed) in the legacy data preparation experience.

Decimal(Float) in the new data preparation experience.

## Example


The following example converts an integer field to a decimal.

```
intToDecimal(price)
```

The following are the given field values.

```
20
892
57
```

For these field values, the following values are returned.

```
20.0
892.0
57.0
```

You can apply formatting inside an analysis, for example to format `price` as currency. 

# isNotNull


`isNotNull` evaluates an expression to see if it is not null. If the expression is not null, `isNotNull` returns true; otherwise, it returns false.

## Syntax


```
isNotNull(expression)
```

## Arguments


 *expression*   
The expression to be evaluated as null or not. It can be a field name like **address1** or a call to another function that outputs a string. 

## Return type


Boolean

## Example


The following example evaluates the `salesAmount` field for null values.

```
isNotNull(salesAmount)
```

The following are the given field values.

```
20.13
(null)
57.54
```

For these field values, the following values are returned.

```
true
false
true
```

# isNull


`isNull` evaluates an expression to see if it is null. If the expression is null, `isNull` returns true; otherwise, it returns false.

## Syntax


```
isNull(expression)
```

## Arguments


 *expression*   
The expression to be evaluated as null or not. It can be a field name like **address1** or a call to another function that outputs a string. 

## Return type


Boolean

## Example


The following example evaluates the `salesAmount` field for null values.

```
isNull(salesAmount)
```

The following are the given field values.

```
20.13
(null)
57.54
```

For these field values, the following values are returned.

```
false
true
false
```

The following example tests for a NULL value in an `ifelse` statement, and returns a human-readable value instead.

```
ifelse( isNull({ActiveFlag}) , 'Inactive',  'Active') 
```

# isWorkDay


`isWorkDay` evaluates a given date-time value and determines whether it is a workday.

`isWorkDay` assumes a standard 5-day work week starting from Monday and ending on Friday. Saturday and Sunday are assumed to be weekends. The function always calculates its result at the `DAY` granularity and evaluates the input date itself.

## Syntax


```
isWorkDay(inputDate)
```

## Arguments


 *inputDate*   
The date-time value that you want to evaluate. Valid values are as follows:  
+ Dataset fields: Any `date` field from the dataset that you are adding this function to.
+ Date Functions: Any date output from another `date` function, for example, `parseDate`.
+ Calculated fields: Any Quick Sight calculated field that returns a `date` value.
+ Parameters: Any Quick Sight `DateTime` parameter.

## Return type


Integer (`0` or `1`)

## Example


The following example determines whether the `application_date` field is a work day.

Let's assume that there's a field named `application_date` with the following values:

```
2022-08-10 
2022-08-06 
2022-08-07
```

When you add the following calculation for these field values, `isWorkDay` returns the following values:

```
isWorkDay({application_date})     
                                                     
1
0
0
```

The following example filters to employees whose employment ends on a work day, and uses conditional formatting to show whether their employment began on a work day or a weekend:

```
is_start_date_work_day = isWorkDay(employment_start_date)
is_end_date_work_day = isWorkDay(employment_end_date)
```

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/isWorkDay-example.png)


# Left

`left` returns the leftmost characters from a string, including spaces. You specify the number of characters to be returned. 

## Syntax


```
left(expression, limit)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

 *limit*   
The number of characters to be returned from *expression*, starting from the first character in the string.

## Return type


String

## Example


The following example returns the first 3 characters from a string.

```
left('Seattle Store #14', 3)
```

The following value is returned.

```
Sea
```

# Locate

`locate` locates a substring that you specify within another string, and returns the 1-based position of the first character of the substring. If it doesn't find the substring, the function returns 0.

## Syntax


```
locate(expression, substring, start)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

 *substring*   
The set of characters in *expression* that you want to locate. The substring can occur one or more times in *expression*.

 *start*   
(Optional) If *substring* occurs more than once, use *start* to identify where in the string the function starts looking for the substring. For example, suppose that you want to find the second occurrence of a substring and you think it typically occurs after the first 10 characters; you specify a *start* value of 10. The *start* value is 1-based.

## Return type


Integer

## Examples


The following example returns the position of the first occurrence of the substring 'and' in a string.

```
locate('1 and 2 and 3 and 4', 'and')
```

The following value is returned.

```
3
```

The following example returns the position of the first occurrence of the substring 'and' that appears after the fourth character of a string.

```
locate('1 and 2 and 3 and 4', 'and', 4)
```

The following value is returned.

```
9
```

# Log


`log` returns the base 10 logarithm of a given expression.

## Syntax


```
log(expression)
```

## Arguments


 *expression*   
The expression must be numeric. It can be a field name, a literal value, or another function. 
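
## Example


The following example, a sketch that assumes a numeric field named `revenue`, returns the base 10 logarithm of each value.

```
log(revenue)
```

For a row where `revenue` is 1000, the expression returns 3.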

# Ln


`ln` returns the natural logarithm of a given expression. 

## Syntax


```
ln(expression)
```

## Arguments


 *expression*   
The expression must be numeric. It can be a field name, a literal value, or another function. 
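
## Example


The following example, a sketch that assumes a numeric field named `revenue`, returns the natural logarithm of each value.

```
ln(revenue)
```

For a row where `revenue` is 1, the expression returns 0.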

# Ltrim

`ltrim` removes preceding blank space from a string.

## Syntax


```
ltrim(expression)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

## Return type


String

## Example


The following example removes the preceding spaces from a string.

```
ltrim('   Seattle Store #14')
```

The following value is returned.

```
Seattle Store #14
```

# Mod


Use the `mod` function to find the remainder after dividing the number by the divisor. You can use the `mod` function or the modulo operator (%) interchangeably.

## Syntax


```
mod(number, divisor)
```

```
number%divisor
```

## Arguments


 *number*   
The number is the positive integer that you want to divide and find the remainder for. 

 *divisor*   
The divisor is the positive integer that you are dividing by. If the divisor is zero, this function returns a divide-by-zero error.

## Example


The following examples return the remainder of 17 divided by 6. The first example uses the `%` operator, and the second example uses the `mod` function.

```
17%6
```

```
mod( 17, 6 )
```

The following value is returned.

```
5
```

# netWorkDays


`netWorkDays` returns, as an integer, the number of working days between two provided date values. The values can be date fields or custom dates generated by other Quick Sight date functions, such as `parseDate` or `epochDate`. 

`netWorkDays` assumes a standard 5-day work week starting from Monday and ending on Friday. Saturday and Sunday are assumed to be weekends. The calculation includes both `startDate` and `endDate`. The function operates on, and shows results at, the `DAY` granularity. 

## Syntax


```
netWorkDays(startDate, endDate)
```

## Arguments


 *startDate*   
A valid non-NULL date that acts as the start date for the calculation.   
+ Dataset fields: Any `date` field from the dataset that you are adding this function to.
+ Date Functions: Any date output from another `date` function, for example, `parseDate`.
+ Calculated fields: Any Quick Sight calculated field that returns a `date` value.
+ Parameters: Any Quick Sight `DateTime` parameter.
+ Any combination of the above stated argument values.

 *endDate*   
A valid non-NULL date that acts as the end date for the calculation.   
+ Dataset fields: Any `date` field from the dataset that you are adding this function to.
+ Date Functions: Any date output from another `date` function, for example, `parseDate`.
+ Calculated fields: Any Quick Sight calculated field that returns a `date` value.
+ Parameters: Any Quick Sight `DateTime` parameter.
+ Any combination of the above stated argument values.

## Return type


Integer 

## Output values


Expected output values include:
+ A positive integer (when *startDate* < *endDate*)
+ A negative integer (when *startDate* > *endDate*)
+ NULL, when one or both of the arguments get a null value from the dataset field

## Example


The following example returns the number of work days between the `startDate` and `endDate` field values.

```
netWorkDays({startDate}, {endDate})
```

The following are the given field values and the values that are returned.

```
startDate     endDate       netWorkDays
9/4/2022      9/11/2022     5
9/9/2022      9/2/2022      -6
9/10/2022     9/11/2022     0
9/12/2022     9/12/2022     1
```

The following example calculates the number of days worked by each employee and the salary expended per day for each employee:

```
days_worked = netWorkDays({employment_start_date}, {employment_end_date})
salary_per_day = {salary}/{days_worked}
```

The following example filters to employees whose employment ends on a work day, and uses conditional formatting to show whether their employment began on a work day or a weekend. Because this check takes a single date, it uses `isWorkDay` rather than `netWorkDays`:

```
is_start_date_work_day = isWorkDay(employment_start_date)
is_end_date_work_day = isWorkDay(employment_end_date)
```

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/netWorkDays-function-example.png)


# Now


For database datasets that directly query the database, `now` returns the current date and time using the settings and format specified by the database server. For SPICE and Salesforce datasets, `now` returns the UTC date and time, in the format `yyyy-MM-ddTkk:mm:ss:SSSZ` (for example, `2015-10-15T19:11:51:003Z`). 

## Syntax


```
now()
```

## Return type


Date
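
## Example


The following example, a sketch that assumes a `date` field named `orderDate`, uses `now` with the `dateDiff` function to calculate each order's age in days.

```
dateDiff(orderDate, now(), 'DD')
```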

# notIn

`notIn` evaluates whether an expression exists within a literal list. If the list doesn't contain the expression, `notIn` returns true; otherwise, it returns false. `notIn` is case sensitive for string inputs.

`notIn` accepts two kinds of literal lists: a manually entered list or a [multivalue parameter](https://docs.amazonaws.cn/quicksight/latest/user/parameters-in-quicksight.html).

## Syntax


Using a manually entered list:

```
notIn(expression, [literal-1, ...])  
```

Using a multivalue parameter:

```
notIn(expression, $multivalue_parameter)
```

## Arguments


 *expression*   
The expression to be compared with the elements in the literal list. It can be a field name like `address`, a literal value like **'Unknown'**, a single-value parameter, or a call to another scalar function, provided that the function is not an aggregate function or a table calculation.

 *literal list*   
(Required) This can be a manually entered list or a multivalue parameter. This argument accepts up to 5,000 elements. However, in a direct query to a third-party data source, for example Oracle or Teradata, the limit can be smaller.  
+ ***manually entered list*** – One or more literal values in a list to be compared with the expression. The list should be enclosed in square brackets. All the literals to compare must have the same datatype as the expression. 
+ ***multivalue parameter*** – A pre-defined multivalue parameter passed in as a literal list. The multivalue parameter must have the same datatype as the expression. 


## Return type


Boolean: TRUE/FALSE

## Example with a manually entered list


The following example evaluates the `origin_state_name` field for values that are not in a list of strings. When comparing string inputs, `notIn` supports only case-sensitive comparison.

```
notIn(origin_state_name,["Georgia", "Ohio", "Texas"])
```

The following are the given field values.

```
"Washington"
"ohio"
"Texas"
```

For these field values the following values are returned.

```
true
true
false
```

Only the third return value is false, because "Texas" is the only field value that is in the list.

The following example evaluates the `fl_date` field for values in a list of strings. To match the type, `toString` casts the date type to the string type.

```
notIn(toString(fl_date),["2015-05-14","2015-05-15","2015-05-16"])
```

![\[An image of the results of the function example, shown in table form.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/notin-function-example-manual-list.png)


Literals and NULL values are supported in the expression argument to be compared with the literals in the list. Both of the following examples generate a new column of FALSE values. 

```
notIn("Washington",["Washington","Ohio"])
```

```
notIn(NULL,[NULL,"Ohio"])
```

## Example with a multivalue parameter


Let's say an author creates a [multivalue parameter](https://docs.amazonaws.cn/quicksight/latest/user/parameters-in-quicksight.html) that contains a list of all the state names. Then the author adds a control to allow the reader to select values from the list.

Next, the reader selects three values ("Georgia", "Ohio", and "Texas") from the parameter's drop-down list control. In this case, the following expression is equivalent to the first example, where those three state names are passed as the literal list to be compared with the `origin_state_name` field. 

```
notIn(origin_state_name, ${stateNameMultivalueParameter})
```

## Example with `ifelse`


`notIn` can be nested in other functions as a Boolean value. For example, authors can evaluate any expression against a list and return the value that they want by using `notIn` with `ifelse`. The following example evaluates whether the `dest_state_name` of a flight is in a particular list of US states and returns a different category of state based on the comparison.

```
ifelse(notIn(dest_state_name,["Washington", "Oregon","California"]), "notWestCoastUSState", "WestCoastUSState")
```

![\[An image of the results of the function example, shown in table form.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/notin-function-with-ifelse.png)


# nullIf


`nullIf` compares two expressions. If they are equal, the function returns null. If they are not equal, the function returns the first expression.

## Syntax


```
nullIf(expression1, expression2)
```

## Arguments


`nullIf` takes two expressions as arguments. 

 *expression*   
The expression can be numeric, datetime, or string. It can be a field name, a literal value, or another function. 

## Return type


String

## Example


The following example returns nulls if the reason for a shipment delay is unknown.

```
nullIf(delayReason, 'unknown')
```

The following are the given field values.

```
delayReason
============
unknown         
back ordered 
weather delay
```

For these field values, the following values are returned.

```
(null)
back ordered 
weather delay
```

# parseDate

`parseDate` parses a string to determine if it contains a date value, and returns a standard date in the format `yyyy-MM-ddTkk:mm:ss.SSSZ` (using the format pattern syntax specified in [Class DateTimeFormat](http://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html) in the Joda project documentation), for example 2015-10-15T19:11:51.003Z. This function returns all rows that contain a date in a valid format and skips any rows that don't, including rows that contain null values.

Quick Sight supports dates in the range from Jan 1, 1900 00:00:00 UTC to Dec 31, 2037 23:59:59 UTC. For more information, see [Supported date formats](https://docs.amazonaws.cn/quicksight/latest/user/supported-date-formats.html).

## Syntax


```
parseDate(expression, ['format'])
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'1/1/2016'**, or a call to another function that outputs a string.

 *format*   
(Optional) A string containing the format pattern that the date strings in *expression* must match. For example, if you are using a field with data like **01/03/2016**, you specify the format 'MM/dd/yyyy'. If you don't specify a format, it defaults to `yyyy-MM-dd`. Rows whose data doesn't conform to *format* are skipped.   
Different date formats are supported based on the type of dataset used. Use the following table to see details of supported date formats.    
[\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/parseDate-function.html)

## Return type


Date

## Example


The following example evaluates `prodDate` to determine if it contains date values.

```
parseDate(prodDate, 'MM/dd/yyyy')
```

The following are the given field values.

```
prodDate
--------
01-01-1999
12/31/2006
1/18/1982 
7/4/2010
```

For these field values, the following rows are returned.

```
2006-12-31T00:00:00.000Z
1982-01-18T00:00:00.000Z
2010-07-04T00:00:00.000Z
```

# parseDecimal

`parseDecimal` parses a string to determine if it contains a decimal value. This function returns all rows that contain a decimal, integer, or null value, and skips any rows that don't. If the row contains an integer value, it is returned as a decimal with up to 4 decimal places. For example, a value of '2' is returned as '2.0'.

## Syntax


```
parseDecimal(expression)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'9.62'**, or a call to another function that outputs a string.

## Return type


Decimal(Fixed) in the legacy data preparation experience.

Decimal(Float) in the new data preparation experience.

## Example


The following example evaluates `fee` to determine if it contains decimal values.

```
parseDecimal(fee)
```

The following are the given field values.

```
fee
--------
2
2a
12.13
3b
3.9
(null)
198.353398
```

For these field values, the following rows are returned.

```
2.0
12.13
3.9
(null)
198.3533
```

# parseInt

`parseInt` parses a string to determine if it contains an integer value. This function returns all rows that contain a decimal, integer, or null value, and skips any rows that don't. If the row contains a decimal value, it is returned as the nearest integer, rounded down. For example, a value of '2.99' is returned as '2'.

## Syntax


```
parseInt(expression)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'3'**, or a call to another function that outputs a string.

## Return type


Integer

## Example


The following example evaluates `feeType` to determine if it contains integer values.

```
parseInt(feeType)
```

The following are the given field values.

```
feeType
--------
2
2.1
2a
3
3b
(null)
5
```

For these field values, the following rows are returned.

```
2
2
3
(null)
5
```

# parseJson

Use `parseJson` to extract values from a JSON object. 

If your dataset is stored in SPICE, you can use `parseJson` when you are preparing a dataset, but not in calculated fields during analysis.

For direct query, you can use `parseJson` both during data preparation and analysis. The `parseJson` function applies to either strings or to JSON native data types, depending on the dialect, as shown in the following table.


| Dialect | Type | 
| --- | --- | 
| PostgreSQL | JSON | 
| Amazon Redshift | String | 
| Microsoft SQL Server | String | 
| MySQL | JSON | 
| Teradata | JSON | 
| Oracle | String | 
| Presto | String | 
| Snowflake | Semistructured data type object and array | 
| Hive | String | 

## Syntax


```
parseJson(fieldName, path)
```

## Arguments


 *fieldName*   
The field containing the JSON object that you want to parse.

 *path*   
The path to the data element you want to parse from the JSON object. Only letters, numbers, and blank spaces are supported in the path argument. Valid path syntax includes:  
+ *$* – Root object
+ *.* – Child operator
+ *[ ]* – Subscript operator for array

## Return type


String

## Example


The following example evaluates incoming JSON to retrieve a value for item quantity. By using this during data preparation, you can create a table out of the JSON.

```
parseJson({jsonField}, "$.items.qty")
```

The following shows the JSON.

```
{
    "customer": "John Doe",
    "items": {
        "product": "Beer",
        "qty": 6
    },
    "list1": [
        "val1",
        "val2"
    ],
    "list2": [
        {
            "list21key1": "list1value1"
        }
    ]
}
```

For this example, the following value is returned.

```
6
```
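The *path* argument also supports the subscript operator for arrays. As a sketch against the same sample JSON, the following expression retrieves the value of `list21key1` from the first element of the `list2` array shown above.

```
parseJson({jsonField}, "$.list2[0].list21key1")
```

For the sample JSON, this would return `list1value1`.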

## Example


The following example evaluates `JSONObject1` to extract the first key value pair (KVP), labeled `"State"`, and assign the value to the calculated field that you are creating.

```
parseJson(JSONObject1, "$.State")
```

The following are the given field values.

```
JSONObject1
-----------
{"State":"New York","Product":"Produce","Date Sold":"1/16/2018","Sales Amount":"$3423.39"}
{"State":"North Carolina","Product":"Bakery Products","Date Sold":"2/1/2018","Sales Amount":"$3226.42"}
{"State":"Utah","Product":"Water","Date Sold":"4/24/2018","Sales Amount":"$7001.52"}
```

For these field values, the following rows are returned.

```
New York
North Carolina
Utah
```

# Replace

`replace` replaces part of a string with another string that you specify. 

## Syntax


```
replace(expression, substring, replacement)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

 *substring*   
The set of characters in *expression* that you want to replace. The substring can occur one or more times in *expression*.

 *replacement*   
The string you want to have substituted for *substring*.

## Return type


String

## Example


The following example replaces the substring 'and' with 'or'.

```
replace('1 and 2 and 3', 'and', 'or')
```

The following string is returned.

```
1 or 2 or 3
```

# Right

`right` returns the rightmost characters from a string, including spaces. You specify the number of characters to be returned.

## Syntax


```
right(expression, limit)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

 *limit*   
The number of characters to be returned from *expression*, starting from the last character in the string.

## Return type


String

## Example


The following example returns the last 12 characters from a string.

```
right('Seattle Store#14', 12)
```

The following value is returned.

```
tle Store#14
```

# Round


`round` rounds a decimal value to the closest integer if no scale is specified, or to the closest decimal place if scale is specified.

## Syntax


```
round(decimal, scale)
```

## Arguments


 *decimal*   
A field that uses the decimal data type, a literal value like **17.62**, or a call to another function that outputs a decimal.

 *scale*   
(Optional) The number of decimal places to use for the return values. If you omit *scale*, the value is rounded to the closest integer.

## Return type



| Operand | Return type in the legacy data preparation experience | Return type in the new data preparation experience | 
| --- | --- | --- | 
|  INT  |  DECIMAL(FIXED)  |  DECIMAL(FIXED)  | 
|  DECIMAL(FIXED)  |  DECIMAL(FIXED)  |  DECIMAL(FIXED)  | 
|  DECIMAL(FLOAT)  |  DECIMAL(FIXED)  |  DECIMAL(FLOAT)  | 

## Example


The following example rounds a decimal field to the closest second decimal place.

```
round(salesAmount, 2)
```

The following are the given field values.

```
20.1307
892.0388
57.5447
```

For these field values, the following values are returned.

```
20.13
892.04
57.54
```

# Rtrim

`rtrim` removes trailing blank space from the end of a string. 

## Syntax


```
rtrim(expression)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

## Return type


String

## Example


The following example removes the trailing spaces from a string.

```
rtrim('Seattle Store #14   ')
```

The following value is returned.

```
Seattle Store #14
```

# Split

`split` splits a string into an array of substrings, based on a delimiter that you choose, and returns the item specified by the position.

You can only add `split` to a calculated field during data preparation, not to an analysis. This function is not supported in direct queries to Microsoft SQL Server.

## Syntax


```
split(expression, delimiter, position)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street;1402 35th Ave;1818 Elm Ct;11 Janes Lane'**, or a call to another function that outputs a string.

 *delimiter*   
The character that delimits where the string is broken into substrings. For example, `split('one|two|three', '|', 2)` breaks the string into the following substrings.  

```
one
two
three
```
With `position = 2`, `split` returns `'two'`.

 *position*   
(Required) The position of the item to return from the array. The position of the first item in the array is 1.

## Return type


String array

## Example


The following example splits a string into an array, using the semicolon character (;) as the delimiter, and returns the third element of the array.

```
split('123 Test St;1402 35th Ave;1818 Elm Ct;11 Janes Lane', ';', 3)
```

The following item is returned.

```
1818 Elm Ct
```

This function skips items containing null values or empty strings. 

# Sqrt


`sqrt` returns the square root of a given expression. 

## Syntax


```
sqrt(expression)
```

## Arguments


 *expression*   
The expression must be numeric. It can be a field name, a literal value, or another function. 
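This entry doesn't list an example, so the following is a minimal sketch, assuming a numeric field named `revenue`.

```
sqrt(revenue)
```

For a row where `revenue` is 25, the expression returns 5.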

# startsWith

`startsWith` evaluates whether the expression starts with a substring that you specify. If the expression starts with the substring, `startsWith` returns true; otherwise, it returns false.

## Syntax


```
startsWith(expression, substring, string-comparison-mode)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

 *substring*   
The set of characters to check against the *expression*. The substring can occur one or more times in the *expression*.

 *string-comparison-mode*   
(Optional) Specifies the string comparison mode to use:  
+ `CASE_SENSITIVE` – String comparisons are case-sensitive. 
+ `CASE_INSENSITIVE` – String comparisons are case-insensitive.
If you omit this argument, the value defaults to `CASE_SENSITIVE`.

## Return type


Boolean

## Examples


### Default case sensitive example


The following case-sensitive example evaluates whether `state_nm` starts with **New**.

```
startsWith(state_nm, "New")
```

The following are the given field values.

```
New York
new york
```

For these field values, the following values are returned.

```
true
false
```

### Case insensitive example


The following case-insensitive example evaluates whether `state_nm` starts with **new**.

```
startsWith(state_nm, "new", CASE_INSENSITIVE)
```

The following are the given field values.

```
New York
new york
```

For these field values, the following values are returned.

```
true
true
```

### Example with conditional statements


The `startsWith` function can be used as the conditional statement within the following If functions: [avgIf](https://docs.amazonaws.cn/quicksight/latest/user/avgIf-function.html), [minIf](https://docs.amazonaws.cn/quicksight/latest/user/minIf-function.html), [distinct\_countIf](https://docs.amazonaws.cn/quicksight/latest/user/distinct_countIf-function.html), [countIf](https://docs.amazonaws.cn/quicksight/latest/user/countIf-function.html), [maxIf](https://docs.amazonaws.cn/quicksight/latest/user/maxIf-function.html), [medianIf](https://docs.amazonaws.cn/quicksight/latest/user/medianIf-function.html), [stdevIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevIf-function.html), [stdevpIf](https://docs.amazonaws.cn/quicksight/latest/user/stdevpIf-function.html), [sumIf](https://docs.amazonaws.cn/quicksight/latest/user/sumIf-function.html), [varIf](https://docs.amazonaws.cn/quicksight/latest/user/varIf-function.html), and [varpIf](https://docs.amazonaws.cn/quicksight/latest/user/varpIf-function.html). 

The following example sums `Sales` only if `state_nm` starts with **New**.

```
sumIf(Sales,startsWith(state_nm, "New"))
```

### Does NOT contain example


The conditional `NOT` operator can be used to evaluate if the expression does not start with the specified substring. 

```
NOT(startsWith(state_nm, "New"))
```

### Example using numeric values


Numeric values can be used in the expression or substring arguments by applying the `toString` function.

```
startsWith(state_nm, toString(5))
```

# Strlen

`strlen` returns the number of characters in a string, including spaces.

## Syntax


```
strlen(expression)
```

## Arguments


 *expression*   
An expression can be the name of a field that uses the string data type like **address1**, a literal value like **'Unknown'**, or another function like `substring(field_name,0,5)`.

## Return type


Integer

## Example


The following example returns the length of the specified string.

```
strlen('1421 Main Street')
```

The following value is returned.

```
16
```

# Substring

`substring` returns the characters in a string, starting at the location specified by the *start* argument and proceeding for the number of characters specified by the *length* arguments. 

## Syntax


```
substring(expression, start, length)
```

## Arguments


 *expression*   
An expression can be the name of a field that uses the string data type like **address1**, a literal value like **'Unknown'**, or another function like `substring(field_name,1,5)`.

 *start*   
The character location to start from. *start* is inclusive, so the character at the starting position is the first character in the returned value. The minimum value for *start* is 1. 

 *length*   
The number of characters to return, counting from *start*. Because *length* includes the character at *start*, the last character returned is (*length* - 1) positions after the starting character.

## Return type


String

## Example


The following example returns the 13th through 19th characters in a string. The beginning of the string is index 1, so you begin counting at the first character.

```
substring('Fantasy and Science Fiction',13,7)
```

The following value is returned.

```
Science
```

# switch


`switch` compares a *condition-expression* with the literal labels in a set of label and *return-expression* pairings. It then returns the *return-expression* that corresponds to the first label equal to the *condition-expression*. If no label equals the *condition-expression*, `switch` returns the *default-expression*. Every *return-expression* and the *default-expression* must have the same data type.

## Syntax


```
switch(condition-expression, label-1, return-expression-1 [, label-n, return-expression-n ...], 
        default-expression)
```

## Arguments


`switch` requires one or more *if*,*then* expression pairings, and requires exactly one expression for the *else* argument. 

 *condition-expression*   
The expression to be compared with the label-literals. It can be a field name like `address`, a literal value like '`Unknown`', or another scalar function like `toString(salesAmount)`. 

 *label*   
The literal to be compared with the *condition-expression* argument. All of the labels must have the same data type as the *condition-expression* argument. `switch` accepts up to 5,000 labels. 

 *return-expression*   
The expression to return if the value of its label equals the value of the *condition-expression*. It can be a field name like `address`, a literal value like '`Unknown`', or another scalar function like `toString(salesAmount)`. All of the *return-expression* arguments must have the same data type as the *default-expression*.

 *default-expression*   
The expression to return if no label argument equals the value of *condition-expression*. It can be a field name like `address`, a literal value like '`Unknown`', or another scalar function like `toString(salesAmount)`. The *default-expression* must have the same data type as all of the *return-expression* arguments.

## Return type


`switch` returns a value of the same data type as the values in *return-expression*. All data returned by the *return-expression* and *default-expression* arguments must be of the same data type or be converted to the same data type. 

## General Examples


The following example returns the Amazon Web Services Region code for an input Region name. 

```
switch(region_name, 
               "US East (N. Virginia)", "us-east-1", 
               "Europe (Ireland)", "eu-west-1", 
               "US West (N. California)", "us-west-1", 
               "other regions")
```

The following are the given field values.

```
"US East (N. Virginia)"
"US West (N. California)"
"Asia Pacific (Tokyo)"
```

For these field values, the following values are returned.

```
"us-east-1"
"us-west-1"
"other regions"
```

## Use switch to replace `ifelse`


The following `ifelse` expression is equivalent to the previous example. When `ifelse` only evaluates whether the values of one field equal various literal values, as it does here, `switch` is the better choice.

```
ifelse(region_name = "US East (N. Virginia)", "us-east-1", 
               region_name = "Europe (Ireland)", "eu-west-1", 
               region_name = "US West (N. California)", "us-west-1", 
               "other regions")
```

## Expression as return value


The following example uses expressions in *return-expressions*:

```
switch({origin_city_name}, 
               "Albany, NY", {arr_delay} + 20, 
               "Alexandria, LA", {arr_delay} - 10,
               "New York, NY", {arr_delay} * 2, 
               {arr_delay})
```

The preceding example changes the expected delay time for each flight from a particular city.

![\[An image of the results of the function example, shown in table form.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/switch-function-example.png)


# toLower

`toLower` formats a string in all lowercase. `toLower` skips rows containing null values.

## Syntax


```
toLower(expression)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

## Return type


String

## Example


The following example converts a string value into lowercase.

```
toLower('Seattle Store #14')
```

The following value is returned.

```
seattle store #14
```

# toString

`toString` formats the input expression as a string. `toString` skips rows containing null values.

## Syntax


```
toString(expression)
```

## Arguments


 *expression*   
 An expression can be a field of any data type, a literal value like **14.62**, or a call to another function that returns any data type.

## Return type


String

## Example


The following example returns the values from `payDate` (which uses the `date` data type) as strings.

```
toString(payDate)
```

The following are the given field values.

```
payDate
--------
1992-11-14T00:00:00.000Z
2012-10-12T00:00:00.000Z
1973-04-08T00:00:00.000Z
```

For these field values, the following rows are returned.

```
1992-11-14T00:00:00.000Z
2012-10-12T00:00:00.000Z
1973-04-08T00:00:00.000Z
```

# toUpper

`toUpper` formats a string in all uppercase. `toUpper` skips rows containing null values.

## Syntax


```
toUpper(expression)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

## Return type


String

## Example


The following example converts a string value into uppercase.

```
toUpper('Seattle Store #14')
```

The following value is returned.

```
SEATTLE STORE #14
```

# trim

`trim` removes both leading and trailing blank space from a string. 

## Syntax


```
trim(expression)
```

## Arguments


 *expression*   
The expression must be a string. It can be the name of a field that uses the string data type, a literal value like **'12 Main Street'**, or a call to another function that outputs a string.

## Return type


String

## Example


The following example removes the leading and trailing spaces from a string.

```
trim('   Seattle Store #14   ')
```

The following value is returned.

```
Seattle Store #14
```

# truncDate


`truncDate` returns a date value that represents a specified portion of a date. For example, requesting the year portion of the value 2012-09-02T00:00:00.000Z returns 2012-01-01T00:00:00.000Z. Specifying a time-related period for a date that doesn't contain time information returns the initial date value unchanged.

## Syntax


```
truncDate('period', date)
```

## Arguments


 *period*   
The period of the date that you want returned. Valid periods are as follows:  
+ YYYY: This returns the year portion of the date.
+ Q: This returns the date of the first day of the quarter that the date belongs to. 
+ MM: This returns the month portion of the date.
+ DD: This returns the day portion of the date.
+ WK: This returns the week portion of the date. The week starts on Sunday in Amazon Quick Sight.
+ HH: This returns the hour portion of the date.
+ MI: This returns the minute portion of the date.
+ SS: This returns the second portion of the date.
+ MS: This returns the millisecond portion of the date.

 *date*   
A date field or a call to another function that outputs a date.

## Return type


Date

## Example


The following example returns a date representing the month of the order date.

```
truncDate('MM', orderDate)
```

The following are the given field values.

```
orderDate      
=========
2012-12-14T00:00:00.000Z  
2013-12-29T00:00:00.000Z
2012-11-15T00:00:00.000Z
```

For these field values, the following values are returned.

```
2012-12-01T00:00:00.000Z
2013-12-01T00:00:00.000Z
2012-11-01T00:00:00.000Z
```
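The same field can be truncated at a coarser or finer grain by changing the *period* argument. As a sketch, assuming the same `orderDate` field, the following returns the date of the first day of the week (Sunday) that each order date falls in.

```
truncDate('WK', orderDate)
```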

# Aggregate functions


Aggregate functions are only available during analysis and visualization. Each of these functions returns values grouped by the chosen dimension or dimensions. For each aggregation, there is also a conditional aggregation. These perform the same type of aggregation, based on a condition.

When a calculated field formula contains an aggregation, it becomes a custom aggregation. To make sure that your data is accurately displayed, Amazon Quick Sight applies the following rules:
+ Custom aggregations can't contain nested aggregate functions. For example, this formula doesn't work: `sum(avg(x)/avg(y))`. However, nesting nonaggregated functions inside or outside aggregate functions does work. For example, `ceil(avg(x))` works. So does `avg(ceil(x))`.
+ Custom aggregations can't contain both aggregated and nonaggregated fields, in any combination. For example, this formula doesn't work: `Sum(sales)+quantity`.
+ Filter groups can't contain both aggregated and nonaggregated fields.
+ Custom aggregations can't be converted to a dimension. They also can't be dropped into the field well as a dimension.
+ In a pivot table, custom aggregations can't be added to table calculations.
+ Scatter plots with custom aggregations need at least one dimension under **Group/Color** in the field wells.

For more information about supported functions and operators, see [Calculated field function and operator reference for Amazon Quick](https://docs.amazonaws.cn/quicksight/latest/user/calculated-field-reference.html). 

The aggregate functions for calculated fields in Quick Sight include the following.

**Topics**
+ [

# avg
](avg-function.md)
+ [

# avgIf
](avgIf-function.md)
+ [

# count
](count-function.md)
+ [

# countIf
](countIf-function.md)
+ [

# distinct\_count
](distinct_count-function.md)
+ [

# distinct\_countIf
](distinct_countIf-function.md)
+ [

# max
](max-function.md)
+ [

# maxIf
](maxIf-function.md)
+ [

# median
](median-function.md)
+ [

# medianIf
](medianIf-function.md)
+ [

# min
](min-function.md)
+ [

# minIf
](minIf-function.md)
+ [

# percentile
](percentile-function.md)
+ [

# percentileCont
](percentileCont-function.md)
+ [

# percentileDisc (percentile)
](percentileDisc-function.md)
+ [

# periodToDateAvg
](periodToDateAvg-function.md)
+ [

# periodToDateCount
](periodToDateCount-function.md)
+ [

# periodToDateMax
](periodToDateMax-function.md)
+ [

# periodToDateMedian
](periodToDateMedian-function.md)
+ [

# periodToDateMin
](periodToDateMin-function.md)
+ [

# periodToDatePercentile
](periodToDatePercentile-function.md)
+ [

# periodToDatePercentileCont
](periodToDatePercentileCont-function.md)
+ [

# periodToDateStDev
](periodToDateStDev-function.md)
+ [

# periodToDateStDevP
](periodToDateStDevP-function.md)
+ [

# periodToDateSum
](periodToDateSum-function.md)
+ [

# periodToDateVar
](periodToDateVar-function.md)
+ [

# periodToDateVarP
](periodToDateVarP-function.md)
+ [

# stdev
](stdev-function.md)
+ [

# stdevp
](stdevp-function.md)
+ [

# stdevIf
](stdevIf-function.md)
+ [

# stdevpIf
](stdevpIf-function.md)
+ [

# sum
](sum-function.md)
+ [

# sumIf
](sumIf-function.md)
+ [

# var
](var-function.md)
+ [

# varIf
](varIf-function.md)
+ [

# varp
](varp-function.md)
+ [

# varpIf
](varpIf-function.md)

# avg


The `avg` function averages the set of numbers in the specified measure, grouped by the chosen dimension or dimensions. For example, `avg(salesAmount)` returns the average for that measure grouped by the (optional) chosen dimension.

## Syntax


```
avg(decimal, [group-by level])
```

## Arguments


 *decimal*   
The argument must be a measure field; literal values don't work. Null values are omitted from the results.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example calculates the average sales.

```
avg({Sales})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the average sales at the Country level, but not across other dimensions (Region or Product) in the visual.

```
avg({Sales}, [{Country}])
```

![\[Average sales numbers are aggregated only at the country level.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/avg-function-example.png)


# avgIf


Based on a conditional statement, the `avgIf` function averages the set of numbers in the specified measure, grouped by the chosen dimension or dimensions. For example, `avgIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` returns the average for that measure grouped by the (optional) chosen dimension, if the condition evaluates to true.

## Syntax


```
avgIf(dimension or measure, condition) 
```

## Arguments


 *decimal*   
The argument must be a measure field; literal values don't work. Null values are omitted from the results.

 *condition*   
One or more conditions in a single statement.
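This entry doesn't include an example, so the following is a minimal sketch, assuming a `Sales` measure and a `Region` dimension. It averages sales only for rows where the region is South.

```
avgIf({Sales}, {Region} = 'South')
```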

# count


The `count` function calculates the number of values in a dimension or measure, grouped by the chosen dimension or dimensions. For example, `count(product type)` returns the total number of product types grouped by the (optional) chosen dimension, including any duplicates. The `count(sales)` function returns the total number of sales completed grouped by the (optional) chosen dimension, for example salesperson.

## Syntax


```
count(dimension or measure, [group-by level])
```

## Arguments


 *dimension or measure*   
The argument must be a dimension or measure field; literal values don't work. Null values are omitted from the results.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example calculates the count of sales by a specified dimension in the visual. In this example, the count of sales by month is shown.

```
count({Sales})
```

![\[The count of sales by month.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/count-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the count of sales at the Country level, but not across other dimensions (Region or Product) in the visual.

```
count({Sales}, [{Country}])
```

![\[Count of sales are aggregated only at the country level.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/count-function-example2.png)


# countIf


Based on a conditional statement, the `countIf` function calculates the number of values in a dimension or measure, grouped by the chosen dimension or dimensions.

## Syntax


```
countIf(dimension or measure, condition)
```

## Arguments


 *dimension or measure*   
The argument must be a field that contains a measure or a dimension; literal values don't work. Null values are omitted from the results.

 *condition*   
One or more conditions in a single statement.

## Return type


Integer

## Example


The following function returns a count of the sales transactions (`Revenue`) that meet the conditions, including any duplicates. 

```
countIf (
    Revenue,
    # Conditions
        CalendarDay >= ${BasePeriodStartDate} AND 
        CalendarDay <= ${BasePeriodEndDate} AND 
        SourcingType <> 'Indirect'
)
```
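
The same conditional count can be sketched in plain Python, with the condition expressed as a predicate over each row. The field names and date range are taken from the example above and are illustrative:

```python
from datetime import date

def count_if(rows, value_key, condition):
    """Count non-null values of value_key on rows where condition holds."""
    return sum(
        1
        for row in rows
        if row.get(value_key) is not None and condition(row)
    )

rows = [
    {"Revenue": 100.0, "CalendarDay": date(2021, 6, 1),  "SourcingType": "Direct"},
    {"Revenue": 250.0, "CalendarDay": date(2021, 6, 15), "SourcingType": "Indirect"},
    {"Revenue": 75.0,  "CalendarDay": date(2021, 7, 2),  "SourcingType": "Direct"},
]

start, end = date(2021, 6, 1), date(2021, 6, 30)

def in_period_direct(row):
    # Mirrors: CalendarDay >= start AND CalendarDay <= end AND SourcingType <> 'Indirect'
    return start <= row["CalendarDay"] <= end and row["SourcingType"] != "Indirect"

print(count_if(rows, "Revenue", in_period_direct))  # 1
```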

# distinct\_count


The `distinct_count` function calculates the number of distinct values in a dimension or measure, grouped by the chosen dimension or dimensions. For example, `distinct_count(product type)` returns the total number of unique product types grouped by the (optional) chosen dimension, without any duplicates. The `distinct_count(ship date)` function returns the total number of dates when products were shipped grouped by the (optional) chosen dimension, for example region.

## Syntax


```
distinct_count(dimension or measure, [group-by level])
```

## Arguments


 *dimension or measure*   
The argument must be a field that contains a measure or a dimension; literal values don't work. Null values are omitted from the results.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Example


The following example calculates the total number of dates when products were ordered, grouped by the chosen dimension in the visual, for example region.

```
distinct_count({Order Date})
```

![\[The total number of dates when products were ordered in each region.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/distinct_count-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the distinct count of order dates at the Country level, but not across other dimensions (Region) in the visual.

```
distinct_count({Order Date}, [Country])
```

![\[The total number of dates when products were ordered in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/distinct_count-function-example2.png)
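
The difference from `count` is that each group tracks a set of values, so duplicates collapse. A minimal Python sketch, with illustrative rows:

```python
from collections import defaultdict

def distinct_count_by(rows, value_key, group_keys):
    """Count distinct non-null values of value_key per group."""
    seen = defaultdict(set)
    for row in rows:
        value = row.get(value_key)
        if value is None:
            continue  # null values are omitted
        group = tuple(row[k] for k in group_keys)
        seen[group].add(value)   # a set, so duplicates are counted once
    return {group: len(values) for group, values in seen.items()}

rows = [
    {"Country": "US", "Order Date": "2021-06-01"},
    {"Country": "US", "Order Date": "2021-06-01"},  # duplicate date
    {"Country": "US", "Order Date": "2021-06-02"},
    {"Country": "DE", "Order Date": "2021-06-01"},
]

# Analogous to distinct_count({Order Date}, [Country])
print(distinct_count_by(rows, "Order Date", ["Country"]))
# {('US',): 2, ('DE',): 1}
```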


# distinct\_countIf


Based on a conditional statement, the `distinct_countIf` function calculates the number of distinct values in a dimension or measure, grouped by the chosen dimension or dimensions. For example, `distinct_countIf(product type)` returns the total number of unique product types grouped by the (optional) chosen dimension, without any duplicates. The `distinct_countIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` function returns the total number of distinct product revenue values, grouped by the (optional) chosen dimension, for example region, counting only rows where the condition evaluates to true.

## Syntax


```
distinct_countIf(dimension or measure, condition)
```

## Arguments


 *dimension or measure*   
The argument must be a field that contains a measure or a dimension; literal values don't work. Null values are omitted from the results.

 *condition*   
One or more conditions in a single statement.

# max


The `max` function returns the maximum value of the specified measure or date, grouped by the chosen dimension or dimensions. For example, `max(sales goal)` returns the maximum sales goals grouped by the (optional) chosen dimension.

## Syntax


```
max(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a field that contains a measure or a date; literal values don't work. Null values are omitted from the results.  
Maximum dates work only in the **Value** field well of tables and pivot tables. 

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the max sales value for each region. It is compared to the total, minimum, and median sales values.

```
max({Sales})
```

![\[The maximum sales value for each region.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/min-max-median-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the max sales at the Country level, but not across other dimensions (Region) in the visual.

```
max({Sales}, [Country])
```

![\[The maximum sales value in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/max-function-example2.png)


# maxIf


Based on a conditional statement, the `maxIf` function returns the maximum value of the specified measure, grouped by the chosen dimension or dimensions. For example, `maxIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` returns the maximum product revenue grouped by the (optional) chosen dimension, for rows where the condition evaluates to true.

## Syntax


```
maxIf(measure, condition)
```

## Arguments


 *measure*   
The argument must be a field that contains a measure; literal values don't work. Null values are omitted from the results.

 *condition*   
One or more conditions in a single statement.

# median


The `median` aggregation returns the median value of the specified measure, grouped by the chosen dimension or dimensions. For example, `median(revenue)` returns the median revenue grouped by the (optional) chosen dimension. 

## Syntax


```
median(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a field that contains a measure; literal values don't work. Null values are omitted from the results.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the median sales value for each region. It is compared to the total, maximum, and minimum sales.

```
median({Sales})
```

![\[The median sales value for each region.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/min-max-median-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the median sales at the Country level, but not across other dimensions (Region) in the visual.

```
median({Sales}, [Country])
```

![\[The median sales value in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/median-function-example2.png)
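
The grouped median can be sketched in plain Python with the standard `statistics` module; note that for an even number of values `statistics.median` interpolates between the middle pair. The rows below are illustrative:

```python
import statistics
from collections import defaultdict

def median_by(rows, value_key, group_keys):
    """Median of non-null value_key values per group."""
    groups = defaultdict(list)
    for row in rows:
        value = row.get(value_key)
        if value is None:
            continue  # null values are omitted
        groups[tuple(row[k] for k in group_keys)].append(value)
    return {group: statistics.median(values) for group, values in groups.items()}

rows = [
    {"Country": "US", "Sales": 10.0},
    {"Country": "US", "Sales": 30.0},   # even count: median averages to 20.0
    {"Country": "DE", "Sales": 12.5},
]

# Analogous to median({Sales}, [Country])
print(median_by(rows, "Sales", ["Country"]))  # {('US',): 20.0, ('DE',): 12.5}
```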


# medianIf


Based on a conditional statement, the `medianIf` aggregation returns the median value of the specified measure, grouped by the chosen dimension or dimensions. For example, `medianIf(Revenue,SaleDate >= ${BasePeriodStartDate} AND SaleDate <= ${BasePeriodEndDate})` returns the median revenue grouped by the (optional) chosen dimension, if the condition evaluates to true. 

## Syntax


```
medianIf(measure, condition)
```

## Arguments


 *measure*   
The argument must be a field that contains a measure; literal values don't work. Null values are omitted from the results.

 *condition*   
One or more conditions in a single statement.

# min


The `min` function returns the minimum value of the specified measure or date, grouped by the chosen dimension or dimensions. For example, `min(return rate)` returns the minimum rate of returns grouped by the (optional) chosen dimension.

## Syntax


```
min(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a field that contains a measure or a date; literal values don't work. Null values are omitted from the results.  
Minimum dates work only in the **Value** field well of tables and pivot tables. 

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the minimum sales value for each region. It is compared to the total, max, and median sales.

```
min({Sales})
```

![\[The minimum sales value for each region.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/min-max-median-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the minimum sales at the Country level, but not across other dimensions (Region) in the visual.

```
min({Sales}, [Country])
```

![\[The minimum sales value in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/min-function-example2.png)


# minIf


Based on a conditional statement, the `minIf` function returns the minimum value of the specified measure, grouped by the chosen dimension or dimensions. For example, `minIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` returns the minimum product revenue grouped by the (optional) chosen dimension, for rows where the condition evaluates to true.

## Syntax


```
minIf(measure, condition)
```

## Arguments


 *measure*   
The argument must be a field that contains a measure; literal values don't work. Null values are omitted from the results.

 *condition*   
One or more conditions in a single statement.

# percentile


The `percentile` function calculates the percentile of the values in a measure, grouped by the dimension that's in the field well. There are two varieties of percentile calculation available in Quick Sight:
+ [percentileCont](https://docs.amazonaws.cn/quicksight/latest/user/percentileCont-function.html) uses linear interpolation to determine the result.
+ [percentileDisc (percentile)](https://docs.amazonaws.cn/quicksight/latest/user/percentileDisc-function.html) uses actual values to determine the result. 

The `percentile` function is an alias of `percentileDisc`.

# percentileCont


The `percentileCont` function calculates percentile based on a continuous distribution of the numbers in the measure. It uses the grouping and sorting that are applied in the field wells. It answers questions like: What values are representative of this percentile? To return an exact percentile value that might not be present in your dataset, use `percentileCont`. To return the nearest percentile value that is present in your dataset, use `percentileDisc` instead.

## Syntax


```
percentileCont(expression, percentile, [group-by level])
```

## Arguments


 *measure*   
Specifies a numeric value to use to compute the percentile. The argument must be a measure or metric. Nulls are ignored in the calculation. 

 *percentile*   
The percentile value can be any numeric constant 0–100. A percentile value of 50 computes the median value of the measure. 

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Returns


The result of the function is a number. 

## Usage notes


The `percentileCont` function calculates a result based on a continuous distribution of the values from a specified measure. The result is computed by linear interpolation between the values after ordering them based on settings in the visual. It's different from `percentileDisc`, which simply returns a value from the set of values that are aggregated over. The result from `percentileCont` might or might not exist in the values from the specified measure.
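
The interpolation described above can be sketched in plain Python. The exact convention used here (fractional rank `(p / 100) * (n - 1)` over the sorted values) is an assumption for illustration, not a statement of Quick Sight's internal implementation:

```python
import math

def percentile_cont(values, p):
    """Percentile by linear interpolation over sorted values (p in 0..100)."""
    data = sorted(v for v in values if v is not None)  # nulls are ignored
    if not data:
        return None
    rank = (p / 100) * (len(data) - 1)   # fractional position in the sorted list
    lo, hi = math.floor(rank), math.ceil(rank)
    frac = rank - lo
    return data[lo] + (data[hi] - data[lo]) * frac

values = [10, 20, 30, 40]
print(percentile_cont(values, 50))   # 25.0, interpolated: not present in the data
print(percentile_cont(values, 100))  # 40, the maximum
```

This shows why the 50th percentile from `percentileCont` can differ from `percentileDisc`: 25.0 falls between two actual data points, whereas a discrete percentile must return one of them.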

## Examples of percentileCont

The following examples help explain how percentileCont works.

**Example Comparing median, `percentileCont`, and `percentileDisc`**  
The following example shows the median for a dimension (category) by using the `median`, `percentileCont`, and `percentileDisc` functions. The median value is the same as the `percentileCont` value. `percentileCont` interpolates a value, which might or might not be in the dataset. However, because `percentileDisc` always returns a value that exists in the dataset, the two results might not match. The last column in this example shows the difference between the two values. The code for each calculated field is as follows:  
+ `50%Cont = percentileCont( example , 50 )`
+ `median = median( example )`
+ `50%Disc = percentileDisc( example , 50 )`
+ `Cont-Disc = percentileCont( example , 50 ) - percentileDisc( example , 50 )`
+ `example = left( category, 1 )` (To make a simpler example, we used this expression to shorten the names of categories down to their first letter.)

```
  example     median       50%Cont      50%Disc      Cont-Disc
 -------- ----------- ------------ -------------- ------------ 
 A          22.48          22.48          22.24          0.24
 B          20.96          20.96          20.95          0.01
 C          24.92          24.92          24.92          0
 D          24.935         24.935         24.92          0.015
 E          14.48          14.48          13.99          0.49
```

**Example 100th percentile as maximum**  
The following example shows a variety of `percentileCont` values for the `example` field. The calculated fields `n%Cont` are defined as `percentileCont( {example} ,n)`. The interpolated values in each column represent the numbers that fall into that percentile bucket. In some cases, the actual data values match the interpolated values. For example, the column `100%Cont` shows the same value for every row because 6783.02 is the highest number.  

```
 example      50%Cont     75%Cont      99%Cont    100%Cont  
 --------- ----------- ----------- ------------ ----------- 

 A             20.97       84.307      699.99      6783.02  
 B             20.99       88.84       880.98      6783.02  
 C             20.99       90.48       842.925     6783.02  
 D             21.38       85.99       808.49      6783.02
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the 30th percentile based on a continuous distribution of the numbers at the Country level, but not across other dimensions (Region) in the visual.

```
percentileCont({Sales}, 30, [Country])
```

![\[The percentile of sales in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/percentileCont-function-example-lac.png)


# percentileDisc (percentile)


The `percentileDisc` function calculates the percentile based on the actual numbers in `measure`. It uses the grouping and sorting that are applied in the field wells. The `percentile` function is an alias of `percentileDisc`.

Use this function to answer the following question: Which actual data points are present in this percentile? To return the nearest percentile value that is present in your dataset, use `percentileDisc`. To return an exact percentile value that might not be present in your dataset, use `percentileCont` instead. 

## Syntax


```
percentileDisc(expression, percentile, [group-by level])
```

## Arguments


 *measure*   
Specifies a numeric value to use to compute the percentile. The argument must be a measure or metric. Nulls are ignored in the calculation. 

 *percentile*   
The percentile value can be any numeric constant 0–100. A percentile value of 50 computes the median value of the measure. 

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Returns


The result of the function is a number. 

## Usage notes


`percentileDisc` is an inverse distribution function that assumes a discrete distribution model. It takes a percentile value and a sort specification and returns an element from the given set. 

For a given percentile value `P`, `percentileDisc` uses the sorted values in the visual and returns the value with the smallest cumulative distribution value that is greater than or equal to `P`. 
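
That usage note translates almost directly into Python. A minimal sketch, using illustrative values:

```python
def percentile_disc(values, p):
    """Smallest element whose cumulative distribution is >= p/100."""
    data = sorted(v for v in values if v is not None)  # nulls are ignored
    if not data:
        return None
    n = len(data)
    for i, value in enumerate(data, start=1):
        if i / n >= p / 100:   # cumulative distribution of the i-th element
            return value
    return data[-1]

values = [10, 20, 30, 40]
print(percentile_disc(values, 50))   # 20, an actual data point
print(percentile_disc(values, 100))  # 40
```

Unlike `percentileCont`, the result is always one of the input values, which is why the `Cont-Disc` column in the examples below can be nonzero.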

## Examples of percentileDisc

The following examples help explain how percentileDisc works.

**Example Comparing median, `percentileDisc`, and `percentileCont`**  
The following example shows the median for a dimension (category) by using the `percentileCont`, `percentileDisc`, and `median` functions. The median value is the same as the `percentileCont` value. `percentileCont` interpolates a value, which might or might not be in the dataset. However, because `percentileDisc` always returns the closest value that exists in the dataset, the two results might not match. The last column in this example shows the difference between the two values. The code for each calculated field is as follows:  
+ `50%Cont = percentileCont( example , 50 )`
+ `median = median( example )`
+ `50%Disc = percentileDisc( example , 50 )`
+ `Cont-Disc = percentileCont( example , 50 ) - percentileDisc( example , 50 )`
+ `example = left( category, 1 )` (To make a simpler example, we used this expression to shorten the names of categories down to their first letter.)

```
 example     median       50%Cont      50%Disc      Cont-Disc
 -------- ----------- ------------ -------------- ------------ 
 A          22.48          22.48          22.24          0.24
 B          20.96          20.96          20.95          0.01
 C          24.92          24.92          24.92          0
 D          24.935         24.935         24.92          0.015
 E          14.48          14.48          13.99          0.49
```

**Example 100th percentile as maximum**  
The following example shows a variety of `percentileDisc` values for the `example` field. The calculated fields `n%Disc` are defined as `percentileDisc( {example} ,n)`. The values in each column are actual numbers that come from the dataset.   

```
 example     50%Disc      75%Disc        99%Disc      100%Disc
 -------- ----------- ------------ -------------- ------------ 
 A            20.97        73.98         699.99       6783.02
 B            42.19        88.84         820.08       6783.02
 C            30.52        90.48         733.44       6783.02
 D            41.38        85.99         901.29       6783.02
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the 30th percentile of the actual numbers in the measure at the Country level, but not across other dimensions (Region) in the visual.

```
percentile({Sales}, 30, [Country])
```

![\[The percentile of sales in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/percentile-function-example-lac.png)


# periodToDateAvg


The `periodToDateAvg` function averages the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateAvg(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which the PeriodToDate aggregation is computed.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which the periodToDate aggregation ends. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date average fare amount per payment type for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday. Quick Sight begins the week on Sundays, which in our example is 06-27-21.

```
periodToDateAvg(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDAvgResults.png)
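
The period-to-date windowing that all of the `periodToDate*` functions share can be sketched in plain Python for the `WEEK` granularity. The Sunday week start and the field names are taken from the example above; the rows are illustrative:

```python
from datetime import date, timedelta

def week_to_date_avg(rows, value_key, date_key, end_date):
    """Average value_key over rows from the week's start (Sunday) to end_date."""
    # Python's weekday(): Monday == 0 ... Sunday == 6; shift so Sunday starts the week.
    week_start = end_date - timedelta(days=(end_date.weekday() + 1) % 7)
    in_window = [
        row[value_key]
        for row in rows
        if week_start <= row[date_key] <= end_date and row[value_key] is not None
    ]
    return sum(in_window) / len(in_window) if in_window else None

rows = [
    {"fare_amount": 10.0, "pickup": date(2021, 6, 26)},  # Saturday: previous week
    {"fare_amount": 8.0,  "pickup": date(2021, 6, 27)},  # Sunday: week starts here
    {"fare_amount": 12.0, "pickup": date(2021, 6, 29)},  # Tuesday: in the window
]

# Analogous to periodToDateAvg(fare_amount, pickUpDatetime, WEEK, 06-30-2021)
print(week_to_date_avg(rows, "fare_amount", "pickup", date(2021, 6, 30)))  # 10.0
```

Swapping `sum(...) / len(...)` for `len`, `max`, `min`, or a median over `in_window` gives the corresponding `periodToDateCount`, `periodToDateMax`, `periodToDateMin`, and `periodToDateMedian` behavior.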


# periodToDateCount


The `periodToDateCount` function calculates the number of values in a dimension or measure, including duplicates, for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateCount(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which the PeriodToDate aggregation is computed.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which the periodToDate aggregation ends. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date count of fares per payment type for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday. Quick Sight begins the week on Sundays, which in our example is 06-27-21.

```
periodToDateCount(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDCountResults.png)


# periodToDateMax


The `periodToDateMax` function returns the maximum value of the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateMax(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which the PeriodToDate aggregation is computed.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which the periodToDate aggregation ends. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date maximum fare amount per payment type for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday. Quick Sight begins the week on Sundays, which in our example is 06-27-21.

```
periodToDateMax(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDMaxResults.png)


# periodToDateMedian


The `periodToDateMedian` function returns the median value of the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateMedian(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which the PeriodToDate aggregation is computed.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which the periodToDate aggregation ends. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date median fare amount per payment type for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday. Quick Sight begins the week on Sundays, which in our example is 06-27-21.

```
periodToDateMedian(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDMedianResults.png)


# periodToDateMin


The `periodToDateMin` function returns the minimum value of the specified measure or date for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateMin(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which the PeriodToDate aggregation is computed.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which the periodToDate aggregation ends. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date minimum fare amount per payment type for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday. Quick Sight begins the week on Sundays, which in our example is 06-27-21.

```
periodToDateMin(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDMinResults.png)


# periodToDatePercentile


The `periodToDatePercentile` function calculates the percentile based on the actual numbers in measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period. It uses the grouping and sorting that are applied in the field wells.

To return the nearest percentile value that is present in your dataset, use `periodToDatePercentile`. To return an exact percentile value that might not be present in your dataset, use `periodToDatePercentileCont` instead.

## Syntax


```
periodToDatePercentile(
	measure, 
	percentile, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *percentile*   
The percentile value can be any numeric constant 0-100. A percentile of 50 computes the median value of the measure.

 *dateTime*   
The date dimension over which the PeriodToDate aggregation is computed.

 *period*   
The time period over which the computation is performed. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which the periodToDate aggregation ends. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date 90th percentile of the fare amount per payment type for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday. Quick Sight begins the week on Sundays, which in our example is 06-27-21.

```
periodToDatePercentile(fare_amount, 90, pickupDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the return from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDPercentileResults.png)


# periodToDatePercentileCont


The `periodToDatePercentileCont` function calculates percentile based on a continuous distribution of the numbers in the measure for a given time granularity (for instance, a quarter) up to a point in time in that period. It uses the grouping and sorting that are applied in the field wells.

To return an exact percentile value that might not be present in your dataset, use `periodToDatePercentileCont`. To return the nearest percentile value that is present in your dataset, use `periodToDatePercentile` instead.

## Syntax


```
periodToDatePercentileCont(
	measure, 
	percentile, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *percentile*   
The percentile value can be any numeric constant 0-100. A percentile of 50 computes the median value of the measure.

 *dateTime*   
The date dimension over which you're computing `periodToDate` aggregations.

 *period*   
The time period over which you're performing the computation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to stop computing the `periodToDate` aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date 90th percentile of fare amount per payment type for the week of 06-30-21. For simplicity in the example, we filtered the data down to a single payment type. 06-30-21 is a Wednesday. Quick Sight begins the week on Sundays, so in our example the week starts on 06-27-21.

```
periodToDatePercentileCont(fare_amount, 90, pickupDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the return from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDContPercentileResults.png)
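
To make the distinction between the two functions concrete, the following Python sketch (illustrative only, not Quick Sight's implementation) contrasts a nearest-rank percentile, which always returns a value present in the data, with a linearly interpolated percentile, which can return a value that isn't.

```python
import math

def percentile_disc(values, p):
    """Nearest-rank percentile: returns a value that exists in the data,
    analogous to periodToDatePercentile."""
    xs = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(xs)))
    return xs[rank - 1]

def percentile_cont(values, p):
    """Linearly interpolated percentile: can return a value that is not in
    the data, analogous to periodToDatePercentileCont."""
    xs = sorted(values)
    pos = p / 100 * (len(xs) - 1)
    lo, hi = math.floor(pos), math.ceil(pos)
    return xs[lo] + (xs[hi] - xs[lo]) * (pos - lo)

fares = [4.0, 6.0, 8.0, 10.0]
percentile_disc(fares, 50)  # 6.0, an actual fare in the data
percentile_cont(fares, 50)  # 7.0, interpolated between 6.0 and 8.0
```

For the median of an even number of rows, the discrete version picks one of the two middle values, while the continuous version returns their midpoint.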


# periodToDateStDev


The `periodToDateStDev` function calculates the standard deviation of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time, based on a sample and relative to that period.

## Syntax


```
periodToDateStDev(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing `periodToDate` aggregations.

 *period*   
The time period over which you're performing the computation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to stop computing the `periodToDate` aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date sample standard deviation of fare amount per payment type, for the week of 06-30-21. For simplicity in the example, we filtered the data down to a single payment type. 06-30-21 is a Wednesday. Quick Sight begins the week on Sundays, so in our example the week starts on 06-27-21.

```
periodToDateStDev(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDStDevResults.png)


# periodToDateStDevP


The `periodToDateStDevP` function calculates the population standard deviation of the set of numbers in the specified measure, for a given time granularity (for instance, a quarter) up to a point in time in that period.

## Syntax


```
periodToDateStDevP(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing `periodToDate` aggregations.

 *period*   
The time period over which you're performing the computation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to stop computing the `periodToDate` aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date population standard deviation of fare amount per payment type, for the week of 06-30-21. For simplicity in the example, we filtered the data down to a single payment type. 06-30-21 is a Wednesday. Quick Sight begins the week on Sundays, so in our example the week starts on 06-27-21.

```
periodToDateStDevP(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDStDevPResults.png)


# periodToDateSum


The `periodToDateSum` function adds the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateSum(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing `periodToDate` aggregations.

 *period*   
The time period over which you're performing the computation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to stop computing the `periodToDate` aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date sum of fare amount per payment type, for the week of 06-30-21. For simplicity in the example, we filtered the data down to a single payment type. 06-30-21 is a Wednesday. Quick Sight begins the week on Sundays, so in our example the week starts on 06-27-21.

```
periodToDateSum(fare_amount, pickUpDateTime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results for the example, with illustrations.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDSumResults.png)
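
The week-to-date windowing in this example can be sketched in Python. This is an illustration of the row selection, not Quick Sight code; the row data and the helper name are made up, and it assumes a week that starts on Sunday, as in the example.

```python
from datetime import date, timedelta

def week_to_date_sum(rows, end_date):
    """Sum amounts for rows that fall in the same week as end_date, from the
    week's start (Sunday here, as in the example) through end_date."""
    # Python's weekday() is Monday=0 ... Sunday=6; shift so Sunday starts the week.
    days_since_sunday = (end_date.weekday() + 1) % 7
    week_start = end_date - timedelta(days=days_since_sunday)
    return sum(amount for d, amount in rows if week_start <= d <= end_date)

rows = [
    (date(2021, 6, 26), 10.0),  # Saturday: previous week, excluded
    (date(2021, 6, 27), 5.0),   # Sunday: week start, included
    (date(2021, 6, 29), 7.0),   # Tuesday: included
    (date(2021, 7, 1), 9.0),    # Thursday: after the end date, excluded
]
week_to_date_sum(rows, date(2021, 6, 30))  # 12.0
```

Moving the end date later in the same week widens the window, which is why period-to-date values grow until the period rolls over.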


# periodToDateVar


The `periodToDateVar` function calculates the sample variance of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time in that period.

## Syntax


```
periodToDateVar(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing `periodToDate` aggregations.

 *period*   
The time period over which you're performing the computation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to stop computing the `periodToDate` aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date sample variance of fare amount per payment type, for the week of 06-30-21. For simplicity in the example, we filtered the data down to a single payment type. 06-30-21 is a Wednesday. Quick Sight begins the week on Sundays, so in our example the week starts on 06-27-21.

```
periodToDateVar(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDVarResults.png)


# periodToDateVarP


The `periodToDateVarP` function calculates the population variance of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateVarP(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing `periodToDate` aggregations.

 *period*   
The time period over which you're performing the computation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to stop computing the `periodToDate` aggregation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date population variance of fare amount per payment type, for the week of 06-30-21. For simplicity in the example, we filtered the data down to a single payment type. 06-30-21 is a Wednesday. Quick Sight begins the week on Sundays, so in our example the week starts on 06-27-21.

```
periodToDateVarP(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDVarPResults.png)


# stdev


The `stdev` function calculates the standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a sample.

## Syntax


```
stdev(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the standard deviation of test scores for a class, using a sample of the test scores recorded.

```
stdev({Score})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the standard deviation of test scores at the subject level, but not across other dimensions (Class) in the visual.

```
stdev({Score}, [Subject])
```

# stdevp


The `stdevp` function calculates the population standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.

## Syntax


```
stdevp(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the standard deviation of test scores for a class using all the scores recorded.

```
stdevp({Score})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the standard deviation of test scores at the subject level, but not across other dimensions (Class) in the visual using all the scores recorded.

```
stdevp({Score}, [Subject])
```
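
The practical difference between `stdev` and `stdevp` is the denominator: the sample version divides the squared deviations by n - 1, the population version by n. A small sketch using Python's standard library, with hypothetical scores:

```python
import statistics

scores = [82, 91, 75, 88, 94]  # hypothetical test scores

sample_sd = statistics.stdev(scores)       # like stdev({Score}): divides by n - 1
population_sd = statistics.pstdev(scores)  # like stdevp({Score}): divides by n

# The sample estimate is never smaller than the population value.
assert population_sd <= sample_sd
```

Use the sample form when the rows are a sample of a larger population, and the population form when they are the entire population.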

# stdevIf


Based on a conditional statement, the `stdevIf` function calculates the standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a sample. 

## Syntax


```
stdevIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.

# stdevpIf


Based on a conditional statement, the `stdevpIf` function calculates the standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a biased population.

## Syntax


```
stdevpIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.

# sum


The `sum` function adds the set of numbers in the specified measure, grouped by the chosen dimension or dimensions. For example, `sum({profit amount})` returns the total profit amount grouped by the (optional) chosen dimension.

## Syntax


```
sum(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the sum of sales.

```
sum({Sales})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the sum of sales at the Country level, but not across other dimensions (Region and Product) in the visual.

```
sum(Sales, [Country])
```

![\[The sum of sales for each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/sum-function-example.png)
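
The group-by level argument can be pictured as computing the aggregate in a separate pass at its own level and then joining the result back to each row. The following Python sketch, with made-up rows, mimics the behavior of `sum(Sales, [Country])`:

```python
from collections import defaultdict

rows = [
    {"region": "EMEA", "country": "UK",      "product": "A", "sales": 10.0},
    {"region": "EMEA", "country": "UK",      "product": "B", "sales": 15.0},
    {"region": "EMEA", "country": "Germany", "product": "A", "sales": 20.0},
]

# First pass: total sales at the Country level only.
country_totals = defaultdict(float)
for r in rows:
    country_totals[r["country"]] += r["sales"]

# Each row then carries its country's total, regardless of the other
# dimensions (Region, Product) on the visual.
per_row = [country_totals[r["country"]] for r in rows]  # [25.0, 25.0, 20.0]
```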


# sumIf


Based on a conditional statement, the `sumIf` function adds the set of numbers in the specified measure, grouped by the chosen dimension or dimensions. For example, `sumIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` returns the total revenue amount grouped by the (optional) chosen dimension, if the condition evaluates to true.

## Syntax


```
sumIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.

## Examples


The following example uses a calculated field with `sumIf` to display the sales amount if `Segment` is equal to `SMB`.

```
sumIf(Sales, Segment='SMB')
```

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/sumIfCalc.png)


The following example uses a calculated field with `sumIf` to display the sales amount if `Segment` is equal to `SMB` and `Order Date` is on or after January 1, 2022.

```
sumIf(Sales, Segment='SMB' AND {Order Date} >= '2022-01-01')
```
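
In plain terms, `sumIf` filters rows by the condition before summing. A Python sketch of the two examples above, with made-up order rows:

```python
from datetime import date

orders = [
    {"segment": "SMB",        "order_date": date(2022, 3, 1),  "sales": 100.0},
    {"segment": "Enterprise", "order_date": date(2022, 5, 9),  "sales": 250.0},
    {"segment": "SMB",        "order_date": date(2021, 11, 2), "sales": 40.0},
]

# sumIf(Sales, Segment='SMB')
smb_sales = sum(o["sales"] for o in orders if o["segment"] == "SMB")  # 140.0

# sumIf(Sales, Segment='SMB' AND {Order Date} >= '2022-01-01')
smb_2022 = sum(
    o["sales"]
    for o in orders
    if o["segment"] == "SMB" and o["order_date"] >= date(2022, 1, 1)
)  # 100.0
```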

# var


The `var` function calculates the sample variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.

## Syntax


```
var(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the variance of a sample of test scores.

```
var({Scores})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example returns the variance of a sample of test scores at the subject level, but not across other dimensions (Class) in the visual.

```
var({Scores}, [Subject])
```

# varIf


Based on a conditional statement, the `varIf` function calculates the variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a sample.

## Syntax


```
varIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.

# varp


The `varp` function calculates the population variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.

## Syntax


```
varp(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the variance of a population of test scores.

```
varp({Scores})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example returns the variance of a population of test scores at the subject level, but not across other dimensions (Class) in the visual.

```
varp({Scores}, [Subject])
```

# varpIf


Based on a conditional statement, the `varpIf` function calculates the variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a biased population.

## Syntax


```
varpIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.

# Table calculation functions


When you are analyzing data in a specific visual, you can apply table calculations to the current set of data to discover how dimensions influence measures or each other. *Visualized data* is your result set based on your current dataset, with all the filters, field selections, and customizations applied. To see exactly what this result set is, you can export your visual to a file. A *table calculation function* performs operations on the data to reveal relationships between fields. 

In this section, you can find a list of the functions available in table calculations that you can perform on visualized data in Amazon Quick Sight. 

To view a list of functions sorted by category, with brief definitions, see [Functions by category](https://docs.amazonaws.cn/quicksight/latest/user/functions-by-category.html). 

**Topics**
+ [

# difference
](difference-function.md)
+ [

# distinctCountOver
](distinctCountOver-function.md)
+ [

# lag
](lag-function.md)
+ [

# lead
](lead-function.md)
+ [

# percentDifference
](percentDifference-function.md)
+ [

# avgOver
](avgOver-function.md)
+ [

# countOver
](countOver-function.md)
+ [

# maxOver
](maxOver-function.md)
+ [

# minOver
](minOver-function.md)
+ [

# percentileOver
](percentileOver-function.md)
+ [

# percentileContOver
](percentileContOver-function.md)
+ [

# percentileDiscOver
](percentileDiscOver-function.md)
+ [

# percentOfTotal
](percentOfTotal-function.md)
+ [

# periodOverPeriodDifference
](periodOverPeriodDifference-function.md)
+ [

# periodOverPeriodLastValue
](periodOverPeriodLastValue-function.md)
+ [

# periodOverPeriodPercentDifference
](periodOverPeriodPercentDifference-function.md)
+ [

# periodToDateAvgOverTime
](periodToDateAvgOverTime-function.md)
+ [

# periodToDateCountOverTime
](periodToDateCountOverTime-function.md)
+ [

# periodToDateMaxOverTime
](periodToDateMaxOverTime-function.md)
+ [

# periodToDateMinOverTime
](periodToDateMinOverTime-function.md)
+ [

# periodToDateSumOverTime
](periodToDateSumOverTime-function.md)
+ [

# stdevOver
](stdevOver-function.md)
+ [

# stdevpOver
](stdevpOver-function.md)
+ [

# varOver
](varOver-function.md)
+ [

# varpOver
](varpOver-function.md)
+ [

# sumOver
](sumOver-function.md)
+ [

# denseRank
](denseRank-function.md)
+ [

# rank
](rank-function.md)
+ [

# percentileRank
](percentileRank-function.md)
+ [

# runningAvg
](runningAvg-function.md)
+ [

# runningCount
](runningCount-function.md)
+ [

# runningMax
](runningMax-function.md)
+ [

# runningMin
](runningMin-function.md)
+ [

# runningSum
](runningSum-function.md)
+ [

# firstValue
](firstValue-function.md)
+ [

# lastValue
](lastValue-function.md)
+ [

# windowAvg
](windowAvg-function.md)
+ [

# windowCount
](windowCount-function.md)
+ [

# windowMax
](windowMax-function.md)
+ [

# windowMin
](windowMin-function.md)
+ [

# windowSum
](windowSum-function.md)

# difference


The `difference` function calculates the difference between a measure based on one set of partitions and sorts, and a measure based on another. 

The `difference` function is supported for use with analyses based on SPICE and direct query datasets.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
difference
	(
	     measure 
	     ,[ sortorder_field ASC_or_DESC, ... ]
	     ,lookup_index
	     ,[ partition_field, ... ] 
	)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the difference for. 

 *sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *lookup index*   
The lookup index can be positive or negative, indicating a following row in the sort (positive) or a previous row in the sort (negative). The lookup index can be 1–2,147,483,647. For the engines MySQL, MariaDB, and Amazon Aurora MySQL-Compatible Edition, the lookup index is limited to 1.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the difference in `sum({Billed Amount})`, sorted ascending by `Customer Region`, compared to the next row, and partitioned by `Service Line`.

```
difference(
     sum( {Billed Amount} ), 
     [{Customer Region} ASC],
     1,
     [{Service Line}]
)
```

The following example calculates the difference in `Billed Amount` compared to the next row, sorted by (`[{Customer Region} ASC]`). The fields in the table calculation are in the field wells of the visual.

```
difference(
     sum( {Billed Amount} ), 
     [{Customer Region} ASC],
     1
)
```

The red highlights show how each amount is added (a + b = c) to show the difference between amounts a and c. 

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/differenceCalc.png)
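
One way to picture the calculation: after the visual aggregates and sorts the rows, each row looks up the row at the given offset and subtracts the current value from it. A Python sketch with hypothetical, already-aggregated totals (this illustrates the offset arithmetic, not Quick Sight's internals):

```python
# Hypothetical rows already aggregated by the visual's dimensions and sorted
# ascending by region, within one partition (one Service Line).
billed = [("APAC", 300.0), ("EMEA", 450.0), ("US", 500.0)]

def difference(values, i, lookup_index):
    """Value at the lookup offset minus the current value; None at the edge."""
    j = i + lookup_index
    if 0 <= j < len(values):
        return values[j][1] - values[i][1]
    return None

diffs = [difference(billed, i, 1) for i in range(len(billed))]
# [150.0, 50.0, None]: each row compared to the next row in the sort.
```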


# distinctCountOver


The `distinctCountOver` function calculates the distinct count of the operand partitioned by the specified attributes at a specified level. Supported levels are `PRE_FILTER` and `PRE_AGG`. The operand must be unaggregated.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
distinctCountOver
(
  measure or dimension field 
  ,[ partition_field, ... ]  
  ,calculation level 
)
```

## Arguments


 *measure or dimension field*   
The measure or dimension that you want to do the calculation for, for example `{Sales Amt}`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
If this argument is blank, the level defaults to `POST_AGG_FILTER`, which isn't a valid level for this operation and results in an error message. For more information, see [Using level-aware calculations in Amazon Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example gets the distinct count of `Sales` partitioned over `City` and `State` at the `PRE_AGG` level.

```
distinctCountOver
(
  Sales, 
  [City, State], PRE_AGG
)
```
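
Conceptually, each row reports the number of distinct operand values within its partition. A Python sketch with made-up rows (illustrative only):

```python
rows = [
    {"city": "Seattle",  "state": "WA", "sales": 10},
    {"city": "Seattle",  "state": "WA", "sales": 10},
    {"city": "Seattle",  "state": "WA", "sales": 20},
    {"city": "Portland", "state": "OR", "sales": 10},
]

def distinct_count_over(rows, value_key, partition_keys):
    """For each row, the distinct count of value_key within its partition."""
    partitions = {}
    for r in rows:
        key = tuple(r[k] for k in partition_keys)
        partitions.setdefault(key, set()).add(r[value_key])
    return [len(partitions[tuple(r[k] for k in partition_keys)]) for r in rows]

distinct_count_over(rows, "sales", ["city", "state"])  # [2, 2, 2, 1]
```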

# lag


The `lag` function calculates the lag (previous) value for a measure based on specified partitions and sorts.

`lag` is supported for use with analyses based on SPICE and direct query datasets.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
lag
(
 measure
 ,[ sortorder_field ASC_or_DESC, ... ] 
 ,lookup_index
 ,[ partition_field, ... ] 
)
```

## Arguments


*measure*   
The measure that you want to get the lag for. This can include an aggregate, for example `sum({Sales Amt})`.

*sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

*lookup index*   
The lookup index can be positive or negative, indicating a following row in the sort (positive) or a previous row in the sort (negative). The lookup index can be 1–2,147,483,647. For the engines MySQL, MariaDB, and Amazon Aurora MySQL-Compatible Edition, the lookup index is limited to just 1.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the previous `sum(sales)`, partitioned by the state of origin, in the ascending sort order on `cancellation_code`.

```
lag
(
     sum(Sales), 
     [cancellation_code ASC], 
     1, 
     [origin_state_nm]
)
```

The following example uses a calculated field with `lag` to display sales amount for the previous row next to the amount for the current row, sorted by `Order Date`. The fields in the table calculation are in the field wells of the visual.

```
lag(
     sum({Sales}),
     [{Order Date} ASC],
     1
)
```

The following screenshot shows the results of the example.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/lagCalc.png)


The following example uses a calculated field with `lag` to display the sales amount for the previous row next to the amount for the current row, sorted by `Order Date` partitioned by `Segment`.

```
lag
	(
		sum(Sales),
		[Order Date ASC],
		1, [Segment]
	)
```

The following screenshot shows the results of the example.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/lagCalc2.png)
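
The partitioned behavior in the last example can be sketched in Python: within each partition, rows are sorted and each one picks up the previous row's value (a lookup index of 1), with no value for the first row. Illustrative only, with made-up rows:

```python
rows = [
    {"segment": "SMB",        "order_date": "2023-01", "sales": 100.0},
    {"segment": "SMB",        "order_date": "2023-02", "sales": 120.0},
    {"segment": "Enterprise", "order_date": "2023-01", "sales": 300.0},
    {"segment": "Enterprise", "order_date": "2023-02", "sales": 280.0},
]

def lag_over(rows, value_key, sort_key, partition_key):
    """Previous row's value within each partition, in sort order; None first."""
    out = {}
    partitions = {}
    for r in rows:
        partitions.setdefault(r[partition_key], []).append(r)
    for part in partitions.values():
        prev = None
        for r in sorted(part, key=lambda r: r[sort_key]):
            out[id(r)] = prev
            prev = r[value_key]
    return [out[id(r)] for r in rows]

lag_over(rows, "sales", "order_date", "segment")
# [None, 100.0, None, 300.0]
```

The `lead` function is the mirror image: within each partition it looks at the following row instead of the previous one.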


# lead


The `lead` function calculates the lead (following) value for a measure based on specified partitions and sorts.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
lead
(
     measure
     ,[ sortorder_field ASC_or_DESC, ... ]  
     ,lookup_index
     ,[ partition_field, ... ]
)
```

## Arguments


*measure*   
The measure that you want to get the lead for. This can include an aggregate, for example `sum({Sales Amt})`.

*sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

*lookup index*   
The lookup index can be positive or negative, indicating a following row in the sort (positive) or a previous row in the sort (negative). The lookup index can be 1–2,147,483,647. For the engines MySQL, MariaDB, and Amazon Aurora MySQL-Compatible Edition, the lookup index is limited to just 1.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the next `sum(sales)`, partitioned by the state of origin, in the ascending sort order on `cancellation_code`.

```
lead
(
     sum(sales), 
     [cancellation_code ASC], 
     1, 
     [origin_state_nm]
)
```

The following example uses a calculated field with `lead` to display the amount for the next row beside the amount for the current row, sorted by `Customer Segment`. The fields in the table calculation are in the field wells of the visual.

```
lead(
     sum({Billed Amount}),
     [{Customer Segment} ASC],
     1
)
```

The following screenshot shows the results of the example.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/leadCalc.png)


# percentDifference


The `percentDifference` function calculates the percentage difference between the current value and a comparison value, based on partitions, sorts, and lookup index. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
percentDifference
(
  measure 
  ,[ sortorder_field ASC_or_DESC, ... ]  
  ,lookup index
  ,[ partition_field, ... ] 
)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the percent difference for. 

 *sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *lookup index*   
The lookup index can be positive or negative, indicating a following row in the sort (positive) or a previous row in the sort (negative). The lookup index can be 1–2,147,483,647. For the MySQL, MariaDB, and Amazon Aurora MySQL-Compatible Edition engines, the lookup index is limited to 1.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the percentage difference between the current and the previous `sum(amount)`, sorted in ascending order of `sum(amount)` and partitioned by `State`.

```
percentDifference
(
  sum(amount), 
  [sum(amount) ASC],
  -1, 
  [State]
)
```

The following example calculates the percentage difference between each `Billed Amount` and the next one, sorted in ascending order of `Customer Region` (`[{Customer Region} ASC]`). The fields in the table calculation are in the field wells of the visual.

```
percentDifference
(
  sum( {Billed Amount} ), 
  [{Customer Region} ASC],
  1
)
```

The following screenshot shows the results of the example. The red letters show that the total `Billed Amount` for the `Customer Region` **APAC** is 24 percent less than the amount for the **EMEA** region.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/percentDifference.png)


# avgOver


The `avgOver` function calculates the average of a measure partitioned by a list of dimensions. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
avgOver
(
     measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

The following example shows the average `Billed Amount` over `Customer Region`. The fields in the table calculation are in the field wells of the visual.

```
avgOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```

The following screenshot shows the results of the example. With the addition of `Service Line`, the total amount billed for each is displayed, and the average of these three values displays in the calculated field.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/avgOver.png)


## Arguments


 *measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*  
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example gets the average `sum(Sales)` partitioned over `City` and `State`. 

```
avgOver
(
     sum(Sales), 
     [City, State]
)
```
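The optional calculation level changes when the average is computed. The following is a hedged sketch, assuming a dataset with `Sales` and `State` fields; because the level is `PRE_AGG`, the measure is used without an aggregation, per the argument description above.

```
avgOver
(
     Sales,
     [State],
     PRE_AGG
)
```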

# countOver


The `countOver` function calculates the count of a dimension or measure partitioned by a list of dimensions. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
countOver
(
  measure or dimension field 
  ,[ partition_field, ... ]  
  ,calculation level 
)
```

## Arguments


 *measure or dimension field*   
The measure or dimension that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example gets the count of `Sales` partitioned over `City` and `State`. 

```
countOver
(
  Sales, 
  [City, State]
)
```

The following example gets the count of `{County}` partitioned over `City` and `State`. 

```
countOver
(
  {County}, 
  [City, State]
)
```

The following example shows the count of `Billed Amount` over `Customer Region`. The fields in the table calculation are in the field wells of the visual.

```
countOver
(
  sum({Billed Amount}),
  [{Customer Region}]
)
```

The following screenshot shows the results of the example. Because there are no other fields involved, the count is one for each region.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/countOver1.png)


If you add additional fields, the count changes. In the following screenshot, we add `Customer Segment` and `Service Line`. Each of those fields contains three unique values. Because the calculation is still partitioned only by `Customer Region`, each region now spans 3 segments × 3 service lines = 9 rows, so the calculated field shows 9.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/countOver2.png)


If you add the two additional fields to the partitioning fields in the calculated field, `countOver( sum({Billed Amount}), [{Customer Region}, {Customer Segment}, {Service Line}] )`, then the count is again 1 for each row.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/countOver.png)
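Like the other calculations in this family, `countOver` accepts the optional calculation level. The following sketch (assuming a dataset with `Sales` and `State` fields) counts rows per state before the dataset filters are applied; the measure is not aggregated because the level is `PRE_FILTER`.

```
countOver
(
  Sales,
  [State],
  PRE_FILTER
)
```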


# maxOver


The `maxOver` function calculates the maximum of a measure or date partitioned by a list of dimensions. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
maxOver
(
     measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

## Arguments


 *measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example calculates the maximum `sum(Sales)`, partitioned by `City` and `State`.

```
maxOver
(
     sum(Sales), 
     [City, State]
)
```

The following example shows the maximum `Billed Amount` over `Customer Region`. The fields in the table calculation are in the field wells of the visual.

```
maxOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```

The following screenshot shows the results of the example. With the addition of `Service Line`, the total amount billed for each is displayed, and the maximum of these three values displays in the calculated field.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/maxOver.png)


# minOver


The `minOver` function calculates the minimum of a measure or date partitioned by a list of dimensions. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
minOver
(
     measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

## Arguments


*measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example calculates the minimum `sum(Sales)`, partitioned by `City` and `State`.

```
minOver
(     
     sum(Sales), 
     [City, State]
)
```

The following example shows the minimum `Billed Amount` over `Customer Region`. The fields in the table calculation are in the field wells of the visual.

```
minOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```

The following screenshot shows the results of the example. With the addition of `Service Line`, the total amount billed for each is displayed, and the minimum of these three values displays in the calculated field.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/minOver.png)


# percentileOver


The `percentileOver` function calculates the *n*th percentile of a measure partitioned by a list of dimensions. There are two varieties of the `percentileOver` calculation available in Quick Sight:
+ [percentileContOver](https://docs.amazonaws.cn/quicksight/latest/user/percentileContOver-function.html) uses linear interpolation to determine the result.
+ [percentileDiscOver](https://docs.amazonaws.cn/quicksight/latest/user/percentileDiscOver-function.html) uses actual values to determine the result. 

The `percentileOver` function is an alias of `percentileDiscOver`.
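Because `percentileOver` is an alias of `percentileDiscOver`, the following two expressions are equivalent. This is a sketch assuming a dataset with `Sales`, `City`, and `State` fields; both return the median (50th percentile) of `Sales`, partitioned by `City` and `State`.

```
percentileOver ( Sales, 50, [City, State] )
percentileDiscOver ( Sales, 50, [City, State] )
```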

# percentileContOver


The `percentileContOver` function calculates the percentile based on a continuous (linearly interpolated) distribution of the numbers in `measure`. It uses the grouping and sorting that are applied in the field wells. The result is partitioned by the specified dimension at the specified calculation level. 

Use this function to answer the following question: What is the exact value of this percentile, even if no actual data point falls exactly on it? To return the nearest percentile value that is present in your dataset, use `percentileDiscOver`. To return an exact percentile value that might not be present in your dataset, use `percentileContOver` instead. 

## Syntax


```
percentileContOver (
    measure
  , percentile-n
  , [partition-by, …]
  , calculation-level
)
```

## Arguments


 *measure*   
Specifies a numeric value to use to compute the percentile. The argument must be a measure or metric. Nulls are ignored in the calculation. 

 *percentile-n*   
The percentile value can be any numeric constant 0–100. A percentile value of 50 computes the median value of the measure. 

 *partition-by*   
(Optional) One or more dimensions that you want to partition by, separated by commas. Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation-level*   
 Specifies where to perform the calculation in relation to the order of evaluation. There are three supported calculation levels:  
+ `PRE_FILTER`
+ `PRE_AGG`
+ `POST_AGG_FILTER` (default) – To use this calculation level, specify an aggregation on `measure`, for example `sum(measure)`.
`PRE_FILTER` and `PRE_AGG` are applied before the aggregation occurs in a visualization. For these two calculation levels, you can't specify an aggregation on `measure` in the calculated field expression. To learn more about calculation levels and when they apply, see [Order of evaluation in Amazon Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/order-of-evaluation-quicksight.html) and [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Returns


The result of the function is a number. 

## Example of percentileContOver

The following example helps explain how `percentileContOver` works.

**Example Comparing calculation levels for the median**  
The following example shows the median for a dimension (category) by using different calculation levels with the `percentileContOver` function. The percentile is 50. The dataset is filtered by a region field. The code for each calculated field is as follows:  
+ `example = left( category, 1 )` (A simplified example.)
+ `pre_agg = percentileContOver ( {Revenue} , 50 , [ example ] , PRE_AGG)`
+ `pre_filter = percentileContOver ( {Revenue} , 50 , [ example ] , PRE_FILTER) `
+ `post_agg_filter = percentileContOver ( sum ( {Revenue} ) , 50 , [ example ], POST_AGG_FILTER )`

```
example   pre_filter     pre_agg      post_agg_filter
------------------------------------------------------
0            106,728     119,667            4,117,579
1            102,898      95,946            2,307,547
2             97,807      93,963              554,570  
3            101,043     112,585            2,709,057
4             96,533      99,214            3,598,358
5            106,293      97,296            1,875,648
6             97,118      69,159            1,320,672
7            100,201      90,557              969,807
```

# percentileDiscOver


The `percentileDiscOver` function calculates the percentile based on the actual numbers in `measure`. It uses the grouping and sorting that are applied in the field wells. The result is partitioned by the specified dimension at the specified calculation level. The `percentileOver` function is an alias of `percentileDiscOver`.

Use this function to answer the following question: Which actual data points are present in this percentile? To return the nearest percentile value that is present in your dataset, use `percentileDiscOver`. To return an exact percentile value that might not be present in your dataset, use `percentileContOver` instead. 

## Syntax


```
percentileDiscOver (
     measure
   , percentile-n
   , [partition-by, …]
   , calculation-level
)
```

## Arguments


 *measure*   
Specifies a numeric value to use to compute the percentile. The argument must be a measure or metric. Nulls are ignored in the calculation. 

 *percentile-n*   
The percentile value can be any numeric constant 0–100. A percentile value of 50 computes the median value of the measure. 

 *partition-by*   
(Optional) One or more dimensions that you want to partition by, separated by commas. Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation-level*   
 Specifies where to perform the calculation in relation to the order of evaluation. There are three supported calculation levels:  
+ `PRE_FILTER`
+ `PRE_AGG`
+ `POST_AGG_FILTER` (default) – To use this calculation level, specify an aggregation on `measure`, for example `sum(measure)`.
`PRE_FILTER` and `PRE_AGG` are applied before the aggregation occurs in a visualization. For these two calculation levels, you can't specify an aggregation on `measure` in the calculated field expression. To learn more about calculation levels and when they apply, see [Order of evaluation in Amazon Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/order-of-evaluation-quicksight.html) and [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Returns


The result of the function is a number. 

## Example of percentileDiscOver

The following example helps explain how `percentileDiscOver` works.

**Example Comparing calculation levels for the median**  
The following example shows the median for a dimension (category) by using different calculation levels with the `percentileDiscOver` function. The percentile is 50. The dataset is filtered by a region field. The code for each calculated field is as follows:  
+ `example = left( category, 1 )` (A simplified example.)
+ `pre_agg = percentileDiscOver ( {Revenue} , 50 , [ example ] , PRE_AGG)`
+ `pre_filter = percentileDiscOver ( {Revenue} , 50 , [ example ] , PRE_FILTER) `
+ `post_agg_filter = percentileDiscOver ( sum ( {Revenue} ) , 50 , [ example ], POST_AGG_FILTER )`

```
example   pre_filter     pre_agg      post_agg_filter
------------------------------------------------------
0            106,728     119,667            4,117,579
1            102,898      95,946            2,307,547
2             97,629      92,046              554,570  
3            100,867     112,585            2,709,057
4             96,416      96,649            3,598,358
5            106,293      97,296            1,875,648
6             97,118      64,395            1,320,672
7             99,915      90,557              969,807
```

**Example The median**  
The following example calculates the median (the 50th percentile) of `Sales` partitioned by `City` and `State`.   

```
percentileDiscOver
(
  Sales, 
  50,
  [City, State]
)
```
The following example calculates the 98th percentile of `sum({Billed Amount})` partitioned by `Customer Region`. The fields in the table calculation are in the field wells of the visual.  

```
percentileDiscOver
(
  sum({Billed Amount}), 
  98,
  [{Customer Region}]
)
```
The following screenshot shows how these two examples look on a chart.   

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/percentilOver-50-98.png)


# percentOfTotal


The `percentOfTotal` function calculates the percentage a measure contributes to the total, based on the dimensions specified. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
percentOfTotal
(
     measure 
     ,[ partition_field, ... ] 
)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the percent of total for. Currently, the `distinct count` aggregation is not supported for `percentOfTotal`.

 *partition field*  
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example creates a calculation for the percent of total `Sales` contributed by each `State`.

```
percentOfTotal
(
     sum(Sales), 
     [State]
)
```

The following example calculates the percent that each `Billed Amount` contributes to the total `Billed Amount` within its `Service Line` partition. The fields in the table calculation are in the field wells of the visual.

```
percentOfTotal
(
     sum( {Billed Amount} ), 
     [{Service Line}]
)
```

The following screenshot shows the results of the example. The red highlights show that the partition field with the value "`Billing`" has three entries, one for each region. The total billed amount for this service line is divided into three percentages, which total 100 percent. Percentages are rounded and might not always add up to exactly 100 percent.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/percentOfTotal.png)


# periodOverPeriodDifference


The `periodOverPeriodDifference` function calculates the difference of a measure over two different time periods, as specified by the period granularity and offset. Unlike a difference calculation, this function uses a date-based offset instead of a fixed-size offset. This ensures that only the correct dates are compared, even if data points are missing from the dataset.

## Syntax


```
periodOverPeriodDifference(
	measure, 
	date, 
	period, 
	offset)
```

## Arguments


 *measure*   
An aggregated measure that you want to perform the periodOverPeriod calculation on.

 *dateTime*   
The date dimension over which you're computing period-over-period calculations.

 *period*   
(Optional) The time period over which you're computing the calculation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
The default value is the visual's date dimension granularity.

 *offset*   
(Optional) The offset can be a positive or negative integer representing the prior time period (specified by `period`) that you want to compare against. For instance, a period of `QUARTER` with an offset of 1 means comparing against the previous quarter.  
The default value is 1.

## Example


The following example uses a calculated field `PeriodOverPeriod` to display the sales amount difference from the previous day.

```
periodOverPeriodDifference(sum(Sales), {Order Date})
```

![\[This is an image of the return of the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/MonthOverMonthDifference.png)


The following example uses a calculated field `PeriodOverPeriod` to display the sales amount difference from two months prior. This example compares sales for `Mar2020` with sales for `Jan2020`.

```
periodOverPeriodDifference(sum(Sales), {Order Date}, MONTH, 2)
```

![\[This is an image of the return of the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/MonthOverMonthDifference2.png)
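A quarter-over-quarter comparison follows the same pattern. The following sketch assumes the same `Sales` and `Order Date` fields; it compares each quarter's sales against the previous quarter.

```
periodOverPeriodDifference(sum(Sales), {Order Date}, QUARTER, 1)
```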


# periodOverPeriodLastValue


The `periodOverPeriodLastValue` function calculates the last (previous) value of a measure from the previous time period, as specified by the period granularity and offset. This function uses a date-based offset instead of a fixed-size offset. This ensures that only the correct dates are compared, even if data points are missing from the dataset.

## Syntax


```
periodOverPeriodLastValue(
	measure, 
	date, 
	period, 
	offset)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the previous value for.

 *date*   
The date dimension over which you're computing periodOverPeriod calculations.

 *period*   
(Optional) The time period over which you're computing the calculation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
The default value is the granularity of the visual aggregation.

 *offset*   
(Optional) The offset can be a positive or negative integer representing the prior time period (specified by `period`) that you want to compare against. For instance, a period of `QUARTER` with an offset of 1 means comparing against the previous quarter.  
The default value is 1.

## Example


The following example calculates the month over month value in sales with the visual dimension granularity and default offset of 1.

```
periodOverPeriodLastValue(sum(Sales), {Order Date})
```

The following example calculates the month over month value in sales with a fixed granularity of `MONTH` and fixed offset of 1.

```
periodOverPeriodLastValue(sum(Sales), {Order Date},MONTH, 1)
```

![\[This is an image of the return of the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/MonthOverMonthLastValue.png)


# periodOverPeriodPercentDifference


The `periodOverPeriodPercentDifference` function calculates the percent difference of a measure over two different time periods, as specified by the period granularity and offset. Unlike `percentDifference`, this function uses a date-based offset instead of a fixed-size offset. This ensures that only the correct dates are compared, even if data points are missing from the dataset.

## Syntax


```
periodOverPeriodPercentDifference(
	measure, 
	date, 
	period, 
	offset)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the difference for.

 *date*   
The date dimension over which you're computing periodOverPeriod calculations.

 *period*   
(Optional) The time period over which you're computing the calculation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
The default value is the granularity of the visual aggregation.

 *offset*   
(Optional) The offset can be a positive or negative integer representing the prior time period (specified by `period`) that you want to compare against. For instance, a period of `QUARTER` with an offset of 1 means comparing against the previous quarter.  
The default value is 1.

## Example


The following example calculates the month over month percent difference in sales with the visual dimension granularity and default offset of 1.

```
periodOverPeriodPercentDifference(sum(Sales),{Order Date})
```

The following example calculates the month over month percent difference in sales with a fixed granularity of `MONTH` and fixed offset of 1.

```
periodOverPeriodPercentDifference(sum(Sales), {Order Date}, MONTH, 1)
```

![\[This is an image of the return of the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/MonthOverMonthPercentDifference.png)


# periodToDateAvgOverTime


The `periodToDateAvgOverTime` function calculates the average of a measure for a given time granularity (for instance, a quarter) up to a point in time.

## Syntax


```
periodToDateAvgOverTime(
	measure, 
	dateTime,
	period)
```

## Arguments


 *measure*   
An aggregated measure that you want to do the calculation for.

 *dateTime*   
The date dimension over which you're computing PeriodOverTime calculations.

 *period*   
(Optional) The time period over which you're computing the calculation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
The default value is the visual's date dimension granularity.

## Example


The following function calculates the average fare amount month over month.

```
periodToDateAvgOverTime(sum({fare_amount}), pickupDatetime, MONTH)
```

![\[This is an image of the results of the example calculation with illustrations.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDAvgOverTimeResults.png)


# periodToDateCountOverTime


The `periodToDateCountOverTime` function calculates the count of a dimension or measure for a given time granularity (for instance, a quarter) up to a point in time.

## Syntax


```
periodToDateCountOverTime(
	measure, 
	dateTime, 
	period)
```

## Arguments


 *measure*   
An aggregated measure that you want to do the calculation for.

 *dateTime*   
The date dimension over which you're computing PeriodOverTime calculations.

 *period*   
(Optional) The time period over which you're computing the calculation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
The default value is the visual's date dimension granularity.

## Example


The following example calculates the count of vendors month over month.

```
periodToDateCountOverTime(count(vendorid), pickupDatetime, MONTH)
```

![\[This is an image of the results of the example calculation with illustrations.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDCountOverTimeResults.png)


# periodToDateMaxOverTime


The `periodToDateMaxOverTime` function calculates the maximum of a measure for a given time granularity (for instance, a quarter) up to a point in time.

## Syntax


```
periodToDateMaxOverTime(
	measure, 
	dateTime, 
	period)
```

## Arguments


 *measure*   
An aggregated measure that you want to do the calculation for.

 *dateTime*   
The date dimension over which you're computing PeriodOverTime calculations.

 *period*   
(Optional) The time period over which you're computing the calculation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
The default value is the visual's date dimension granularity.

## Example


The following example calculates the maximum fare amount month over month.

```
periodToDateMaxOverTime(max({fare_amount}), pickupDatetime, MONTH)
```

![\[This is an image of the results of the example calculation with illustrations.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDMaxOverTimeResults.png)


# periodToDateMinOverTime


The `periodToDateMinOverTime` function calculates the minimum of a measure for a given time granularity (for instance, a quarter) up to a point in time.

## Syntax


```
periodToDateMinOverTime(
	measure, 
	dateTime, 
	period)
```

## Arguments


 *measure*   
An aggregated measure that you want to do the calculation for.

 *dateTime*   
The date dimension over which you're computing PeriodOverTime calculations.

 *period*   
(Optional) The time period over which you're computing the calculation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
The default value is the visual's date dimension granularity.

## Example


The following example calculates the minimum fare amount month over month.

```
periodToDateMinOverTime(min({fare_amount}), pickupDatetime, MONTH)
```

![\[This is an image of the results of the example calculation with illustrations.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDMinOverTimeResults.png)


# periodToDateSumOverTime


The `periodToDateSumOverTime` function calculates the sum of a measure for a given time granularity (for instance, a quarter) up to a point in time.

## Syntax


```
periodToDateSumOverTime(
	measure, 
	dateTime, 
	period)
```

## Arguments


 *measure*   
An aggregated measure that you want to do the calculation for.

 *dateTime*   
The date dimension over which you're computing PeriodOverTime calculations.

 *period*   
(Optional) The time period over which you're performing the computation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.  
The default value is the visual's date dimension granularity.

## Example


The following example calculates the total fare amount month over month.

```
periodToDateSumOverTime(sum({fare_amount}), pickupDatetime, MONTH)
```

![\[This is an image of the results of the example calculation with illustrations.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDSumOverTime-example-results.png)
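The period-to-date family shares one behavior: the aggregate accumulates from the start of the period and resets when a new period begins. As a rough illustration (Quick Sight evaluates this server-side; the dates and fares below are made up), a month-to-date sum can be sketched in plain Python:

```python
from datetime import date

# Hypothetical daily fare totals spanning a month boundary.
rows = [
    (date(2023, 1, 30), 10.0),
    (date(2023, 1, 31), 12.0),
    (date(2023, 2, 1), 7.0),
    (date(2023, 2, 2), 9.0),
]

def period_to_date_sum(rows):
    """Accumulate each value from the start of its month (period = MONTH)
    through the current row, resetting when the month changes."""
    out = []
    running = 0.0
    current_period = None
    for day, value in rows:
        period = (day.year, day.month)
        if period != current_period:
            current_period = period
            running = 0.0
        running += value
        out.append(running)
    return out

print(period_to_date_sum(rows))  # [10.0, 22.0, 7.0, 16.0]
```

Swapping the accumulation for a maximum or minimum gives the analogous `periodToDateMaxOverTime` and `periodToDateMinOverTime` behavior.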


# stdevOver


The `stdevOver` function calculates the standard deviation of the specified measure, partitioned by the chosen attribute or attributes, based on a sample. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
stdevOver
(
      measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

## Arguments


*measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example calculates the standard deviation of `sum(Sales)`, partitioned by `City` and `State`, based on a sample.

```
stdevOver
(
     sum(Sales), 
     [City, State]
)
```

The following example calculates the standard deviation of `Billed Amount` over `Customer Region`, based on a sample. The fields in the table calculation are in the field wells of the visual.

```
stdevOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```

# stdevpOver


The `stdevpOver` function calculates the standard deviation of the specified measure, partitioned by the chosen attribute or attributes, based on a biased population.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
stdevpOver
(
     measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

## Arguments


*measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example calculates the standard deviation of `sum(Sales)`, partitioned by `City` and `State`, based on a biased population.

```
stdevpOver
(
     sum(Sales), 
     [City, State]
)
```

The following example calculates the standard deviation of `Billed Amount` over `Customer Region`, based on a biased population. The fields in the table calculation are in the field wells of the visual.

```
stdevpOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```
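The only difference between `stdevOver` and `stdevpOver` is the divisor: the sample formula divides the summed squared deviations by *n* − 1, while the population formula divides by *n*. A quick check with Python's standard library (the data values are made up):

```python
import statistics

data = [4.0, 7.0, 13.0, 16.0]  # mean = 10, squared deviations sum to 90

# Sample standard deviation (divides by n - 1), as stdevOver computes.
sample = statistics.stdev(data)       # sqrt(90 / 3)

# Population standard deviation (divides by n), as stdevpOver computes.
population = statistics.pstdev(data)  # sqrt(90 / 4)
```

For the same data, the sample value is always the larger of the two.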

# varOver


The `varOver` function calculates the variance of the specified measure, partitioned by the chosen attribute or attributes, based on a sample. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
varOver
(
     measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

## Arguments


*measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example calculates the variance of `sum(Sales)`, partitioned by `City` and `State`, based on a sample.

```
varOver
(
     sum(Sales), 
     [City, State]
)
```

The following example calculates the variance of `Billed Amount` over `Customer Region`, based on a sample. The fields in the table calculation are in the field wells of the visual.

```
varOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```

# varpOver


The `varpOver` function calculates the variance of the specified measure, partitioned by the chosen attribute or attributes, based on a biased population. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
varpOver
(
     measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

## Arguments


*measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example calculates the variance of `sum(Sales)`, partitioned by `City` and `State`, based on a biased population.

```
varpOver
(
     sum(Sales), 
     [City, State]
)
```

The following example calculates the variance of `Billed Amount` over `Customer Region`, based on a biased population. The fields in the table calculation are in the field wells of the visual.

```
varpOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```

# sumOver


 The `sumOver` function calculates the sum of a measure partitioned by a list of dimensions. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
sumOver
(
     measure 
     ,[ partition_field, ... ] 
     ,calculation level 
)
```

## Arguments


*measure*   
The measure that you want to do the calculation for, for example `sum({Sales Amt})`. Use an aggregation if the calculation level is set to `NULL` or `POST_AGG_FILTER`. Don't use an aggregation if the calculation level is set to `PRE_FILTER` or `PRE_AGG`.

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example calculates the sum of `sum(Sales)`, partitioned by `City` and `State`.

```
sumOver
(
     sum(Sales), 
     [City, State]
)
```

The following example sums `Billed Amount` over `Customer Region`. The fields in the table calculation are in the field wells of the visual.

```
sumOver
(
     sum({Billed Amount}),
     [{Customer Region}]
)
```

The following screenshot shows the results of the example. With the addition of `Customer Segment`, the amount billed is still summed over the entire `Customer Region`, and that total displays in the calculated field for every segment.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/sumOver.png)
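Conceptually, `sumOver` attaches the partition's total to every row in that partition rather than collapsing the rows. A minimal Python sketch of that behavior, with made-up regions and amounts:

```python
from collections import defaultdict

# Hypothetical rows: (region, segment, billed amount).
rows = [
    ("East", "Retail", 100.0),
    ("East", "Wholesale", 50.0),
    ("West", "Retail", 80.0),
]

# First pass: total per partition key (the region).
totals = defaultdict(float)
for region, _segment, amount in rows:
    totals[region] += amount

# Second pass: every row receives its partition's total.
sum_over = [totals[region] for region, _segment, _amount in rows]
print(sum_over)  # [150.0, 150.0, 80.0]
```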


# denseRank


The `denseRank` function calculates the rank of a measure or a dimension in comparison to the specified partitions. It counts duplicates only once and assigns ranks "without holes," so that tied values share the same rank and the ranks remain consecutive. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
denseRank
(
  [ sort_order_field ASC_or_DESC, ... ] 
  ,[ partition_field, ... ] 
)
```

## Arguments


 *sort order field*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example densely ranks `max(Sales)`, based on a descending sort order, by `State` and `City`. Any cities with the same `max(Sales)` are assigned the same rank, and the next city is ranked consecutively after them. For example, if three cities share the same ranking, the fourth city is ranked as second. 

```
denseRank
(
  [max(Sales) DESC], 
  [State, City]
)
```

The following example densely ranks `max(Sales)`, based on a descending sort order, by `State`. Any states with the same `max(Sales)` are assigned the same rank, and the next state is ranked consecutively after them. For example, if three states share the same ranking, the fourth state is ranked as second. 

```
denseRank
(
  [max(Sales) DESC], 
  [State]
)
```

# rank


The `rank` function calculates the rank of a measure or a dimension in comparison to the specified partitions. It counts every item, including duplicates, and assigns ranks "with holes," so that tied values share the same rank and the rank after a tie skips ahead by the number of tied items. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
rank
(
  [ sort_order_field ASC_or_DESC, ... ]
  ,[ partition_field, ... ] 
)
```

## Arguments


 *sort order field*   
One or more aggregated measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example ranks `max(Sales)`, based on a descending sort order, by `State` and `City`. Any cities with the same `max(Sales)` are assigned the same rank, but the next rank includes the count of all previously existing ranks. For example, if three cities share the same ranking, the fourth city is ranked as fourth. 

```
rank
(
  [max(Sales) DESC], 
  [State, City]
)
```

The following example ranks `max(Sales)`, based on an ascending sort order, by `State`. Any states with the same `max(Sales)` are assigned the same rank, but the next rank includes the count of all previously existing ranks. For example, if three states share the same ranking, the fourth state is ranked as fourth. 

```
rank
(
  [max(Sales) ASC], 
  [State]
)
```

The following example ranks `Customer Region` by total `Billed Amount`. The fields in the table calculation are in the field wells of the visual.

```
rank(
  [sum({Billed Amount}) DESC]
)
```

The following screenshot shows the results of the example, along with the total `Billed Amount` so you can see how each region ranks.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/rankCalc.png)
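The difference between `rank` and `denseRank` lies only in how ranks advance after a tie. A small Python sketch (the sales figures are made up) shows both behaviors side by side:

```python
def rank_with_holes(values):
    """rank-style: tied values share a rank; the next distinct value
    skips ahead by the number of tied items ("with holes")."""
    ordered = sorted(values, reverse=True)
    return [ordered.index(v) + 1 for v in values]

def dense_rank(values):
    """denseRank-style: tied values share a rank; ranks stay
    consecutive ("without holes")."""
    distinct = sorted(set(values), reverse=True)
    return [distinct.index(v) + 1 for v in values]

sales = [500, 500, 500, 300]
print(rank_with_holes(sales))  # [1, 1, 1, 4]
print(dense_rank(sales))       # [1, 1, 1, 2]
```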


# percentileRank


The `percentileRank` function calculates the percentile rank of a measure or a dimension in comparison to the specified partitions. A percentile rank value of *x* indicates that the current item is above *x*% of the values in the specified partition. The percentile rank value ranges from 0 (inclusive) to 100 (exclusive). 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
percentileRank
(
      [ sort_order_field ASC_or_DESC, ... ] 
     ,[ {partition_field}, ... ]
)
```

## Arguments


 *sort order field*   
One or more aggregated measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *calculation level*  
(Optional) Specifies the calculation level to use:  
+ **`PRE_FILTER`** – Prefilter calculations are computed before the dataset filters.
+ **`PRE_AGG`** – Preaggregate calculations are computed before applying aggregations and top and bottom *N* filters to the visuals.
+ **`POST_AGG_FILTER`** – (Default) Table calculations are computed when the visuals display. 
This value defaults to `POST_AGG_FILTER` when blank. For more information, see [Using level-aware calculations in Quick Sight](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations.html).

## Example


The following example does a percentile ranking of `max(Sales)` in descending order, by `State`. 

```
percentileRank
(
     [max(Sales) DESC], 
     [State]
)
```

The following example does a percentile ranking of `Customer Region` by total `Billed Amount`. The fields in the table calculation are in the field wells of the visual.

```
percentileRank(
     [sum({Billed Amount}) DESC],
     [{Customer Region}]
)
```

The following screenshot shows the results of the example, along with the total `Billed Amount` so you can see how each region compares.

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/percentileRank.png)
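The percentile rank of a value can be read as the percent of values in the partition that fall strictly below it, which is why the result ranges from 0 (inclusive) to 100 (exclusive). A plain-Python sketch with made-up values (exact tie handling here is illustrative):

```python
def percentile_rank(values):
    """For each value, the percent of values in the list that are
    strictly below it; ranges from 0 (inclusive) to 100 (exclusive)."""
    n = len(values)
    return [100.0 * sum(1 for other in values if other < v) / n
            for v in values]

print(percentile_rank([10, 20, 30, 40]))  # [0.0, 25.0, 50.0, 75.0]
```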


# runningAvg


The `runningAvg` function calculates a running average for a measure based on the specified dimensions and sort orders. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions. 

```
runningAvg
(
  measure 
  ,[ sortorder_field ASC_or_DESC, ... ]  
  ,[ partition_field, ... ] 
)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the running average for. 

 *sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*  
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates a running average of `sum(Sales)`, sorted by `Sales`, partitioned by `City` and `State`.

```
runningAvg
(
  sum(Sales), 
  [Sales ASC], 
  [City, State]
)
```

The following example calculates a running average of `Billed Amount`, sorted by month (`[truncDate("MM",Date) ASC]`). The fields in the table calculation are in the field wells of the visual.

```
runningAvg
(
  sum({Billed Amount}),
  [truncDate("MM",Date) ASC]
)
```
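A running average is the cumulative sum divided by the number of rows seen so far, in sort order. Sketched in plain Python with made-up monthly amounts:

```python
# Hypothetical billed amounts, already sorted by month.
monthly_billed = [100.0, 50.0, 150.0]

running_avg = []
total = 0.0
for count, amount in enumerate(monthly_billed, start=1):
    total += amount
    running_avg.append(total / count)

print(running_avg)  # [100.0, 75.0, 100.0]
```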

# runningCount


The `runningCount` function calculates a running count for a measure or dimension, based on the specified dimensions and sort orders. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions. 

```
runningCount
(
  measure_or_dimension 
  ,[ sortorder_field ASC_or_DESC, ... ]  
  ,[ partition_field, ... ] 
)
```

## Arguments


 *measure or dimension*   
An aggregated measure or dimension that you want to see the running count for. 

 *sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*  
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates a running count of `sum(Sales)`, sorted by `Sales`, partitioned by `City` and `State`.

```
runningCount
(
  sum(Sales), 
  [Sales ASC], 
  [City, State]
)
```

The following example calculates a running count of `Billed Amount`, sorted by month (`[truncDate("MM",Date) ASC]`). The fields in the table calculation are in the field wells of the visual.

```
runningCount
(
  sum({Billed Amount}),
  [truncDate("MM",Date) ASC]
)
```

# runningMax


The `runningMax` function calculates a running maximum for a measure based on the specified dimensions and sort orders. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions. 

```
runningMax
(
  measure 
  ,[ sortorder_field ASC_or_DESC, ... ]  
  ,[ partition_field, ... ] 
)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the running maximum for. 

 *sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*  
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates a running maximum of `sum(Sales)`, sorted by `Sales`, partitioned by `City` and `State`.

```
runningMax
(
  sum(Sales), 
  [Sales ASC], 
  [City, State]
)
```

The following example calculates a running maximum of `Billed Amount`, sorted by month (`[truncDate("MM",Date) ASC]`). The fields in the table calculation are in the field wells of the visual.

```
runningMax
(
  sum({Billed Amount}),
  [truncDate("MM",Date) ASC]
)
```

# runningMin


The `runningMin` function calculates a running minimum for a measure based on the specified dimensions and sort orders. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions. 

```
runningMin
(
  measure 
  ,[ sortorder_field ASC_or_DESC, ... ]  
  ,[ partition_field, ... ] 
)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the running minimum for. 

 *sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*  
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates a running minimum of `sum(Sales)`, sorted by `Sales`, partitioned by `City` and `State`.

```
runningMin
(
  sum(Sales), 
  [Sales ASC], 
  [City, State]
)
```

The following example calculates a running minimum of `Billed Amount`, sorted by month (`[truncDate("MM",Date) ASC]`). The fields in the table calculation are in the field wells of the visual.

```
runningMin
(
  sum({Billed Amount}),
  [truncDate("MM",Date) ASC]
)
```

# runningSum


The `runningSum` function calculates a running sum for a measure based on the specified dimensions and sort orders. 

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions. 

```
runningSum
(
  measure 
  ,[ sortorder_field ASC_or_DESC, ... ]  
  ,[ partition_field, ... ] 
)
```

## Arguments


 *measure*   
An aggregated measure that you want to see the running sum for. 

 *sort order field*   
One or more measures and dimensions that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

 *partition field*  
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates a running sum of `sum(Sales)`, sorted by `Sales`, partitioned by `City` and `State`.

```
runningSum
(
  sum(Sales), 
  [Sales ASC], 
  [City, State]
)
```

The following example calculates a running sum of `Billed Amount`, sorted by month (`[truncDate("MM",Date) ASC]`). The fields in the table calculation are in the field wells of the visual.

```
runningSum
(
  sum({Billed Amount}),
  [truncDate("MM",Date) ASC]
)
```

The following screenshot shows the results of the example. The red labels show how each amount is added (`a + b = c`) to the next amount, resulting in a new total. 

![\[alt text not found\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/runningSum.png)
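When a partition field is supplied, the running total restarts for each partition value. A plain-Python sketch of that reset behavior (the regions and amounts are made up):

```python
# Hypothetical rows, already sorted: (region, billed amount).
rows = [("East", 20.0), ("East", 30.0), ("West", 25.0), ("West", 5.0)]

running = {}       # per-partition accumulator
running_sum = []
for region, amount in rows:
    running[region] = running.get(region, 0.0) + amount
    running_sum.append(running[region])

print(running_sum)  # [20.0, 50.0, 25.0, 30.0]
```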


# firstValue


The `firstValue` function calculates the first value of the aggregated measure or dimension partitioned and sorted by specified attributes.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
firstValue
	(
	     aggregated measure or dimension, 
	     [ sort_attribute ASC_or_DESC, ... ],
	     [ partition_by_attribute, ... ] 
	)
```

## Arguments


*aggregated measure or dimension*   
An aggregated measure or dimension that you want to see the first value for.

*sort attribute*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces), if it's more than one word. The entire list is enclosed in [ ] (square brackets).

*partition by attribute*  
(Optional) One or more measures or dimensions that you want to partition by, separated by commas.  
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets). 

## Example


The following example calculates the first value of `Destination Airport`, sorted in ascending order by `Flight Date` and partitioned by `Origin Airport` and `Flight Date`.

```
firstValue(
    {Destination Airport},
    [{Flight Date} ASC],
    [
        {Origin Airport},
        {Flight Date}
    ]
)
```
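In effect, `firstValue` sorts each partition by the sort attribute and returns the first row's value to every row of that partition. A rough Python sketch with made-up flight rows:

```python
from datetime import date

# Hypothetical rows: (origin airport, flight date, destination airport).
flights = [
    ("SEA", date(2023, 3, 2), "JFK"),
    ("SEA", date(2023, 3, 1), "LAX"),
    ("PDX", date(2023, 3, 1), "DEN"),
]

# Sort by the sort attribute, then keep the first value seen
# for each partition key (here, the origin airport).
first_dest = {}
for origin, _day, dest in sorted(flights, key=lambda r: r[1]):
    first_dest.setdefault(origin, dest)

print(first_dest)  # {'SEA': 'LAX', 'PDX': 'DEN'}
```

`lastValue` is the mirror image: with the same ascending sort, it keeps the last value seen instead.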

# lastValue


The `lastValue` function calculates the last value of the aggregated measure or dimension partitioned and sorted by specified attributes.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
lastValue
	(
	     aggregated measure or dimension,
	     [ sort_attribute ASC_or_DESC, ... ],
	     [ partition_by_attribute, ... ] 
	)
```

## Arguments


*aggregated measure or dimension*   
An aggregated measure or dimension that you want to see the last value for.

*sort attribute*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can specify either ascending (`ASC`) or descending (`DESC`) sort order.   
Each field in the list is enclosed in { } (curly braces), if it's more than one word. The entire list is enclosed in [ ] (square brackets).

*partition by attribute*  
(Optional) One or more measures or dimensions that you want to partition by, separated by commas.  
Each field in the list is enclosed in { } (curly braces), if it is more than one word. The entire list is enclosed in [ ] (square brackets). 

## Example


The following example calculates the last value of `Destination Airport`, sorted in ascending order by `Flight Date` and partitioned by `Origin Airport` and the day-level truncation of `Flight Date`.

```
lastValue(
    {Destination Airport},
    [{Flight Date} ASC],
    [
        {Origin Airport},
    	truncDate('DAY', {Flight Date})
    ]
)
```

# windowAvg


The `windowAvg` function calculates the average of the aggregated measure in a custom window that is partitioned and sorted by specified attributes. Usually, you use custom window functions on a time series, where your visual shows a metric and a date field. For example, you can use `windowAvg` to calculate a moving average, which is often used to smooth out the noise in a line chart.

Window functions aren't supported for MySQL versions earlier than 8 and MariaDB versions earlier than 10.2.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
windowAvg
(
     measure 
     ,[ sort_order_field ASC/DESC, ... ] 
     ,start_index 
     ,end_index 
     ,[ partition_field, ... ] 
)
```

## Arguments


*measure*   
The aggregated metric that you want to get the average for, for example `sum({Revenue})`.

*sort attribute*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can specify either ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces), if it's more than one word. The entire list is enclosed in [ ] (square brackets).

*start index*   
The start index is a positive integer, indicating *n* rows above the current row. The start index counts the available data points above the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

*end index*   
The end index is a positive integer, indicating *n* rows below the current row. The end index counts the available data points below the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces), if it's more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the moving average of `sum(Revenue)`, sorted in ascending order by `SaleDate`. The calculation includes three rows above and two rows below the current row.

```
windowAvg
(
    sum(Revenue),
    [SaleDate ASC],
    3,
    2
)
```

The following screenshot shows the results of this moving average example. The `sum(Revenue)` field is added to the chart to show the difference between the revenue and the moving average of revenue.

![Line chart comparing revenue with the moving average of revenue](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/windowAvg.png)
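To make the row-based window indexes concrete, here is a minimal Python sketch of the same logic. This is illustrative only, not QuickSight syntax, and the revenue values are hypothetical. It also shows why sparse data matters: the window counts available rows, not calendar periods.

```python
def window_avg(values, i, start_index, end_index):
    # Average over start_index rows above through end_index rows below
    # the current row i, clamped to the data points that actually exist.
    lo = max(0, i - start_index)
    hi = min(len(values), i + end_index + 1)
    return sum(values[lo:hi]) / (hi - lo)

# Monthly sum(Revenue), sorted by SaleDate ascending. If June were
# missing, July would still count as only one row below May.
revenue = [100, 120, 90, 110, 130, 80]

# windowAvg(sum(Revenue), [SaleDate ASC], 3, 2) evaluated at the
# fourth row (index 3): three rows above and two rows below.
print(window_avg(revenue, 3, 3, 2))  # 105.0 -- the average of all six rows
```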


# windowCount


The `windowCount` function calculates the count of the aggregated measure or dimension in a custom window that is partitioned and sorted by specified attributes. Usually, you use custom window functions on a time series, where your visual shows a metric and a date field.

Window functions aren't supported for MySQL versions earlier than 8 and MariaDB versions earlier than 10.2.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
windowCount
(
    measure_or_dimension,
    [sort_order_field ASC/DESC, ...],
    start_index,
    end_index,
    [partition_field, ...]
)
```

## Arguments


*measure or dimension*   
The aggregated measure or dimension that you want to get the count for, for example `sum({Revenue})`.

*sort attribute*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can either specify ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

*start index*   
The start index is a positive integer, indicating *n* rows above the current row. The start index counts the available data points above the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

*end index*   
The end index is a positive integer, indicating *n* rows below the current row. The end index counts the available data points below the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the moving count of `sum(Revenue)`, sorted by `SaleDate`. The calculation includes three rows above and two rows below the current row.

```
windowCount
(
    sum(Revenue),
    [SaleDate ASC],
    3,
    2
)
```

# windowMax


The `windowMax` function calculates the maximum of the aggregated measure in a custom window that is partitioned and sorted by specified attributes. Usually, you use custom window functions on a time series, where your visual shows a metric and a date field. You can use `windowMax` to help you identify the maximum of the metric over a period of time.

Window functions aren't supported for MySQL versions earlier than 8 and MariaDB versions earlier than 10.2.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
windowMax
(
    measure,
    [sort_order_field ASC/DESC, ...],
    start_index,
    end_index,
    [partition_field, ...]
)
```

## Arguments


*measure*   
The aggregated metric that you want to get the maximum for, for example `sum({Revenue})`.

*sort attribute*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can either specify ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

*start index*   
The start index is a positive integer, indicating *n* rows above the current row. The start index counts the available data points above the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

*end index*   
The end index is a positive integer, indicating *n* rows below the current row. The end index counts the available data points below the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the trailing 12-month maximum of `sum(Revenue)`, sorted by `SaleDate`. The calculation includes 12 rows above and zero rows below the current row.

```
windowMax
(
    sum(Revenue),
    [SaleDate ASC],
    12,
    0
)
```

The following screenshot shows the results of this trailing 12-month example. The `sum(Revenue)` field is added to the chart to show the difference between the revenue and the trailing 12-month maximum revenue.

![Line chart comparing revenue with the trailing 12-month maximum revenue](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/windowMax.png)
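A trailing window (12 rows back, zero rows forward) can be sketched the same way in plain Python. This is illustrative only, with made-up values; it shows how the window is clamped near the start of the series, where fewer than 12 prior rows exist.

```python
def window_max(values, i, start_index, end_index):
    # Maximum over start_index rows above through end_index rows below
    # the current row i, clamped to the available rows.
    lo = max(0, i - start_index)
    hi = min(len(values), i + end_index + 1)
    return max(values[lo:hi])

# Monthly sum(Revenue), sorted by SaleDate ascending (hypothetical data).
monthly_revenue = [100, 140, 90, 120]

# windowMax(sum(Revenue), [SaleDate ASC], 12, 0) evaluated at every row:
trailing_max = [window_max(monthly_revenue, i, 12, 0)
                for i in range(len(monthly_revenue))]
print(trailing_max)  # [100, 140, 140, 140]
```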


# windowMin


The `windowMin` function calculates the minimum of the aggregated measure in a custom window that is partitioned and sorted by specified attributes. Usually, you use custom window functions on a time series, where your visual shows a metric and a date field. You can use `windowMin` to help you identify the minimum of the metric over a period of time.

Window functions aren't supported for MySQL versions earlier than 8 and MariaDB versions earlier than 10.2.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
windowMin
(
    measure,
    [sort_order_field ASC/DESC, ...],
    start_index,
    end_index,
    [partition_field, ...]
)
```

## Arguments


*measure*   
The aggregated metric that you want to get the minimum for, for example `sum({Revenue})`.

*sort attribute*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can either specify ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

*start index*   
The start index is a positive integer, indicating *n* rows above the current row. The start index counts the available data points above the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

*end index*   
The end index is a positive integer, indicating *n* rows below the current row. The end index counts the available data points below the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the trailing 12-month minimum of `sum(Revenue)`, sorted by `SaleDate`. The calculation includes 12 rows above and zero rows below the current row.

```
windowMin
(
    sum(Revenue),
    [SaleDate ASC],
    12,
    0
)
```

The following screenshot shows the results of this trailing 12-month example. The `sum(Revenue)` field is added to the chart to show the difference between the revenue and the trailing 12-month minimum revenue.

![Line chart comparing revenue with the trailing 12-month minimum revenue](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/windowMin.png)


# windowSum


The `windowSum` function calculates the sum of the aggregated measure in a custom window that is partitioned and sorted by specified attributes. Usually, you use custom window functions on a time series, where your visual shows a metric and a date field. 

Window functions aren't supported for MySQL versions earlier than 8 and MariaDB versions earlier than 10.2.

## Syntax


The brackets are required. To see which arguments are optional, see the following descriptions.

```
windowSum
	(
	     measure 
            , [sort_order_field ASC/DESC, ...]
            , start_index
            , end_index
	     ,[ partition_field, ... ] 
	)
```

## Arguments


*measure*   
The aggregated metric that you want to get the sum for, for example `sum({Revenue})`.   
For the MySQL, MariaDB, and Amazon Aurora MySQL-compatible engines, the lookup index is limited to 1. Window functions aren't supported for MySQL versions earlier than 8 and MariaDB versions earlier than 10.2.

*sort attribute*   
One or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can either specify ascending (**ASC**) or descending (**DESC**) sort order.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

*start index*   
The start index is a positive integer, indicating *n* rows above the current row. The start index counts the available data points above the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

*end index*   
The end index is a positive integer, indicating *n* rows below the current row. The end index counts the available data points below the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly. 

 *partition field*   
(Optional) One or more dimensions that you want to partition by, separated by commas.   
Each field in the list is enclosed in { } (curly braces) if it's more than one word. The entire list is enclosed in [ ] (square brackets).

## Example


The following example calculates the moving sum of `sum(Revenue)`, sorted by `SaleDate`. The calculation includes two rows above and one row below the current row.

```
windowSum
(
    sum(Revenue),
    [SaleDate ASC],
    2,
    1
)
```

The following example shows a trailing 12-month sum.

```
windowSum(sum(Revenue),[SaleDate ASC],12,0)
```

The following screenshot shows the results of this trailing 12-month sum example. The `sum(Revenue)` field is added to the chart to show the difference between the revenue and the trailing 12-month sum of revenue.

![Line chart comparing revenue with the trailing 12-month sum of revenue](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/windowSum.png)
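The optional partition field restarts the window inside each partition, so rows in one partition never contribute to another partition's window. The following Python sketch illustrates that behavior; it is not QuickSight syntax, and the `region` partitions and revenue values are hypothetical.

```python
def windowed_sums(values, start_index, end_index):
    # Moving sum over start_index rows above through end_index rows below
    # each row, clamped to the rows available within one partition.
    out = []
    for i in range(len(values)):
        lo = max(0, i - start_index)
        hi = min(len(values), i + end_index + 1)
        out.append(sum(values[lo:hi]))
    return out

# sum(Revenue) per SaleDate, already split by the partition field:
by_region = {"US": [10, 20, 30], "EU": [5, 5]}

# Like windowSum(sum(Revenue), [SaleDate ASC], 2, 1, [region]):
# each region's window sees only that region's rows.
result = {region: windowed_sums(vals, 2, 1)
          for region, vals in by_region.items()}
print(result)  # {'US': [30, 60, 60], 'EU': [10, 10]}
```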
