

# Aggregate functions


Aggregate functions are only available during analysis and visualization. Each of these functions returns values grouped by the chosen dimension or dimensions. For each aggregate function, there is also a conditional version that performs the same type of aggregation based on a condition.

When a calculated field formula contains an aggregation, it becomes a custom aggregation. To make sure that your data is accurately displayed, Amazon Quick applies the following rules:
+ Custom aggregations can't contain nested aggregate functions. For example, this formula doesn't work: `sum(avg(x)/avg(y))`. However, nesting nonaggregated functions inside or outside aggregate functions does work. For example, `ceil(avg(x))` works. So does `avg(ceil(x))`.
+ Custom aggregations can't contain both aggregated and nonaggregated fields, in any combination. For example, this formula doesn't work: `Sum(sales)+quantity`.
+ Filter groups can't contain both aggregated and nonaggregated fields.
+ Custom aggregations can't be converted to a dimension. They also can't be dropped into the field well as a dimension.
+ In a pivot table, custom aggregations can't be added to table calculations.
+ Scatter plots with custom aggregations need at least one dimension under **Group/Color** in the field wells.

For more information about supported functions and operators, see [Calculated field function and operator reference for Amazon Quick](https://docs.amazonaws.cn/quicksight/latest/user/calculated-field-reference.html). 

The aggregate functions for calculated fields in Quick include the following.

**Topics**
+ [avg](avg-function.md)
+ [avgIf](avgIf-function.md)
+ [count](count-function.md)
+ [countIf](countIf-function.md)
+ [distinct_count](distinct_count-function.md)
+ [distinct_countIf](distinct_countIf-function.md)
+ [max](max-function.md)
+ [maxIf](maxIf-function.md)
+ [median](median-function.md)
+ [medianIf](medianIf-function.md)
+ [min](min-function.md)
+ [minIf](minIf-function.md)
+ [percentile](percentile-function.md)
+ [percentileCont](percentileCont-function.md)
+ [percentileDisc (percentile)](percentileDisc-function.md)
+ [periodToDateAvg](periodToDateAvg-function.md)
+ [periodToDateCount](periodToDateCount-function.md)
+ [periodToDateMax](periodToDateMax-function.md)
+ [periodToDateMedian](periodToDateMedian-function.md)
+ [periodToDateMin](periodToDateMin-function.md)
+ [periodToDatePercentile](periodToDatePercentile-function.md)
+ [periodToDatePercentileCont](periodToDatePercentileCont-function.md)
+ [periodToDateStDev](periodToDateStDev-function.md)
+ [periodToDateStDevP](periodToDateStDevP-function.md)
+ [periodToDateSum](periodToDateSum-function.md)
+ [periodToDateVar](periodToDateVar-function.md)
+ [periodToDateVarP](periodToDateVarP-function.md)
+ [stdev](stdev-function.md)
+ [stdevp](stdevp-function.md)
+ [stdevIf](stdevIf-function.md)
+ [stdevpIf](stdevpIf-function.md)
+ [sum](sum-function.md)
+ [sumIf](sumIf-function.md)
+ [var](var-function.md)
+ [varIf](varIf-function.md)
+ [varp](varp-function.md)
+ [varpIf](varpIf-function.md)

# avg


The `avg` function averages the set of numbers in the specified measure, grouped by the chosen dimension or dimensions. For example, `avg(salesAmount)` returns the average for that measure grouped by the (optional) chosen dimension.

## Syntax


```
avg(decimal, [group-by level])
```

## Arguments


 *decimal*   
The argument must be a measure. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example calculates the average sales.

```
avg({Sales})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the average sales at the Country level, but not across other dimensions (Region or Product) in the visual.

```
avg({Sales}, [{Country}])
```

![\[Average sales numbers are aggregated only at the country level.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/avg-function-example.png)


# avgIf


Based on a conditional statement, the `avgIf` function averages the set of numbers in the specified measure, grouped by the chosen dimension or dimensions. For example, `avgIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` returns the average for that measure grouped by the (optional) chosen dimension, if the condition evaluates to true.

## Syntax


```
avgIf(decimal, condition)
```

## Arguments


 *decimal*   
The argument must be a measure. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *condition*   
One or more conditions in a single statement.
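
## Example

The following formula is an illustrative sketch; the `Sales` and `Region` fields are hypothetical. It averages sales only for rows where the region is EMEA, grouped by the dimensions in the visual:

```
// Hypothetical fields: {Sales}, {Region}
avgIf({Sales}, {Region} = 'EMEA')
```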

# count


The `count` function calculates the number of values in a dimension or measure, grouped by the chosen dimension or dimensions. For example, `count(product type)` returns the total number of product types grouped by the (optional) chosen dimension, including any duplicates. The `count(sales)` function returns the total number of sales completed grouped by the (optional) chosen dimension, for example salesperson.

## Syntax


```
count(dimension or measure, [group-by level])
```

## Arguments


 *dimension or measure*   
The argument must be a measure or a dimension. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example calculates the count of sales by a specified dimension in the visual. In this example, the count of sales by month are shown.

```
count({Sales})
```

![\[The count of sales by month.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/count-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the count of sales at the Country level, but not across other dimensions (Region or Product) in the visual.

```
count({Sales}, [{Country}])
```

![\[Count of sales are aggregated only at the country level.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/count-function-example2.png)


# countIf


Based on a conditional statement, the `countIf` function calculates the number of values in a dimension or measure, grouped by the chosen dimension or dimensions.

## Syntax


```
countIf(dimension or measure, condition)
```

## Arguments


 *dimension or measure*   
The argument must be a measure or a dimension. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *condition*   
One or more conditions in a single statement.

## Return type


Integer

## Example


The following function returns a count of the sales transactions (`Revenue`) that meet the conditions, including any duplicates. 

```
countIf (
    Revenue,
    # Conditions
        CalendarDay >= ${BasePeriodStartDate} AND 
        CalendarDay <= ${BasePeriodEndDate} AND 
        SourcingType <> 'Indirect'
)
```

# distinct_count


The `distinct_count` function calculates the number of distinct values in a dimension or measure, grouped by the chosen dimension or dimensions. For example, `distinct_count(product type)` returns the total number of unique product types grouped by the (optional) chosen dimension, without any duplicates. The `distinct_count(ship date)` function returns the total number of dates when products were shipped grouped by the (optional) chosen dimension, for example region.

## Syntax


```
distinct_count(dimension or measure, [group-by level])
```

## Arguments


 *dimension or measure*   
The argument must be a measure or a dimension. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Example


The following example calculates the total number of dates when products were ordered grouped by the (optional) chosen dimension in the visual, for example region.

```
distinct_count({Order Date})
```

![\[The total number of dates when products were ordered in each region.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/distinct_count-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the distinct count of order dates at the Country level, but not across other dimensions (Region) in the visual.

```
distinct_count({Order Date}, [Country])
```

![\[The total number of dates when products were ordered in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/distinct_count-function-example2.png)


# distinct_countIf


Based on a conditional statement, the `distinct_countIf` function calculates the number of distinct values in a dimension or measure, grouped by the chosen dimension or dimensions. For example, `distinct_countIf(product type)` returns the total number of unique product types grouped by the (optional) chosen dimension, without any duplicates. The `distinct_countIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` function returns the total number of dates when products were shipped grouped by the (optional) chosen dimension, for example region, if the condition evaluates to true.

## Syntax


```
distinct_countIf(dimension or measure, condition)
```

## Arguments


 *dimension or measure*   
The argument must be a measure or a dimension. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *condition*   
One or more conditions in a single statement.
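
## Example

The following formula is an illustrative sketch; the `Order ID` and `Order Status` fields are hypothetical. It counts the unique orders that were returned, grouped by the dimensions in the visual:

```
// Hypothetical fields: {Order ID}, {Order Status}
distinct_countIf({Order ID}, {Order Status} = 'Returned')
```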

# max


The `max` function returns the maximum value of the specified measure or date, grouped by the chosen dimension or dimensions. For example, `max(sales goal)` returns the maximum sales goals grouped by the (optional) chosen dimension.

## Syntax


```
max(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure or a date. Null values are omitted from the results. Literal values don't work. The argument must be a field.  
Maximum dates work only in the **Value** field well of tables and pivot tables. 

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the max sales value for each region. It is compared to the total, minimum, and median sales values.

```
max({Sales})
```

![\[The maximum sales value for each region.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/min-max-median-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the max sales at the Country level, but not across other dimensions (Region) in the visual.

```
max({Sales}, [Country])
```

![\[The maximum sales value in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/max-function-example2.png)


# maxIf


Based on a conditional statement, the `maxIf` function returns the maximum value of the specified measure, grouped by the chosen dimension or dimensions. For example, `maxIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` returns the maximum sales goals grouped by the (optional) chosen dimension, if the condition evaluates to true.

## Syntax


```
maxIf(measure, condition)
```

## Arguments


 *measure*   
The argument must be a measure. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *condition*   
One or more conditions in a single statement.
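
## Example

The following formula is an illustrative sketch; the `Sales` and `Segment` fields are hypothetical. It returns the maximum sale made to consumer customers, grouped by the dimensions in the visual:

```
// Hypothetical fields: {Sales}, {Segment}
maxIf({Sales}, {Segment} = 'Consumer')
```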

# median


The `median` aggregation returns the median value of the specified measure, grouped by the chosen dimension or dimensions. For example, `median(revenue)` returns the median revenue grouped by the (optional) chosen dimension. 

## Syntax


```
median(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the median sales value for each region. It is compared to the total, maximum, and minimum sales.

```
median({Sales})
```

![\[The median sales value for each region.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/min-max-median-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the median sales at the Country level, but not across other dimensions (Region) in the visual.

```
median({Sales}, [Country])
```

![\[The median sales value in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/median-function-example2.png)


# medianIf


Based on a conditional statement, the `medianIf` aggregation returns the median value of the specified measure, grouped by the chosen dimension or dimensions. For example, `medianIf(Revenue,SaleDate >= ${BasePeriodStartDate} AND SaleDate <= ${BasePeriodEndDate})` returns the median revenue grouped by the (optional) chosen dimension, if the condition evaluates to true. 

## Syntax


```
medianIf(measure, condition)
```

## Arguments


 *measure*   
The argument must be a measure. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *condition*   
One or more conditions in a single statement.
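
## Example

The following formula is an illustrative sketch; the `Revenue` and `Discount` fields are hypothetical. It returns the median revenue of discounted sales only, grouped by the dimensions in the visual:

```
// Hypothetical fields: {Revenue}, {Discount}
medianIf({Revenue}, {Discount} > 0)
```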

# min


The `min` function returns the minimum value of the specified measure or date, grouped by the chosen dimension or dimensions. For example, `min(return rate)` returns the minimum rate of returns grouped by the (optional) chosen dimension.

## Syntax


```
min(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure or a date. Null values are omitted from the results. Literal values don't work. The argument must be a field.  
Minimum dates work only in the **Value** field well of tables and pivot tables. 

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the minimum sales value for each region. It is compared to the total, max, and median sales.

```
min({Sales})
```

![\[The minimum sales value for each region.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/min-max-median-function-example.png)


You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the minimum sales at the Country level, but not across other dimensions (Region) in the visual.

```
min({Sales}, [Country])
```

![\[The minimum sales value in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/min-function-example2.png)


# minIf


Based on a conditional statement, the `minIf` function returns the minimum value of the specified measure, grouped by the chosen dimension or dimensions. For example, `minIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` returns the minimum rate of returns grouped by the (optional) chosen dimension, if the condition evaluates to true.

## Syntax


```
minIf(measure, condition)
```

## Arguments


 *measure*   
The argument must be a measure. Null values are omitted from the results. Literal values don't work. The argument must be a field.

 *condition*   
One or more conditions in a single statement.
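
## Example

The following formula is an illustrative sketch; the `Sales` and `Ship Mode` fields are hypothetical. It returns the smallest sale shipped by standard delivery, grouped by the dimensions in the visual:

```
// Hypothetical fields: {Sales}, {Ship Mode}
minIf({Sales}, {Ship Mode} = 'Standard')
```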

# percentile


The `percentile` function calculates the percentile of the values in a measure, grouped by the dimension that's in the field well. There are two varieties of percentile calculation available in Quick:
+ [percentileCont](https://docs.amazonaws.cn/quicksight/latest/user/percentileCont-function.html) uses linear interpolation to determine the result.
+ [percentileDisc (percentile)](https://docs.amazonaws.cn/quicksight/latest/user/percentileDisc-function.html) uses actual values to determine the result. 

The `percentile` function is an alias of `percentileDisc`.
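
Because `percentile` is an alias of `percentileDisc`, the two formulas in the following sketch are equivalent. The `Sales` field is hypothetical; each formula returns the 90th percentile of sales as an actual value from the dataset:

```
// Hypothetical field: {Sales}; percentile is an alias of percentileDisc
percentile({Sales}, 90)
percentileDisc({Sales}, 90)
```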

# percentileCont


The `percentileCont` function calculates percentile based on a continuous distribution of the numbers in the measure. It uses the grouping and sorting that are applied in the field wells. It answers questions like: What values are representative of this percentile? To return an exact percentile value that might not be present in your dataset, use `percentileCont`. To return the nearest percentile value that is present in your dataset, use `percentileDisc` instead.

## Syntax


```
percentileCont(measure, percentile, [group-by level])
```

## Arguments


 *measure*   
Specifies a numeric value to use to compute the percentile. The argument must be a measure or metric. Nulls are ignored in the calculation. 

 *percentile*   
The percentile value can be any numeric constant 0–100. A percentile value of 50 computes the median value of the measure. 

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Returns


The result of the function is a number. 

## Usage notes


The `percentileCont` function calculates a result based on a continuous distribution of the values from a specified measure. The result is computed by linear interpolation between the values after ordering them based on settings in the visual. It's different from `percentileDisc`, which simply returns a value from the set of values that are aggregated over. The result from `percentileCont` might or might not exist in the values from the specified measure.

## Examples

The following examples help explain how `percentileCont` works.

**Example Comparing median, `percentileCont`, and `percentileDisc`**  
The following example shows the median for a dimension (category) by using the `median`, `percentileCont`, and `percentileDisc` functions. The median value is the same as the `percentileCont` value. `percentileCont` interpolates a value, which might or might not be in the dataset. However, because `percentileDisc` always displays a value that exists in the dataset, the two results might not match. The last column in this example shows the difference between the two values. The code for each calculated field is as follows:  
+ `50%Cont = percentileCont( example , 50 )`
+ `median = median( example )`
+ `50%Disc = percentileDisc( example , 50 )`
+ `Cont-Disc = percentileCont( example , 50 ) - percentileDisc( example , 50 )`
+ `example = left( category, 1 )` (To make a simpler example, we used this expression to shorten the names of categories down to their first letter.)

```
  example     median       50%Cont      50%Disc      Cont-Disc
 -------- ----------- ------------ -------------- ------------ 
 A          22.48          22.48          22.24          0.24
 B          20.96          20.96          20.95          0.01
 C          24.92          24.92          24.92          0
 D          24.935         24.935         24.92          0.015
 E          14.48          14.48          13.99          0.49
```

**Example 100th percentile as maximum**  
The following example shows a variety of `percentileCont` values for the `example` field. The calculated fields `n%Cont` are defined as `percentileCont( {example} ,n)`. The interpolated values in each column represent the numbers that fall into that percentile bucket. In some cases, the actual data values match the interpolated values. For example, the column `100%Cont` shows the same value for every row because 6783.02 is the highest number.  

```
 example      50%Cont     75%Cont      99%Cont    100%Cont  
 --------- ----------- ----------- ------------ ----------- 

 A             20.97       84.307      699.99      6783.02  
 B             20.99       88.84       880.98      6783.02  
 C             20.99       90.48       842.925     6783.02  
 D             21.38       85.99       808.49      6783.02
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the 30th percentile based on a continuous distribution of the numbers at the Country level, but not across other dimensions (Region) in the visual.

```
percentileCont({Sales}, 30, [Country])
```

![\[The percentile of sales in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/percentileCont-function-example-lac.png)


# percentileDisc (percentile)


The `percentileDisc` function calculates the percentile based on the actual numbers in `measure`. It uses the grouping and sorting that are applied in the field wells. The `percentile` function is an alias of `percentileDisc`.

Use this function to answer the following question: Which actual data points are present in this percentile? To return the nearest percentile value that is present in your dataset, use `percentileDisc`. To return an exact percentile value that might not be present in your dataset, use `percentileCont` instead. 

## Syntax


```
percentileDisc(measure, percentile, [group-by level])
```

## Arguments


 *measure*   
Specifies a numeric value to use to compute the percentile. The argument must be a measure or metric. Nulls are ignored in the calculation. 

 *percentile*   
The percentile value can be any numeric constant 0–100. A percentile value of 50 computes the median value of the measure. 

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Returns


The result of the function is a number. 

## Usage notes


`percentileDisc` is an inverse distribution function that assumes a discrete distribution model. It takes a percentile value and a sort specification and returns an element from the given set. 

For a given percentile value `P`, `percentileDisc` uses the sorted values in the visual and returns the value with the smallest cumulative distribution value that is greater than or equal to `P`. 

## Examples

The following examples help explain how `percentileDisc` works.

**Example Comparing median, `percentileDisc`, and `percentileCont`**  
The following example shows the median for a dimension (category) by using the `percentileCont`, `percentileDisc`, and `median` functions. The median value is the same as the `percentileCont` value. `percentileCont` interpolates a value, which might or might not be in the dataset. However, because `percentileDisc` always displays the closest value that exists in the dataset, the two results might not match. The last column in this example shows the difference between the two values. The code for each calculated field is as follows:  
+ `50%Cont = percentileCont( example , 50 )`
+ `median = median( example )`
+ `50%Disc = percentileDisc( example , 50 )`
+ `Cont-Disc = percentileCont( example , 50 ) - percentileDisc( example , 50 )`
+ `example = left( category, 1 )` (To make a simpler example, we used this expression to shorten the names of categories down to their first letter.)

```
 example     median       50%Cont      50%Disc      Cont-Disc
 -------- ----------- ------------ -------------- ------------ 
 A          22.48          22.48          22.24          0.24
 B          20.96          20.96          20.95          0.01
 C          24.92          24.92          24.92          0
 D          24.935         24.935         24.92          0.015
 E          14.48          14.48          13.99          0.49
```

**Example 100th percentile as maximum**  
The following example shows a variety of `percentileDisc` values for the `example` field. The calculated fields `n%Disc` are defined as `percentileDisc( {example} ,n)`. The values in each column are actual numbers that come from the dataset.   

```
 example     50%Disc      75%Disc        99%Disc      100%Disc
 -------- ----------- ------------ -------------- ------------ 
 A            20.97        73.98         699.99       6783.02
 B            42.19        88.84         820.08       6783.02
 C            30.52        90.48         733.44       6783.02
 D            41.38        85.99         901.29       6783.02
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the 30th percentile of the actual numbers at the Country level, but not across other dimensions (Region) in the visual.

```
percentile({Sales}, 30, [Country])
```

![\[The percentile of sales in each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/percentile-function-example-lac.png)


# periodToDateAvg


The `periodToDateAvg` function averages the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateAvg(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing period-to-date aggregations.

 *period*   
The time period over which you're performing the computation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means a `QuarterToDate` computation, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the period-to-date aggregation. Defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date average fare amount per payment type, for the week of 06-30-21. For simplicity, the example is filtered to a single payment type. 06-30-21 is a Wednesday, and Quick begins the week on Sunday, so in this example the week starts on 06-27-21.

```
periodToDateAvg(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDAvgResults.png)


# periodToDateCount


The `periodToDateCount` function calculates the number of values in a dimension or measure, including duplicates, for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateCount(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing period-to-date aggregations.

 *period*   
The time period over which you're computing the aggregation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the period-to-date computation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date count of fares per payment type for the week of 06-30-21. For simplicity, the example filters the data to a single payment type. 06-30-21 is a Wednesday. Quick begins the week on Sundays, so in this example the week starts on 06-27-21.

```
periodToDateCount(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDCountResults.png)


# periodToDateMax


The `periodToDateMax` function returns the maximum value of the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateMax(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing period-to-date aggregations.

 *period*   
The time period over which you're computing the aggregation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the period-to-date computation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date maximum fare amount per payment type for the week of 06-30-21. For simplicity, the example filters the data to a single payment type. 06-30-21 is a Wednesday. Quick begins the week on Sundays, so in this example the week starts on 06-27-21.

```
periodToDateMax(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDMaxResults.png)


# periodToDateMedian


The `periodToDateMedian` function returns the median value of the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateMedian(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing period-to-date aggregations.

 *period*   
The time period over which you're computing the aggregation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the period-to-date computation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date median fare amount per payment type for the week of 06-30-21. For simplicity, the example filters the data to a single payment type. 06-30-21 is a Wednesday. Quick begins the week on Sundays, so in this example the week starts on 06-27-21.

```
periodToDateMedian(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDMedianResults.png)


# periodToDateMin


The `periodToDateMin` function returns the minimum value of the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateMin(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing period-to-date aggregations.

 *period*   
The time period over which you're computing the aggregation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the period-to-date computation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date minimum fare amount per payment type for the week of 06-30-21. For simplicity, the example filters the data to a single payment type. 06-30-21 is a Wednesday. Quick begins the week on Sundays, so in this example the week starts on 06-27-21.

```
periodToDateMin(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDMinResults.png)


# periodToDatePercentile


The `periodToDatePercentile` function calculates the percentile based on the actual numbers in the measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period. It uses the grouping and sorting that are applied in the field wells.

To return the nearest percentile value that is present in your dataset, use `periodToDatePercentile`. To return an exact percentile value that might not be present in your dataset, use `periodToDatePercentileCont` instead.

## Syntax


```
periodToDatePercentile(
	measure, 
	percentile, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *percentile*   
The percentile value can be any numeric constant 0-100. A percentile of 50 computes the median value of the measure.

 *dateTime*   
The date dimension over which you're computing period-to-date aggregations.

 *period*   
The time period over which you're computing the aggregation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the period-to-date computation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date 90th percentile of the fare amount per payment type for the week of 06-30-21. For simplicity, the example filters the data to a single payment type. 06-30-21 is a Wednesday. Quick begins the week on Sundays, so in this example the week starts on 06-27-21.

```
periodToDatePercentile(fare_amount, 90, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the return from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDPercentileResults.png)


# periodToDatePercentileCont


The `periodToDatePercentileCont` function calculates the percentile based on a continuous distribution of the numbers in the measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period. It uses the grouping and sorting that are applied in the field wells.

To return an exact percentile value that might not be present in your dataset, use `periodToDatePercentileCont`. To return the nearest percentile value that is present in your dataset, use `periodToDatePercentile` instead.

## Syntax


```
periodToDatePercentileCont(
	measure, 
	percentile, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *percentile*   
The percentile value can be any numeric constant 0-100. A percentile of 50 computes the median value of the measure.

 *dateTime*   
The date dimension over which you're computing period-to-date aggregations.

 *period*   
The time period over which you're computing the aggregation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the period-to-date computation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date 90th percentile of the fare amount per payment type for the week of 06-30-21. For simplicity, the example filters the data to a single payment type. 06-30-21 is a Wednesday. Quick begins the week on Sundays, so in this example the week starts on 06-27-21.

```
periodToDatePercentileCont(fare_amount, 90, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the return from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDContPercentileResults.png)
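The difference between `periodToDatePercentile` and `periodToDatePercentileCont` mirrors the usual nearest-rank versus continuous (interpolated) percentile definitions. As a rough illustration of that distinction outside of Quick, this Python sketch (not part of Quick, made-up data) computes both for the same inputs:

```python
import math

def percentile_nearest(values, p):
    """Nearest-rank percentile: always returns a value present in the data."""
    data = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(data)))  # 1-based nearest rank
    return data[rank - 1]

def percentile_cont(values, p):
    """Continuous percentile: linear interpolation between neighbors."""
    data = sorted(values)
    pos = (len(data) - 1) * p / 100
    lo, hi = math.floor(pos), math.ceil(pos)
    return data[lo] + (data[hi] - data[lo]) * (pos - lo)

fares = [10, 20, 30, 40]
print(percentile_nearest(fares, 90))  # 40 (a value from the data)
print(percentile_cont(fares, 90))     # ≈ 37.0 (interpolated)
```

With the data above, the nearest-rank version returns an observed fare, while the continuous version interpolates a value that never occurs in the data.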


# periodToDateStDev


The `periodToDateStDev` function calculates the sample standard deviation of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateStDev(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing period-to-date aggregations.

 *period*   
The time period over which you're computing the aggregation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the period-to-date computation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date sample standard deviation of the fare amount per payment type for the week of 06-30-21. For simplicity, the example filters the data to a single payment type. 06-30-21 is a Wednesday. Quick begins the week on Sundays, so in this example the week starts on 06-27-21.

```
periodToDateStDev(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDStDevResults.png)


# periodToDateStDevP


The `periodToDateStDevP` function calculates the population standard deviation of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateStDevP(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing period-to-date aggregations.

 *period*   
The time period over which you're computing the aggregation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the period-to-date computation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date population standard deviation of the fare amount per payment type for the week of 06-30-21. For simplicity, the example filters the data to a single payment type. 06-30-21 is a Wednesday. Quick begins the week on Sundays, so in this example the week starts on 06-27-21.

```
periodToDateStDevP(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDStDevPResults.png)


# periodToDateSum


The `periodToDateSum` function adds the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateSum(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing period-to-date aggregations.

 *period*   
The time period over which you're computing the aggregation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the period-to-date computation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date sum of the fare amount per payment type for the week of 06-30-21. For simplicity, the example filters the data to a single payment type. 06-30-21 is a Wednesday. Quick begins the week on Sundays, so in this example the week starts on 06-27-21.

```
periodToDateSum(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results for the example, with illustrations.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDSumResults.png)
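To make the period-to-date semantics concrete, here is a rough sketch outside of Quick (with made-up data): for a `WEEK` granularity, only rows from the start of the week containing the end date through the end date itself contribute to the sum.

```python
from datetime import date, timedelta

def week_to_date_sum(rows, end_date):
    """Sum 'amount' for rows whose date falls in the Sunday-started
    week of end_date, up to and including end_date."""
    # Quick starts the week on Sunday; date.weekday() is Mon=0 .. Sun=6.
    week_start = end_date - timedelta(days=(end_date.weekday() + 1) % 7)
    return sum(amount for d, amount in rows if week_start <= d <= end_date)

rows = [
    (date(2021, 6, 26), 5.0),   # Saturday, previous week: excluded
    (date(2021, 6, 27), 7.5),   # Sunday, week start: included
    (date(2021, 6, 30), 10.0),  # Wednesday, the end date: included
    (date(2021, 7, 1), 4.0),    # after the end date: excluded
]
print(week_to_date_sum(rows, date(2021, 6, 30)))  # 17.5
```

For the end date 06-30-21, the computed week start is Sunday 06-27-21, matching the example above.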


# periodToDateVar


The `periodToDateVar` function calculates the sample variance of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateVar(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing period-to-date aggregations.

 *period*   
The time period over which you're computing the aggregation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the period-to-date computation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date sample variance of the fare amount per payment type for the week of 06-30-21. For simplicity, the example filters the data to a single payment type. 06-30-21 is a Wednesday. Quick begins the week on Sundays, so in this example the week starts on 06-27-21.

```
periodToDateVar(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDVarResults.png)


# periodToDateVarP


The `periodToDateVarP` function calculates the population variance of the set of numbers in the specified measure for a given time granularity (for instance, a quarter) up to a point in time, relative to that period.

## Syntax


```
periodToDateVarP(
	measure, 
	dateTime, 
	period, 
	endDate (optional))
```

## Arguments


 *measure*   
The argument must be a field. Null values are omitted from the results. Literal values don't work.

 *dateTime*   
The date dimension over which you're computing period-to-date aggregations.

 *period*   
The time period over which you're computing the aggregation. A granularity of `YEAR` means a `YearToDate` computation, `QUARTER` means `QuarterToDate`, and so on. Valid granularities include `YEAR`, `QUARTER`, `MONTH`, `WEEK`, `DAY`, `HOUR`, `MINUTE`, and `SECONDS`.

 *endDate*   
(Optional) The date at which to end the period-to-date computation. It defaults to `now()` if omitted.

## Example


The following example calculates the week-to-date population variance of the fare amount per payment type for the week of 06-30-21. For simplicity, the example filters the data to a single payment type. 06-30-21 is a Wednesday. Quick begins the week on Sundays, so in this example the week starts on 06-27-21.

```
periodToDateVarP(fare_amount, pickUpDatetime, WEEK, parseDate("06-30-2021", "MM-dd-yyyy"))
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/PTDVarPResults.png)


# stdev


The `stdev` function calculates the standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions, based on a sample.

## Syntax


```
stdev(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure-type field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the standard deviation of test scores for a class, using a sample of the test scores recorded.

```
stdev({Score})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the standard deviation of test scores at the subject level, but not across other dimensions (Class) in the visual.

```
stdev({Score}, [Subject])
```

# stdevp


The `stdevp` function calculates the population standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.

## Syntax


```
stdevp(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure-type field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the standard deviation of test scores for a class using all the scores recorded.

```
stdevp({Score})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the standard deviation of test scores at the subject level, but not across other dimensions (Class) in the visual, using all the scores recorded.

```
stdevp({Score}, [Subject])
```
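The distinction between `stdev` (sample) and `stdevp` (population) is the standard n − 1 versus n denominator. As a quick illustration using Python's standard library (not Quick syntax, made-up scores):

```python
import statistics

scores = [80, 85, 90, 95]

# Sample standard deviation (n - 1 denominator), analogous to stdev
print(statistics.stdev(scores))   # ≈ 6.455

# Population standard deviation (n denominator), analogous to stdevp
print(statistics.pstdev(scores))  # ≈ 5.590
```

The sample version is always at least as large as the population version, with the gap shrinking as the number of rows grows.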

# stdevIf


Based on a conditional statement, the `stdevIf` function calculates the sample standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.

## Syntax


```
stdevIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure-type field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.
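## Example

Like the other conditional aggregations, `stdevIf` takes one or more conditions as its second argument. The following hypothetical calculated field (the `Sales` and `Segment` field names are illustrative) returns the sample standard deviation of sales where the segment is `SMB`.

```
stdevIf(Sales, Segment='SMB')
```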

# stdevpIf


Based on a conditional statement, the `stdevpIf` function calculates the population standard deviation of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.

## Syntax


```
stdevpIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure-type field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.
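## Example

The following hypothetical calculated field (the field names are illustrative) returns the population standard deviation of sales for orders placed on or after January 1, 2022.

```
stdevpIf(Sales, {Order Date} >= '2022-01-01')
```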

# sum


The `sum` function adds the set of numbers in the specified measure, grouped by the chosen dimension or dimensions. For example, `sum({profit amount})` returns the total profit amount, grouped by the (optional) chosen dimension.

## Syntax


```
sum(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure-type field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the sum of sales.

```
sum({Sales})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example calculates the sum of sales at the Country level, but not across other dimensions (Region and Product) in the visual.

```
sum({Sales}, [Country])
```

![\[The sum of sales for each country.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/sum-function-example.png)


# sumIf


Based on a conditional statement, the `sumIf` function adds the set of numbers in the specified measure, grouped by the chosen dimension or dimensions. For example, `sumIf(ProdRev,CalendarDay >= ${BasePeriodStartDate} AND CalendarDay <= ${BasePeriodEndDate} AND SourcingType <> 'Indirect')` returns the total profit amount grouped by the (optional) chosen dimension, if the condition evaluates to true.

## Syntax


```
sumIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure-type field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.

## Examples


The following example uses a calculated field with `sumIf` to display the sales amount if `Segment` is equal to `SMB`.

```
sumIf(Sales, Segment='SMB')
```

![\[This is an image of the results from the example calculation.\]](http://docs.amazonaws.cn/en_us/quick/latest/userguide/images/sumIfCalc.png)


The following example uses a calculated field with `sumIf` to display the sales amount if `Segment` is equal to `SMB` and `Order Date` is on or after January 1, 2022.

```
sumIf(Sales, Segment='SMB' AND {Order Date} >= '2022-01-01')
```

# var


The `var` function calculates the sample variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.

## Syntax


```
var(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure-type field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the variance of a sample of test scores.

```
var({Scores})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example returns the variance of a sample of test scores at the subject level, but not across other dimensions (Class) in the visual.

```
var({Scores}, [Subject])
```

# varIf


Based on a conditional statement, the `varIf` function calculates the sample variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.

## Syntax


```
varIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure-type field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.
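## Example

The following hypothetical calculated field (the `Sales` and `Segment` field names are illustrative) returns the sample variance of sales where the segment is `SMB`.

```
varIf(Sales, Segment='SMB')
```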

# varp


The `varp` function calculates the population variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.

## Syntax


```
varp(measure, [group-by level])
```

## Arguments


 *measure*   
The argument must be a measure-type field. Null values are omitted from the results. Literal values don't work.

 *group-by level*   
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.  
The argument must be a dimension field. The group-by level must be enclosed in square brackets `[ ]`. For more information, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html).

## Examples


The following example returns the variance of a population of test scores.

```
varp({Scores})
```

You can also specify at what level to group the computation using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information about LAC-A functions, see [Level-aware calculation - aggregate (LAC-A) functions](https://docs.amazonaws.cn/quicksight/latest/user/level-aware-calculations-aggregate.html). The following example returns the population variance of test scores at the subject level, but not across other dimensions (Class) in the visual.

```
varp({Scores}, [Subject])
```
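As with `stdev` and `stdevp`, the difference between `var` (sample) and `varp` (population) is the n − 1 versus n denominator. A quick illustration using Python's standard library (not Quick syntax, made-up scores):

```python
import statistics

scores = [80, 85, 90, 95]

# Sample variance (n - 1 denominator), analogous to var
print(statistics.variance(scores))   # ≈ 41.67

# Population variance (n denominator), analogous to varp
print(statistics.pvariance(scores))  # 31.25
```
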

# varpIf


Based on a conditional statement, the `varpIf` function calculates the population variance of the set of numbers in the specified measure, grouped by the chosen dimension or dimensions.

## Syntax


```
varpIf(measure, conditions)
```

## Arguments


 *measure*   
The argument must be a measure-type field. Null values are omitted from the results. Literal values don't work.

 *condition*   
One or more conditions in a single statement.
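## Example

The following hypothetical calculated field (the field names are illustrative) returns the population variance of sales where the segment is `SMB` and the order date is on or after January 1, 2022.

```
varpIf(Sales, Segment='SMB' AND {Order Date} >= '2022-01-01')
```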