

# Develop and run your own IDT test suites


<a name="idt-byotc-idt"></a>Starting in IDT v4.0.0, IDT for FreeRTOS combines a standardized configuration setup and result format with a test suite environment that enables you to develop custom test suites for your devices and device software. You can add custom tests for your own internal validation or provide them to your customers for device verification.

Use IDT to develop and run custom test suites, as follows:

**To develop custom test suites**  
+ Create test suites with custom test logic for the device that you want to test.
+ Provide your custom test suites to test runners, along with information about the specific settings configurations required for your test suites.

**To run custom test suites**  
+ Set up the device that you want to test.
+ Implement the settings configurations as required by the test suites that you want to use.
+ Use IDT to run your custom test suites.
+ View the test results and execution logs for the tests run by IDT.

## Download the latest version of Amazon IoT Device Tester for FreeRTOS

Download the [latest version](dev-test-versions-afr.md#idt-latest-version-afr) of IDT and extract the software into a location on your file system where you have read and write permissions. 

**Note**  
<a name="unzip-package-to-local-drive"></a>IDT does not support being run by multiple users from a shared location, such as an NFS directory or a Windows network shared folder. We recommend that you extract the IDT package to a local drive and run the IDT binary on your local workstation.  
Windows has a path length limitation of 260 characters. If you are using Windows, extract IDT to a root directory like `C:\ ` or `D:\` to keep your paths under the 260 character limit.

## Test suite workflow


Test suites are composed of three types of files:
+ Configuration files that provide IDT with information on how to execute the test suite.
+ Test executable files that IDT uses to run test cases.
+ Additional files required to run tests.

Complete the following basic steps to create custom IDT tests:

1. [Create configuration files](idt-json-config.md) for your test suite.

1. [Create test case executables](test-executables.md) that contain the test logic for your test suite. 

1. Verify and document the [configuration information required for test runners](set-config-custom.md) to run the test suite.

1. Verify that IDT can run your test suite and produce [test results](run-tests-custom.md) as expected.

To quickly build a sample custom suite and run it, follow the instructions in [Tutorial: Build and run the sample IDT test suite](build-sample-suite.md). 

To get started creating a custom test suite in Python, see [Tutorial: Develop a simple IDT test suite](create-custom-tests.md).

# Tutorial: Build and run the sample IDT test suite


The Amazon IoT Device Tester download includes the source code for a sample test suite. You can complete this tutorial to build and run the sample test suite to understand how you can use Amazon IoT Device Tester for FreeRTOS to run custom test suites. Although this tutorial uses SSH rather than a FreeRTOS device, it is useful for learning how to use Amazon IoT Device Tester with FreeRTOS devices.

 In this tutorial, you will complete the following steps: 

1. [Build the sample test suite](build-sample.md)

1. [Use IDT to run the sample test suite](run-sample.md)

**Topics**
+ [Set up the prerequisites for the sample test suite](prereqs-tutorial-sample.md)
+ [Configure device information for IDT](configure-idt-sample.md)
+ [Build the sample test suite](build-sample.md)
+ [Use IDT to run the sample test suite](run-sample.md)
+ [Troubleshoot errors](tutorial-troubleshooting-custom.md)

# Set up the prerequisites for the sample test suite


To complete this tutorial, you need the following: 
+ **Host computer requirements**
  + Latest version of Amazon IoT Device Tester
  + [Python](https://docs.python.org/3/) 3.7 or later

    To check the version of Python installed on your computer, run the following command:

    ```
    python3 --version
    ```

    On Windows, if this command returns an error, use `python --version` instead. If the returned version number is 3.7 or greater, run the following command in a PowerShell terminal to set `python3` as an alias for your `python` command. 

    ```
    Set-Alias -Name "python3" -Value "python"
    ```

    If no version information is returned or if the version number is less than 3.7, follow the instructions in [Downloading Python](https://wiki.python.org/moin/BeginnersGuide/Download) to install Python 3.7 or later. For more information, see the [Python documentation](https://docs.python.org/3/).
  + [urllib3](https://urllib3.readthedocs.io/en/latest/)

    To verify that `urllib3` is installed correctly, run the following command:

    ```
    python3 -c 'import urllib3'
    ```

    If `urllib3` is not installed, run the following command to install it:

    ```
    python3 -m pip install urllib3
    ```
+ **Device requirements**
  + A device with a Linux operating system and a network connection to the same network as your host computer. 

    We recommend that you use a [Raspberry Pi](https://www.raspberrypi.org/) with Raspberry Pi OS. Make sure you set up [SSH](https://www.raspberrypi.com/documentation/computers/remote-access.html) on your Raspberry Pi to remotely connect to it.

# Configure device information for IDT


Configure your device information for IDT to run the test. You must update the `device.json` template located in the `<device-tester-extract-location>/configs` folder with the following information.

```
[
  {
    "id": "pool",
    "sku": "N/A",
    "devices": [
      {
        "id": "<device-id>",
        "connectivity": {
          "protocol": "ssh",
          "ip": "<ip-address>",
          "port": "<port>",
          "auth": {
            "method": "pki | password",
            "credentials": {
              "user": "<user-name>",
              "privKeyPath": "/path/to/private/key",
              "password": "<password>"
            }
          }
        }
      }
    ]
  }
]
```

In the `devices` object, provide the following information:

**`id`**  
A user-defined unique identifier for your device.

**`connectivity.ip`**  
The IP address of your device.

**`connectivity.port`**  
Optional. The port number to use for SSH connections to your device.

**`connectivity.auth`**  
Authentication information for the connection.  
This property applies only if `connectivity.protocol` is set to `ssh`.    
**`connectivity.auth.method`**  
The authentication method used to access a device over the given connectivity protocol.  
Supported values are:  
+ `pki`
+ `password`  
**`connectivity.auth.credentials`**  
The credentials used for authentication.    
**`connectivity.auth.credentials.user`**  
The user name used to sign in to your device.  
**`connectivity.auth.credentials.privKeyPath`**  
The full path to the private key used to sign in to your device.  
This value applies only if `connectivity.auth.method` is set to `pki`.  
**`connectivity.auth.credentials.password`**  
The password used for signing in to your device.  
This value applies only if `connectivity.auth.method` is set to `password`.

**Note**  
Specify `privKeyPath` only if `method` is set to `pki`.  
Specify `password` only if `method` is set to `password`.
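As a concrete illustration, a filled-in `device.json` for a Raspberry Pi reachable over SSH with key-based (`pki`) authentication might look like the following. The device ID, IP address, user name, and key path are placeholder values; substitute your own, and note that `password` is omitted because `method` is `pki`:

```
[
  {
    "id": "pool",
    "sku": "N/A",
    "devices": [
      {
        "id": "raspberry-pi-1",
        "connectivity": {
          "protocol": "ssh",
          "ip": "192.168.1.42",
          "port": "22",
          "auth": {
            "method": "pki",
            "credentials": {
              "user": "pi",
              "privKeyPath": "/home/me/.ssh/id_rsa"
            }
          }
        }
      }
    ]
  }
]
```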

# Build the sample test suite


The `<device-tester-extract-location>/samples/python` folder contains sample configuration files, source code, and the IDT Client SDK that you can combine into a test suite using the provided build scripts. The following directory tree shows the location of these sample files:

```
<device-tester-extract-location>
├── ...
├── tests
├── samples
│   ├── ...
│   └── python
│       ├── configuration
│       ├── src
│       └── build-scripts
│           ├── build.sh
│           └── build.ps1
└── sdks
    ├── ...
    └── python
        └── idt_client
```

To build the test suite, run the following commands on your host computer:

------
#### [ Windows ]

```
cd <device-tester-extract-location>/samples/python/build-scripts
./build.ps1
```

------
#### [ Linux, macOS, or UNIX ]

```
cd <device-tester-extract-location>/samples/python/build-scripts
./build.sh
```

------

This creates the sample test suite in the `IDTSampleSuitePython_1.0.0` folder within the `<device-tester-extract-location>/tests` folder. Review the files in the `IDTSampleSuitePython_1.0.0` folder to understand how the sample test suite is structured and to see various examples of test case executables and test configuration files. 

**Note**  
The sample test suite includes Python source code. Do not include sensitive information in your test suite code.

Next step: Use IDT to [run the sample test suite](run-sample.md) that you created.

# Use IDT to run the sample test suite


To run the sample test suite, run the following commands on your host computer: 

```
cd <device-tester-extract-location>/bin
./devicetester_[linux | mac | win_x86-64] run-suite --suite-id IDTSampleSuitePython
```

IDT runs the sample test suite and streams the results to the console. When the test has finished running, you see the following information:

```
========== Test Summary ==========
Execution Time:         5s
Tests Completed:        4
Tests Passed:           4
Tests Failed:           0
Tests Skipped:          0
----------------------------------
Test Groups:
    sample_group:       PASSED
----------------------------------
Path to Amazon IoT Device Tester Report: /path/to/devicetester/results/87e673c6-1226-11eb-9269-8c8590419f30/awsiotdevicetester_report.xml
Path to Test Execution Logs: /path/to/devicetester/results/87e673c6-1226-11eb-9269-8c8590419f30/logs
Path to Aggregated JUnit Report: /path/to/devicetester/results/87e673c6-1226-11eb-9269-8c8590419f30/IDTSampleSuitePython_Report.xml
```
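The aggregated JUnit report listed in the summary is standard XML, so you can post-process it with ordinary tooling. The following is a minimal sketch using Python's standard library; it assumes the common JUnit attribute names (`tests`, `failures`), which IDT's report may extend:

```python
import xml.etree.ElementTree as ET

def summarize_report(xml_text):
    """Return (tests, failures) counts from a JUnit-style report."""
    root = ET.fromstring(xml_text)
    # The testsuite element may be the root or nested under <testsuites>.
    suite = root if root.tag == "testsuite" else root.find("testsuite")
    return int(suite.get("tests", 0)), int(suite.get("failures", 0))

sample = '<testsuites><testsuite name="sample_group" tests="4" failures="0"/></testsuites>'
print(summarize_report(sample))  # (4, 0)
```

For an IDT run, you would pass the contents of the `IDTSampleSuitePython_Report.xml` file shown in the console summary.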

# Troubleshoot errors


Use the following information to help resolve any issues with completing the tutorial.

**Test case does not run successfully**
+ If the test does not run successfully, IDT streams error logs to the console to help you troubleshoot the test run. Make sure that you meet all the [prerequisites](prereqs-tutorial-sample.md) for this tutorial. 

**Cannot connect to the device under test**

Verify the following:
+ Your `device.json` file contains the correct IP address, port, and authentication information.
+ You can connect to your device over SSH from your host computer.

# Tutorial: Develop a simple IDT test suite


A test suite combines the following:
+ Test executables that contain the test logic
+ Configuration files that describe the test suite

This tutorial shows you how to use IDT for FreeRTOS to develop a Python test suite that contains a single test case. Although this tutorial uses SSH rather than a FreeRTOS device, it is useful for learning how to use Amazon IoT Device Tester with FreeRTOS devices.

In this tutorial, you will complete the following steps: 

1. [Create a test suite directory](test-suite-dir.md)

1. [Create configuration files](test-suite-json.md)

1. [Create the test case executable](test-suite-exe.md)

1. [Run the test suite](run-test-suite.md)

Follow the steps below to complete a tutorial for developing a simple IDT test suite.

**Topics**
+ [Set up the prerequisites for a simple IDT test suite](prereqs-tutorial-custom.md)
+ [Create a test suite directory](test-suite-dir.md)
+ [Create configuration files](test-suite-json.md)
+ [Get the IDT client SDK](add-idt-sdk.md)
+ [Create the test case executable](test-suite-exe.md)
+ [Configure device information for IDT](configure-idt-sample2.md)
+ [Run the test suite](run-test-suite.md)
+ [Troubleshoot errors](tutorial-troubleshooting.md)
+ [Create IDT test suite configuration files](idt-json-config.md)
+ [Configure the IDT test orchestrator](idt-test-orchestrator.md)
+ [Configure the IDT state machine](idt-state-machine.md)
+ [Create IDT test case executable](test-executables.md)
+ [Use the IDT context](idt-context.md)
+ [Configure settings for test runners](set-config-custom.md)
+ [Debug and run custom test suites](run-tests-custom.md)
+ [Review IDT test results and logs](idt-review-results-logs.md)
+ [Submit IDT usage metrics](idt-usage-metrics.md)

# Set up the prerequisites for a simple IDT test suite


To complete this tutorial, you need the following: 
+ **Host computer requirements**
  + Latest version of Amazon IoT Device Tester
  + [Python](https://www.python.org/downloads/) 3.7 or later

    To check the version of Python installed on your computer, run the following command:

    ```
    python3 --version
    ```

    On Windows, if this command returns an error, use `python --version` instead. If the returned version number is 3.7 or greater, run the following command in a PowerShell terminal to set `python3` as an alias for your `python` command. 

    ```
    Set-Alias -Name "python3" -Value "python"
    ```

    If no version information is returned or if the version number is less than 3.7, follow the instructions in [Downloading Python](https://wiki.python.org/moin/BeginnersGuide/Download) to install Python 3.7 or later. For more information, see the [Python documentation](https://docs.python.org/3/).
  + [urllib3](https://urllib3.readthedocs.io/en/latest/)

    To verify that `urllib3` is installed correctly, run the following command:

    ```
    python3 -c 'import urllib3'
    ```

    If `urllib3` is not installed, run the following command to install it:

    ```
    python3 -m pip install urllib3
    ```
+ **Device requirements**
  + A device with a Linux operating system and a network connection to the same network as your host computer. 

    We recommend that you use a [Raspberry Pi](https://www.raspberrypi.org/) with Raspberry Pi OS. Make sure you set up [SSH](https://www.raspberrypi.com/documentation/computers/remote-access.html) on your Raspberry Pi to remotely connect to it.

# Create a test suite directory


IDT logically separates test cases into test groups within each test suite. Each test case must be inside a test group. For this tutorial, create a folder called `MyTestSuite_1.0.0` and create the following directory tree within this folder:

```
MyTestSuite_1.0.0
└── suite
    └── myTestGroup
        └── myTestCase
```
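On Linux or macOS, you can create this directory tree with a single command:

```shell
mkdir -p MyTestSuite_1.0.0/suite/myTestGroup/myTestCase
```

The `-p` flag creates all intermediate directories in one step.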

# Create configuration files


Your test suite must contain the following required [configuration files](idt-json-config.md):

**Required files**

**`suite.json`**  
Contains information about the test suite. See [Configure suite.json](idt-json-config.md#suite-json).

**`group.json`**  
Contains information about a test group. You must create a `group.json` file for each test group in your test suite. See [Configure group.json](idt-json-config.md#group-json).

**`test.json`**  
Contains information about a test case. You must create a `test.json` file for each test case in your test suite. See [Configure test.json](idt-json-config.md#test-json).

1. In the `MyTestSuite_1.0.0/suite` folder, create a `suite.json` file with the following structure:

   ```
   {
       "id": "MyTestSuite_1.0.0",
       "title": "My Test Suite",
       "details": "This is my test suite.",
       "userDataRequired": false
   }
   ```

1. In the `MyTestSuite_1.0.0/suite/myTestGroup` folder, create a `group.json` file with the following structure:

   ```
   {
       "id": "MyTestGroup",
       "title": "My Test Group",
       "details": "This is my test group.",
       "optional": false
   }
   ```

1. In the `MyTestSuite_1.0.0/suite/myTestGroup/myTestCase` folder, create a `test.json` file with the following structure:

   ```
   {
       "id": "MyTestCase",
       "title": "My Test Case",
       "details": "This is my test case.",
       "execution": {
           "timeout": 300000,
           "linux": {
               "cmd": "python3",
               "args": [
                   "myTestCase.py"
               ]
           },
           "mac": {
               "cmd": "python3",
               "args": [
                   "myTestCase.py"
               ]
           },
           "win": {
               "cmd": "python3",
               "args": [
                   "myTestCase.py"
               ]
           }
       }
   }
   ```

The directory tree for your `MyTestSuite_1.0.0` folder should now look like the following:

```
MyTestSuite_1.0.0
└── suite
    ├── suite.json
    └── myTestGroup
        ├── group.json
        └── myTestCase
            └── test.json
```
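Because IDT reads these files as JSON, a stray comma or quote will stop the suite from loading. Before running IDT, you can catch such syntax errors with a quick standard-library check (the function name and folder argument are illustrative):

```python
import json
import pathlib

def check_json_files(root):
    """Return (path, error) pairs for JSON files under root that fail to parse."""
    errors = []
    for path in sorted(pathlib.Path(root).rglob("*.json")):
        try:
            json.loads(path.read_text())
        except json.JSONDecodeError as err:
            errors.append((str(path), str(err)))
    return errors
```

For example, `check_json_files("MyTestSuite_1.0.0")` returns an empty list when every `suite.json`, `group.json`, and `test.json` parses cleanly.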

# Get the IDT client SDK


You use the [IDT client SDK](test-executables.md#idt-client-sdk) to enable IDT to interact with the device under test and to report test results. For this tutorial, you will use the Python version of the SDK. 

From the `<device-tester-extract-location>/sdks/python/` folder, copy the `idt_client` folder to your `MyTestSuite_1.0.0/suite/myTestGroup/myTestCase` folder. 

To verify that the SDK was successfully copied, run the following command.

```
cd MyTestSuite_1.0.0/suite/myTestGroup/myTestCase
python3 -c 'import idt_client'
```

# Create the test case executable


Test case executables contain the test logic that you want to run. A test suite can contain multiple test case executables. For this tutorial, you will create only one test case executable.

1. Create the test case file.

   In the `MyTestSuite_1.0.0/suite/myTestGroup/myTestCase` folder, create a `myTestCase.py` file with the following content:

   ```
   from idt_client import *
   
   def main():
       # Use the client SDK to communicate with IDT
       client = Client()
   
   if __name__ == "__main__":
       main()
   ```

1. Use client SDK functions to add the following test logic to your `myTestCase.py` file:

   1. Run an SSH command on the device under test.

      ```
      from idt_client import *
      
      def main():
          # Use the client SDK to communicate with IDT
          client = Client()
          
          # Create an execute on device request
          exec_req = ExecuteOnDeviceRequest(ExecuteOnDeviceCommand("echo 'hello world'"))
          
          # Run the command
          exec_resp = client.execute_on_device(exec_req)
          
          # Print the standard output
          print(exec_resp.stdout)
      
      if __name__ == "__main__":
          main()
      ```

   1. Send the test result to IDT.

      ```
      from idt_client import *
      
      def main():
          # Use the client SDK to communicate with IDT
          client = Client()
          
          # Create an execute on device request
          exec_req = ExecuteOnDeviceRequest(ExecuteOnDeviceCommand("echo 'hello world'"))
          
          # Run the command
          exec_resp = client.execute_on_device(exec_req)
          
          # Print the standard output
          print(exec_resp.stdout)
      
          # Create a send result request
          sr_req = SendResultRequest(TestResult(passed=True))
           
          # Send the result
          client.send_result(sr_req)
             
      if __name__ == "__main__":
          main()
      ```
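In this tutorial the test always reports `passed=True`, but a real test case usually derives the result from the command output. The following is a minimal, SDK-free sketch of such a check; the helper name is our own, not part of the IDT client SDK, and its return value is what you would pass to `TestResult(passed=...)`:

```python
def evaluate_output(stdout, expected):
    """Return True if the device command output matches the expected string."""
    # Output from a remote command often arrives as bytes with a trailing newline.
    if isinstance(stdout, bytes):
        stdout = stdout.decode("utf-8")
    return stdout.strip() == expected

print(evaluate_output(b"hello world\n", "hello world"))  # True
print(evaluate_output(b"goodbye\n", "hello world"))      # False
```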

# Configure device information for IDT


Configure your device information for IDT to run the test. You must update the `device.json` template located in the `<device-tester-extract-location>/configs` folder with the following information.

```
[
  {
    "id": "pool",
    "sku": "N/A",
    "devices": [
      {
        "id": "<device-id>",
        "connectivity": {
          "protocol": "ssh",
          "ip": "<ip-address>",
          "port": "<port>",
          "auth": {
            "method": "pki | password",
            "credentials": {
              "user": "<user-name>",
              "privKeyPath": "/path/to/private/key",
              "password": "<password>"
            }
          }
        }
      }
    ]
  }
]
```

In the `devices` object, provide the following information:

**`id`**  
A user-defined unique identifier for your device.

**`connectivity.ip`**  
The IP address of your device.

**`connectivity.port`**  
Optional. The port number to use for SSH connections to your device.

**`connectivity.auth`**  
Authentication information for the connection.  
This property applies only if `connectivity.protocol` is set to `ssh`.    
**`connectivity.auth.method`**  
The authentication method used to access a device over the given connectivity protocol.  
Supported values are:  
+ `pki`
+ `password`  
**`connectivity.auth.credentials`**  
The credentials used for authentication.    
**`connectivity.auth.credentials.user`**  
The user name used to sign in to your device.  
**`connectivity.auth.credentials.privKeyPath`**  
The full path to the private key used to sign in to your device.  
This value applies only if `connectivity.auth.method` is set to `pki`.  
**`connectivity.auth.credentials.password`**  
The password used for signing in to your device.  
This value applies only if `connectivity.auth.method` is set to `password`.

**Note**  
Specify `privKeyPath` only if `method` is set to `pki`.  
Specify `password` only if `method` is set to `password`.

# Run the test suite


After you create your test suite, make sure that it functions as expected. To do so, complete the following steps to run the test suite with your existing device pool.

1. Copy your `MyTestSuite_1.0.0` folder into `<device-tester-extract-location>/tests`.

1. Run the following commands:

   ```
   cd <device-tester-extract-location>/bin
   ./devicetester_[linux | mac | win_x86-64] run-suite --suite-id MyTestSuite
   ```

IDT runs your test suite and streams the results to the console. When the test has finished running, you see the following information:

```
time="2020-10-19T09:24:47-07:00" level=info msg=Using pool: pool
time="2020-10-19T09:24:47-07:00" level=info msg=Using test suite "MyTestSuite_1.0.0" for execution
time="2020-10-19T09:24:47-07:00" level=info msg=b'hello world\n' suiteId=MyTestSuite groupId=myTestGroup testCaseId=myTestCase deviceId=my-device executionId=9a52f362-1227-11eb-86c9-8c8590419f30
time="2020-10-19T09:24:47-07:00" level=info msg=All tests finished. executionId=9a52f362-1227-11eb-86c9-8c8590419f30
time="2020-10-19T09:24:48-07:00" level=info msg=

========== Test Summary ==========
Execution Time:         1s
Tests Completed:        1
Tests Passed:           1
Tests Failed:           0
Tests Skipped:          0
----------------------------------
Test Groups:
    myTestGroup:        PASSED
----------------------------------
Path to Amazon IoT Device Tester Report: /path/to/devicetester/results/9a52f362-1227-11eb-86c9-8c8590419f30/awsiotdevicetester_report.xml
Path to Test Execution Logs: /path/to/devicetester/results/9a52f362-1227-11eb-86c9-8c8590419f30/logs
Path to Aggregated JUnit Report: /path/to/devicetester/results/9a52f362-1227-11eb-86c9-8c8590419f30/MyTestSuite_Report.xml
```

# Troubleshoot errors


Use the following information to help resolve any issues with completing the tutorial.

**Test case does not run successfully**

If the test does not run successfully, IDT streams error logs to the console to help you troubleshoot the test run. Before you check the error logs, verify the following:
+ The IDT client SDK is in the correct folder as described in [Get the IDT client SDK](add-idt-sdk.md).
+ You meet all the prerequisites for this tutorial. For more information, see [Set up the prerequisites for a simple IDT test suite](prereqs-tutorial-custom.md).

**Cannot connect to the device under test**

Verify the following:
+ Your `device.json` file contains the correct IP address, port, and authentication information.
+ You can connect to your device over SSH from your host computer.

# Create IDT test suite configuration files


This section describes the formats in which you create configuration files that you include when you write a custom test suite.

**Required configuration files**

**`suite.json`**  
Contains information about the test suite. See [Configure suite.json](#suite-json).

**`group.json`**  
Contains information about a test group. You must create a `group.json` file for each test group in your test suite. See [Configure group.json](#group-json).

**`test.json`**  
Contains information about a test case. You must create a `test.json` file for each test case in your test suite. See [Configure test.json](#test-json).

**Optional configuration files**

**`test_orchestrator.yaml` or `state_machine.json`**  
Defines how tests are run when IDT runs the test suite. See [Configure test_orchestrator.yaml](#test-orchestrator-config).  
Starting in IDT v4.5.2, you use the `test_orchestrator.yaml` file to define the test workflow. In previous versions of IDT, you use the `state_machine.json` file. For information about the state machine, see [Configure the IDT state machine](idt-state-machine.md).

**`userdata_schema.json`**  
Defines the schema for the [`userdata.json` file](set-config-custom.md#userdata-config-custom) that test runners can include in their setting configuration. The `userdata.json` file is used for any additional configuration information that is required to run the test but is not present in the `device.json` file. See [Configure userdata_schema.json](#userdata-schema-json).

Configuration files are placed in your `<custom-test-suite-folder>` as shown here.

```
<custom-test-suite-folder>
└── suite
    ├── suite.json
    ├── test_orchestrator.yaml
    ├── userdata_schema.json
    └── <test-group-folder>
        ├── group.json
        └── <test-case-folder>
            └── test.json
```

## Configure suite.json


The `suite.json` file sets environment variables and determines whether user data is required to run the test suite. Use the following template to configure your `<custom-test-suite-folder>/suite/suite.json` file: 

```
{
    "id": "<suite-name>_<suite-version>",
    "title": "<suite-title>",
    "details": "<suite-details>",
    "userDataRequired": true | false,
    "environmentVariables": [
        {
            "key": "<name>",
            "value": "<value>",
        },
        ...
        {
            "key": "<name>",
            "value": "<value>",
        }
    ]
}
```

All fields that contain values are required as described here:

**`id`**  
A unique user-defined ID for the test suite. The value of `id` must match the name of the test suite folder in which the `suite.json` file is located. The suite name and suite version must also meet the following requirements:   
+ `<suite-name>` cannot contain underscores.
+ `<suite-version>` is denoted as `x.x.x`, where `x` is a number.
The ID is shown in IDT-generated test reports.

**`title`**  
A user-defined name for the product or feature being tested by this test suite. The name is displayed in the IDT CLI for test runners.

**`details`**  
A short description of the purpose of the test suite.

**`userDataRequired`**  
Defines whether test runners need to include custom information in a `userdata.json` file. If you set this value to `true`, you must also include the [`userdata_schema.json` file](#userdata-schema-json) in your test suite folder.

**`environmentVariables`**  
Optional. An array of environment variables to set for this test suite.    
**`environmentVariables.key`**  
The name of the environment variable.  
**`environmentVariables.value`**  
The value of the environment variable.
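The naming rules for `id` above can be checked mechanically. The following regex is our own reading of those rules (no underscores in the suite name, a three-part numeric version joined by a single underscore), not an official IDT validator:

```python
import re

# <suite-name> (no underscores) + "_" + <suite-version> in x.x.x numeric form
SUITE_ID_PATTERN = re.compile(r"^[^_]+_\d+\.\d+\.\d+$")

def is_valid_suite_id(suite_id):
    """Check a suite.json id against the documented naming rules."""
    return bool(SUITE_ID_PATTERN.match(suite_id))

print(is_valid_suite_id("MyTestSuite_1.0.0"))    # True
print(is_valid_suite_id("My_Test_Suite_1.0.0"))  # False: name contains underscores
```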

## Configure group.json


The `group.json` file defines whether a test group is required or optional. Use the following template to configure your `<custom-test-suite-folder>/suite/<test-group>/group.json` file: 

```
{
    "id": "<group-id>",
    "title": "<group-title>",
    "details": "<group-details>",
    "optional": true | false,
}
```

All fields that contain values are required as described here:

**`id`**  
A unique user-defined ID for the test group. The value of `id` must match the name of the test group folder in which the `group.json` file is located and should not have underscores (`_`). The ID is used in IDT-generated test reports.

**`title`**  
A descriptive name for the test group. The name is displayed in the IDT CLI for test runners.

**`details`**  
A short description of the purpose of the test group.

**`optional`**  
Optional. Set to `true` to display this test group as an optional group after IDT finishes running required tests. Default value is `false`.

## Configure test.json


The `test.json` file determines the test case executables and the environment variables that are used by a test case. For more information about creating test case executables, see [Create IDT test case executable](test-executables.md).

Use the following template to configure your `<custom-test-suite-folder>/suite/<test-group>/<test-case>/test.json` file: 

```
{
    "id": "<test-id>",
    "title": "<test-title>",
    "details": "<test-details>",
    "requireDUT": true | false,
    "requiredResources": [
        {
            "name": "<resource-name>",
            "features": [
                {
                    "name": "<feature-name>",
                    "version": "<feature-version>",
                    "jobSlots": <job-slots>
                }
            ]
        }
    ],
    "execution": {
        "timeout": <timeout>,
        "mac": {
            "cmd": "/path/to/executable",
            "args": [
                "<argument>"
            ]
        },
        "linux": {
            "cmd": "/path/to/executable",
            "args": [
                "<argument>"
            ]
        },
        "win": {
            "cmd": "/path/to/executable",
            "args": [
                "<argument>"
            ]
        }
    },
    "environmentVariables": [
        {
            "key": "<name>",
            "value": "<value>",
        }
    ]
}
```

All fields that contain values are required as described here:

**`id`**  
A unique user-defined ID for the test case. The value of `id` must match the name of the test case folder in which the `test.json` file is located and should not have underscores (`_`). The ID is used in IDT-generated test reports.

**`title`**  
A descriptive name for the test case. The name is displayed in the IDT CLI for test runners.

**`details`**  
A short description of the purpose of the test case.

**`requireDUT`**  
Optional. Set to `true` if a device is required to run this test, otherwise set to `false`. Default value is `true`. Test runners will configure the devices they will use to run the test in their `device.json` file.

**`requiredResources`**  
Optional. An array that provides information about resource devices needed to run this test.     
**`requiredResources.name`**  
The unique name to give the resource device when this test is running.  
**`requiredResources.features`**  
An array of user-defined resource device features.     
**`requiredResources.features.name`**  
The name of the device feature for which you want to use this device. This name is matched against the feature name provided by the test runner in the `resource.json` file.  
**`requiredResources.features.version`**  
Optional. The version of the feature. This value is matched against the feature version provided by the test runner in the `resource.json` file. If a version is not provided, then the feature is not checked. If a version number is not required for the feature, leave this field blank.  
**`requiredResources.features.jobSlots`**  
Optional. The number of simultaneous tests that this feature can support. The default value is `1`. If you want IDT to use distinct devices for individual features, then we recommend that you set this value to `1`.

**`execution.timeout`**  
The amount of time (in milliseconds) that IDT waits for the test to finish running. For more information about setting this value, see [Create IDT test case executable](test-executables.md).

**`execution.os`**  
The test case executables to run based on the operating system of the host computer that runs IDT. Supported values are `linux`, `mac`, and `win`.     
**`execution.os.cmd`**  
The path to the test case executable that you want to run for the specified operating system. This location must be in the system path.  
**`execution.os.args`**  
Optional. The arguments to provide to run the test case executable.

**`environmentVariables`**  
Optional. An array of environment variables set for this test case.     
**`environmentVariables.key`**  
The name of the environment variable.  
**`environmentVariables.value`**  
The value of the environment variable.
If you specify the same environment variable in the `test.json` file and in the `suite.json` file, the value in the `test.json` file takes precedence. 
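Putting these fields together, a complete `test.json` file might look like the following sketch. The test case ID, executable paths, argument values, and resource feature names are placeholders for illustration; the field names follow the descriptions above.

```
{
    "id": "mytestcase",
    "title": "My Test Case",
    "details": "Verifies that the device can connect over TCP.",
    "requireDUT": true,
    "requiredResources": [
        {
            "name": "pingDevice",
            "features": [
                {
                    "name": "ping",
                    "jobSlots": 1
                }
            ]
        }
    ],
    "execution": {
        "timeout": 300000,
        "linux": {
            "cmd": "/path/to/executable",
            "args": ["--verbose"]
        },
        "mac": {
            "cmd": "/path/to/executable",
            "args": ["--verbose"]
        },
        "win": {
            "cmd": "C:\\path\\to\\executable.exe",
            "args": ["--verbose"]
        }
    },
    "environmentVariables": [
        {
            "key": "TEST_TIMEOUT_MS",
            "value": "300000"
        }
    ]
}
```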

## Configure test_orchestrator.yaml


A test orchestrator is a construct that controls the test suite execution flow. It determines the starting state of a test suite, manages state transitions based on user-defined rules, and continues to transition through those states until it reaches the end state. 

If your test suite doesn't include a user-defined test orchestrator, IDT will generate a test orchestrator for you.

The default test orchestrator performs the following functions:
+ Provides test runners with the ability to select and run specific test groups, instead of the entire test suite.
+ If specific test groups are not selected, runs every test group in the test suite in a random order. 
+ Generates reports and prints a console summary that shows the test results for each test group and test case.

For more information about how the IDT test orchestrator functions, see [Configure the IDT test orchestrator](idt-test-orchestrator.md).

## Configure userdata_schema.json


The `userdata_schema.json` file determines the schema in which test runners provide user data. User data is required if your test suite requires information that is not present in the `device.json` file. For example, your tests might need Wi-Fi network credentials, specific open ports, or certificates that a user must provide. You provide this information to IDT through an input parameter called `userdata`, whose value is a `userdata.json` file that users create in their `<device-tester-extract-location>/config` folder. The format of the `userdata.json` file is based on the `userdata_schema.json` file that you include in the test suite.

To indicate that test runners must provide a `userdata.json` file:

1. In the `suite.json` file, set `userDataRequired` to `true`.

1. In your `<custom-test-suite-folder>`, create a `userdata_schema.json` file.

1. Edit the `userdata_schema.json` file to create a valid [IETF Draft v4 JSON Schema](https://json-schema.org/specification-links#draft-4).

When IDT runs your test suite, it automatically reads the schema and uses it to validate the `userdata.json` file provided by the test runner. If valid, the contents of the `userdata.json` file are available in both the [IDT context](idt-context.md) and in the [test orchestrator context](idt-test-orchestrator.md#idt-test-orchestrator-context).
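For example, a test suite that needs Wi-Fi credentials might ship a `userdata_schema.json` like the following sketch. The property names (`wifiSsid`, `wifiPassword`) are placeholders for illustration; define whichever properties your tests require.

```
{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "type": "object",
    "properties": {
        "wifiSsid": {
            "type": "string"
        },
        "wifiPassword": {
            "type": "string"
        }
    },
    "required": ["wifiSsid", "wifiPassword"]
}
```

A test runner would then provide a matching `userdata.json` file, such as `{ "wifiSsid": "MyNetwork", "wifiPassword": "secret" }`.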

# Configure the IDT test orchestrator


Starting in IDT v4.5.2, IDT includes a new *test orchestrator* component. The test orchestrator is an IDT component that controls the test suite execution flow, and generates the test report after IDT finishes running all tests. The test orchestrator determines test selection and the order in which tests are run based on user-defined rules.

If your test suite doesn't include a user-defined test orchestrator, IDT will generate a test orchestrator for you. 

The default test orchestrator performs the following functions:
+ Provides test runners with the ability to select and run specific test groups, instead of the entire test suite.
+ If specific test groups are not selected, runs every test group in the test suite in a random order. 
+ Generates reports and prints a console summary that shows the test results for each test group and test case.

The test orchestrator replaces the IDT state machine. We strongly recommend that you use the test orchestrator to develop your test suites instead of the IDT state machine. The test orchestrator provides the following improved features: 
+ Uses a declarative format compared to the imperative format that the IDT state machine uses. This allows you to specify which tests you want to run and when you want to run them. 
+ Manages specific group handling, report generation, error handling, and result tracking so that you aren't required to manually manage these actions. 
+ Uses the YAML format, which supports comments by default.
+ Requires 80 percent less disk space than the state machine to define the same workflow.
+ Adds pre-test validation to verify that your workflow definition doesn't contain incorrect test IDs or circular dependencies.

## Test orchestrator format


You can use the following template to configure your own `<custom-test-suite-folder>/suite/test_orchestrator.yaml` file: 

```
Aliases:
  string: context-expression

ConditionalTests:
  - Condition: context-expression
    Tests:
      - test-descriptor

Order:
  - - group-descriptor
    - group-descriptor

Features:
  - Name: feature-name
    Value: support-description
    Condition: context-expression
    Tests:
        - test-descriptor
    OneOfTests:
        - test-descriptor
    IsRequired: boolean
```

All fields that contain values are required as described here:

`Aliases`  
Optional. User-defined strings that map to context expressions. Aliases allow you to generate friendly names to identify context expressions in your test orchestrator configuration. This is especially useful if you're creating complex context expressions or expressions that you use in multiple places.  
You can use context expressions to store context queries that allow you to access data from other IDT configurations. For more information, see [Access data in the context](idt-context.md#accessing-context-data).  

**Example**  

```
Aliases:
    FizzChosen: "'{{$pool.features[?(@.name == 'Fizz')].value[0]}}' == 'yes'"    
    BuzzChosen: "'{{$pool.features[?(@.name == 'Buzz')].value[0]}}' == 'yes'"    
    FizzBuzzChosen: "'{{$aliases.FizzChosen}}' && '{{$aliases.BuzzChosen}}'"
```

`ConditionalTests`  
Optional. A list of conditions, and the corresponding test cases that are run when each condition is satisfied. Each condition can have multiple test cases; however, you can assign a given test case to only one condition.  
By default, IDT runs any test case that isn't assigned to a condition in this list. If you don't specify this section, IDT runs all test groups in the test suite.  
Each item in the `ConditionalTests` list includes the following parameters:    
`Condition`  
A context expression that evaluates to a Boolean value. If the evaluated value is true, IDT runs the test cases that are specified in the `Tests` parameter.  
`Tests`  
The list of test descriptors.   
Each test descriptor uses the test group ID and one or more test case IDs to identify the individual tests to run from a specific test group. The test descriptor uses the following format:  

```
GroupId: group-id
CaseIds: [test-id, test-id] # optional
```

**Example**  
The following example uses generic context expressions that you can define as `Aliases`.  

```
ConditionalTests:
    - Condition: "{{$aliases.Condition1}}"
      Tests:
          - GroupId: A
          - GroupId: B
    - Condition: "{{$aliases.Condition2}}"
      Tests:
          - GroupId: D
    - Condition: "{{$aliases.Condition1}} || {{$aliases.Condition2}}"
      Tests:
          - GroupId: C
```

Based on the defined conditions, IDT selects test groups as follows:
+ If `Condition1` is true, IDT runs the tests in test groups A, B, and C.
+ If `Condition2` is true, IDT runs the tests in test groups C and D.

`Order`  
Optional. The order in which to run tests. You specify the test order at the test group level. If you don't specify this section, IDT runs all applicable test groups in a random order. The value of `Order` is a list of group descriptor lists. Any test group that you don't list in `Order` can be run in parallel with any other listed test group.  

Each group descriptor list contains one or more group descriptors, and identifies the order in which to run the groups that are specified in each descriptor. You can use the following formats to define individual group descriptors:
+ `group-id`—The group ID of an existing test group.
+ `[group-id, group-id]`—List of test groups that can be run in any order relative to each other.
+ `"*"`—Wildcard. This is equivalent to the list of all test groups that are not already specified in the current group descriptor list.

The value for `Order` must also meet the following requirements:
+ Test group IDs that you specify in a group descriptor must exist in your test suite. 
+ Each group descriptor list must include at least one test group.
+ Each group descriptor list must contain unique group IDs. You cannot repeat a test group ID within individual group descriptors.
+ A group descriptor list can have at most one wildcard group descriptor. The wildcard group descriptor must be the first or the last item in the list.

**Example**  
For a test suite that contains test groups A, B, C, D, and E, the following list of examples shows different ways to specify that IDT should first run test group A, then run test group B, and then run test groups C, D, and E in any order.  
+ 

  ```
  Order:
      - - A
        - B
        - [C, D, E]
  ```
+ 

  ```
  Order:
      - - A
        - B
        - "*"
  ```
+ 

  ```
  Order:
      - - A
        - B
      
      - - B
        - C
      
      - - B
        - D
      
      - - B
        - E
  ```

`Features`  
Optional. The list of product features that you want IDT to add to the `awsiotdevicetester_report.xml` file. If you don't specify this section, IDT won't add any product features to the report.  
A product feature is user-defined information about specific criteria that a device might meet. For example, the MQTT product feature can designate that the device publishes MQTT messages properly. In `awsiotdevicetester_report.xml`, product features are set as `supported`, `not-supported`, or a custom user-defined value, based on whether specified tests passed.  
Each item in the `Features` list consists of the following parameters:    
`Name`  
The name of the feature.  
`Value`  
Optional. The custom value that you want to use in the report instead of `supported`. If this value is not specified, IDT sets the feature value to `supported` or `not-supported` based on test results. If you test the same feature with different conditions, you can use a custom value for each instance of that feature in the `Features` list, and IDT concatenates the feature values for supported conditions.  
`Condition`  
A context expression that evaluates to a Boolean value. If the evaluated value is true, IDT adds the feature to the test report after it finishes running the test suite. If the evaluated value is false, the feature is not included in the report.   
`Tests`  
Optional. The list of test descriptors. All of the tests that are specified in this list must pass for the feature to be supported.   
Each test descriptor in this list uses the test group ID and one or more test case IDs to identify the individual tests to run from a specific test group. The test descriptor uses the following format:  

```
GroupId: group-id
CaseIds: [test-id, test-id] # optional
```
You must specify either `Tests` or `OneOfTests` for each feature in the `Features` list.  
`OneOfTests`  
Optional. The list of test descriptors. At least one of the tests that are specified in this list must pass for the feature to be supported.  
Each test descriptor in this list uses the test group ID and one or more test case IDs to identify the individual tests to run from a specific test group. The test descriptor uses the following format:  

```
GroupId: group-id
CaseIds: [test-id, test-id] # optional
```
You must specify either `Tests` or `OneOfTests` for each feature in the `Features` list.  
`IsRequired`  
Optional. The Boolean value that defines whether the feature is required in the test report. The default value is `false`.
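As an illustration, the following `Features` entry reports an `MQTT` feature as supported only when a corresponding test group passes. The alias, group ID, and feature name are placeholders:

```
Features:
    - Name: MQTT
      Condition: "{{$aliases.MqttChosen}}"
      Tests:
          - GroupId: mqttgroup
      IsRequired: true
```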

## Test orchestrator context


The test orchestrator context is a read-only JSON document that contains data that is available to the test orchestrator during execution. The test orchestrator context is accessible only from the test orchestrator, and contains information that determines the test flow. For example, you can use information configured by test runners in the `userdata.json` file to determine whether a specific test is required to run.

The test orchestrator context uses the following format:

```
{
    "pool": {
        <device-json-pool-element>
    },
    "userData": {
        <userdata-json-content>
    },
    "config": {
        <config-json-content>
    }
}
```

`pool`  
Information about the device pool selected for the test run. For a selected device pool, this information is retrieved from the corresponding top-level device pool array element defined in the `device.json` file.

`userData`  
Information in the `userdata.json` file.

`config`  
Information in the `config.json` file.

You can query the context using JSONPath notation. The syntax for JSONPath queries in your test orchestrator configuration is `{{query}}`. When you access data from the test orchestrator context, make sure that each value evaluates to a string, a number, or a Boolean.

For more information about using JSONPath notation to access data from the context, see [Use the IDT context](idt-context.md).
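For example, a condition that checks a value supplied by test runners in their `userdata.json` file might look like the following sketch, where `otaEnabled` and `otagroup` are hypothetical names:

```
ConditionalTests:
    - Condition: "'{{$userData.otaEnabled}}' == 'yes'"
      Tests:
          - GroupId: otagroup
```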

# Configure the IDT state machine


**Important**  
Starting in IDT v4.5.2, this state machine is deprecated. We strongly recommend that you use the new test orchestrator. For more information, see [Configure the IDT test orchestrator](idt-test-orchestrator.md).

A state machine is a construct that controls the test suite execution flow. It determines the starting state of a test suite, manages state transitions based on user-defined rules, and continues to transition through those states until it reaches the end state. 

If your test suite doesn't include a user-defined state machine, IDT will generate a state machine for you. The default state machine performs the following functions:
+ Provides test runners with the ability to select and run specific test groups, instead of the entire test suite.
+ If specific test groups are not selected, runs every test group in the test suite in a random order. 
+ Generates reports and prints a console summary that shows the test results for each test group and test case.

The state machine for an IDT test suite must meet the following criteria:
+ Each state corresponds to an action for IDT to take, such as running a test group or producing a report file.
+ Transitioning to a state executes the action associated with the state.
+ Each state defines the transition rule for the next state.
+ The end state must be either `Succeed` or `Fail`.

## State machine format


You can use the following template to configure your own `<custom-test-suite-folder>/suite/state_machine.json` file: 

```
{
  "Comment": "<description>",
  "StartAt": "<state-name>",
  "States": {
    "<state-name>": {
      "Type": "<state-type>",
      // Additional state configuration
    },
    
    // Required states
    "Succeed": {
      "Type": "Succeed"
    },
    "Fail": {
      "Type": "Fail"
    }
  }
}
```

All fields that contain values are required as described here:

**`Comment`**  
A description of the state machine.

**`StartAt`**  
The name of the state at which IDT starts running the test suite. The value of `StartAt` must be set to one of the states listed in the `States` object.

**`States`**  
An object that maps user-defined state names to valid IDT states. Each States.*state-name* object contains the definition of a valid state mapped to the *state-name*.  
The `States` object must include the `Succeed` and `Fail` states. For information about valid states, see [Valid states and state definitions](#valid-states).

## Valid states and state definitions


This section describes the state definitions of all of the valid states that can be used in the IDT state machine. Some of the following states support configurations at the test case level. However, we recommend that you configure state transition rules at the test group level instead of the test case level unless absolutely necessary.

**Topics**
+ [RunTask](#state-runtask)
+ [Choice](#state-choice)
+ [Parallel](#state-parallel)
+ [AddProductFeatures](#state-addproductfeatures)
+ [Report](#state-report)
+ [LogMessage](#state-logmessage)
+ [SelectGroup](#state-selectgroup)
+ [Fail](#state-fail)
+ [Succeed](#state-succeed)

### RunTask


The `RunTask` state runs test cases from a test group defined in the test suite.

```
{
    "Type": "RunTask",
    "Next": "<state-name>",
    "TestGroup": "<group-id>",
    "TestCases": [
        "<test-id>"
    ],
    "ResultVar": "<result-name>"
}
```

All fields that contain values are required as described here:

**`Next`**  
The name of the state to transition to after executing the actions in the current state.

**`TestGroup`**  
Optional. The ID of the test group to run. If this value is not specified, then IDT runs the test group that the test runner selects.

**`TestCases`**  
Optional. An array of test case IDs from the group specified in `TestGroup`. Based on the values of `TestGroup` and `TestCases`, IDT determines the test execution behavior as follows:   
+ When both `TestGroup` and `TestCases` are specified, IDT runs the specified test cases from the test group. 
+ When `TestCases` are specified but `TestGroup` is not specified, IDT runs the specified test cases.
+ When `TestGroup` is specified, but `TestCases` is not specified, IDT runs all of the test cases within the specified test group.
+ When neither `TestGroup` nor `TestCases` is specified, IDT runs all test cases from the test group that the test runner selects from the IDT CLI. To enable group selection for test runners, you must include both `RunTask` and `Choice` states in your `state_machine.json` file. For an example of how this works, see [Example state machine: Run user-selected test groups](#allow-specific-groups).

  For more information about enabling IDT CLI commands for test runners, see [Enable IDT CLI commands](test-executables.md#idt-cli-coop).

**`ResultVar`**  
The name of the context variable to set with the results of the test run. Do not specify this value if you did not specify a value for `TestGroup`. IDT sets the value of the variable that you define in `ResultVar` to `true` or `false` based on the following:   
+ If the variable name is of the form `text_text_passed`, then the value is set to whether all tests in the first test group passed or were skipped.
+ In all other cases, the value is set to whether all tests in all test groups passed or were skipped.

Typically, you will use `RunTask` state to specify a test group ID without specifying individual test case IDs, so that IDT will run all of the test cases in the specified test group. All test cases that are run by this state run in parallel, in a random order. However, if all of the test cases require a device to run, and only a single device is available, then the test cases will run sequentially instead. 
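For example, the following sketch of a `RunTask` state runs every test case in a hypothetical `mqttgroup` test group and records the result in `mqttgroup_passed`; the state names are placeholders:

```
"RunMqttTests": {
    "Type": "RunTask",
    "Next": "GenerateReport",
    "TestGroup": "mqttgroup",
    "ResultVar": "mqttgroup_passed"
}
```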

**Error handling**

If any of the specified test groups or test case IDs are not valid, then this state issues the `RunTaskError` execution error. If the state encounters an execution error, then it also sets the `hasExecutionErrors` variable in the state machine context to `true`.

### Choice


The `Choice` state lets you dynamically set the next state to transition to based on user-defined conditions.

```
{
    "Type": "Choice",
    "Default": "<state-name>", 
    "FallthroughOnError": true | false,
    "Choices": [
        {
            "Expression": "<expression>",
            "Next": "<state-name>"
        }
    ]
}
```

All fields that contain values are required as described here:

**`Default`**  
The default state to transition to if none of the expressions defined in `Choices` can be evaluated to `true`.

**`FallthroughOnError`**  
Optional. Specifies the behavior when the state encounters an error in evaluating expressions. Set to `true` if you want to skip an expression if the evaluation results in an error. If no expressions match, then the state machine transitions to the `Default` state. If the `FallthroughOnError` value is not specified, it defaults to `false`. 

**`Choices`**  
An array of expressions and states to determine which state to transition to after executing the actions in the current state.    
**`Choices.Expression`**  
An expression string that evaluates to a boolean value. If the expression evaluates to `true`, then the state machine transitions to the state defined in `Choices.Next`. Expression strings retrieve values from the state machine context and then perform operations on them to arrive at a boolean value. For information about accessing the state machine context, see [State machine context](#state-machine-context).   
**`Choices.Next`**  
The name of the state to transition to if the expression defined in `Choices.Expression` evaluates to `true`.

**Error handling**

The `Choice` state can require error handling in the following cases: 
+ Some variables in the choice expressions don’t exist in the state machine context.
+ The result of an expression is not a boolean value.
+ The result of a JSON lookup is not a string, number, or boolean.

You cannot use a `Catch` block to handle errors in this state. If you want to stop executing the state machine when it encounters an error, you must set `FallthroughOnError` to `false`. However, we recommend that you set `FallthroughOnError` to `true`, and depending on your use case, do one of the following:
+ If a variable you are accessing is expected to not exist in some cases, then use the value of `Default` and additional `Choices` blocks to specify the next state.
+ If a variable that you are accessing should always exist, then set the `Default` state to `Fail`.
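The following sketch shows a `Choice` state that branches on a result variable set by a previous `RunTask` state; the state names and the `mqttgroup_passed` variable are placeholders:

```
"CheckMqttResult": {
    "Type": "Choice",
    "Default": "Fail",
    "FallthroughOnError": true,
    "Choices": [
        {
            "Expression": "{{$.mqttgroup_passed}} == true",
            "Next": "GenerateReport"
        }
    ]
}
```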

### Parallel


The `Parallel` state lets you define and run new state machines in parallel with each other.

```
{
    "Type": "Parallel",
    "Next": "<state-name>",
    "Branches": [
        <state-machine-definition>
    ]
}
```

All fields that contain values are required as described here:

**`Next`**  
The name of the state to transition to after executing the actions in the current state.

**`Branches`**  
An array of state machine definitions to run. Each state machine definition must contain its own `StartAt`, `Succeed`, and `Fail` states. The state machine definitions in this array cannot reference states outside of their own definition.   
Because each branch state machine shares the same state machine context, setting variables in one branch and then reading those variables from another branch might result in unexpected behavior.

The `Parallel` state moves to the next state only after it runs all of the branch state machines. Each state that requires a device will wait to run until the device is available. If multiple devices are available, this state runs test cases from multiple groups in parallel. If not enough devices are available, then test cases will run sequentially. Because test cases are run in a random order when they run in parallel, different devices might be used to run tests from the same test group. 

**Error handling**

Make sure that both the branch state machine and the parent state machine transition to the `Fail` state to handle execution errors. 

Because branch state machines do not transmit execution errors to the parent state machine, you cannot use a `Catch` block to handle execution errors in branch state machines. Instead, use the `hasExecutionErrors` value in the shared state machine context. For an example of how this works, see [Example state machine: Run two test groups in parallel](#run-in-parallel).
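For illustration, the following `Parallel` state sketch runs two hypothetical test groups in separate branches, then transitions to a state that can check `hasExecutionErrors`. The state and group names are placeholders:

```
{
    "Type": "Parallel",
    "Next": "CheckForErrors",
    "Branches": [
        {
            "Comment": "Run test group A",
            "StartAt": "RunGroupA",
            "States": {
                "RunGroupA": {
                    "Type": "RunTask",
                    "TestGroup": "GroupA",
                    "Next": "Succeed"
                },
                "Succeed": { "Type": "Succeed" },
                "Fail": { "Type": "Fail" }
            }
        },
        {
            "Comment": "Run test group B",
            "StartAt": "RunGroupB",
            "States": {
                "RunGroupB": {
                    "Type": "RunTask",
                    "TestGroup": "GroupB",
                    "Next": "Succeed"
                },
                "Succeed": { "Type": "Succeed" },
                "Fail": { "Type": "Fail" }
            }
        }
    ]
}
```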

### AddProductFeatures


The `AddProductFeatures` state lets you add product features to the `awsiotdevicetester_report.xml` file generated by IDT. 

A product feature is user-defined information about specific criteria that a device might meet. For example, the `MQTT` product feature can designate that the device publishes MQTT messages properly. In the report, product features are set as `supported`, `not-supported`, or a custom value, based on whether specified tests passed.



**Note**  
The `AddProductFeatures` state does not generate reports by itself. This state must transition to the [`Report` state](#state-report) to generate reports.

```
{
    "Type": "AddProductFeatures",
    "Next": "<state-name>",
    "Features": [
        {
            "Feature": "<feature-name>", 
            "FeatureValue": "<feature-value>",
            "Groups": [
                "<group-id>"
            ],
            "OneOfGroups": [
                "<group-id>"
            ],
            "TestCases": [
                "<test-id>"
            ],
            "IsRequired": true | false,
            "ExecutionMethods": [
                "<execution-method>"
            ]
        }
    ]
}
```

All fields that contain values are required as described here:

**`Next`**  
The name of the state to transition to after executing the actions in the current state.

**`Features`**  
An array of product features to show in the `awsiotdevicetester_report.xml` file.    
**`Feature`**  
The name of the feature.  
**`FeatureValue`**  
Optional. The custom value to use in the report instead of `supported`. If this value is not specified, then based on test results, the feature value is set to `supported` or `not-supported`.   
If you use a custom value for `FeatureValue`, you can test the same feature with different conditions, and IDT concatenates the feature values for the supported conditions. For example, the following excerpt shows the `MyFeature` feature with two separate feature values:  

```
...
{
    "Feature": "MyFeature",
    "FeatureValue": "first-feature-supported",
    "Groups": ["first-feature-group"]
},
{
    "Feature": "MyFeature",
    "FeatureValue": "second-feature-supported",
    "Groups": ["second-feature-group"]
},
...
```
If both test groups pass, then the feature value is set to `first-feature-supported, second-feature-supported`.   
**`Groups`**  
Optional. An array of test group IDs. All tests within each specified test group must pass for the feature to be supported.  
**`OneOfGroups`**  
Optional. An array of test group IDs. All tests within at least one of the specified test groups must pass for the feature to be supported.   
**`TestCases`**  
Optional. An array of test case IDs. If you specify this value, then the following apply:  
+ All of the specified test cases must pass for the feature to be supported.
+ `Groups` must contain only one test group ID.
+ `OneOfGroups` must not be specified.  
**`IsRequired`**  
Optional. Set to `false` to mark this feature as an optional feature in the report. The default value is `true`.  
**`ExecutionMethods`**  
Optional. An array of execution methods that match the `protocol` value specified in the `device.json` file. If this value is specified, then test runners must specify a `protocol` value that matches one of the values in this array to include the feature in the report. If this value is not specified, then the feature will always be included in the report.

To use the `AddProductFeatures` state, you must set the value of `ResultVar` in the `RunTask` state to one of the following values:
+ If you specified individual test case IDs, then set `ResultVar` to `group-id_test-id_passed`.
+ If you did not specify individual test case IDs, then set `ResultVar` to `group-id_passed`.

The `AddProductFeatures` state checks for test results in the following manner: 
+ If you did not specify any test case IDs, then the result for each test group is determined from the value of the `group-id_passed` variable in the state machine context.
+ If you did specify test case IDs, then the result for each of the tests is determined from the value of the `group-id_test-id_passed` variable in the state machine context.
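The following sketch shows that pairing: a `RunTask` state sets `mqttgroup_passed`, and the `AddProductFeatures` state that follows reads it to decide whether the `MQTT` feature is supported. The group, feature, and state names are placeholders:

```
"RunMqttTests": {
    "Type": "RunTask",
    "Next": "AddMqttFeature",
    "TestGroup": "mqttgroup",
    "ResultVar": "mqttgroup_passed"
},
"AddMqttFeature": {
    "Type": "AddProductFeatures",
    "Next": "GenerateReport",
    "Features": [
        {
            "Feature": "MQTT",
            "Groups": ["mqttgroup"],
            "IsRequired": true
        }
    ]
}
```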

**Error handling**

If a group ID provided in this state is not a valid group ID, then this state results in the `AddProductFeaturesError` execution error. If the state encounters an execution error, then it also sets the `hasExecutionErrors` variable in the state machine context to `true`.

### Report


The `Report` state generates the `suite-name_Report.xml` and `awsiotdevicetester_report.xml` files. This state also streams the report to the console.

```
{
    "Type": "Report",
    "Next": "<state-name>"
}
```

All fields that contain values are required as described here:

**`Next`**  
The name of the state to transition to after executing the actions in the current state.

You should always transition to the `Report` state towards the end of the test execution flow so that test runners can view test results. Typically, the next state after this state is `Succeed`. 

**Error handling**

If this state encounters issues with generating the reports, then it issues the `ReportError` execution error. 

### LogMessage


The `LogMessage` state generates the `test_manager.log` file and streams the log message to the console.

```
{
    "Type": "LogMessage",
    "Next": "<state-name>",
    "Level": "info | warn | error",
    "Message": "<message>"
}
```

All fields that contain values are required as described here:

**`Next`**  
The name of the state to transition to after executing the actions in the current state.

**`Level`**  
The error level at which to create the log message. If you specify a level that is not valid, this state generates an error message and discards it. 

**`Message`**  
The message to log.

### SelectGroup


The `SelectGroup` state updates the state machine context to indicate which groups are selected. The values set by this state are used by any subsequent `Choice` states.

```
{
    "Type": "SelectGroup",
    "Next": "<state-name>",
    "TestGroups": [
        "<group-id>"
    ]
    ]
}
```

All fields that contain values are required as described here:

**`Next`**  
The name of the state to transition to after executing the actions in the current state.

**`TestGroups`**  
An array of test groups that will be marked as selected. For each test group ID in this array, the `group-id_selected` variable is set to `true` in the context. Make sure that you provide valid test group IDs because IDT does not validate whether the specified groups exist.
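For example, the following sketch marks a hypothetical `mqttgroup` as selected so that a later `Choice` state can evaluate `{{$.mqttgroup_selected}} == true`; the state names are placeholders:

```
"SelectMqttGroup": {
    "Type": "SelectGroup",
    "Next": "RunSelectedGroups",
    "TestGroups": [
        "mqttgroup"
    ]
}
```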

### Fail


The `Fail` state indicates that the state machine did not execute correctly. This is an end state for the state machine, and each state machine definition must include this state.

```
{
    "Type": "Fail"
}
```

### Succeed


The `Succeed` state indicates that the state machine executed correctly. This is an end state for the state machine, and each state machine definition must include this state.

```
{
    "Type": "Succeed"
}
```

## State machine context


The state machine context is a read-only JSON document that contains data that is available to the state machine during execution. The state machine context is accessible only from the state machine, and contains information that determines the test flow. For example, you can use information configured by test runners in the `userdata.json` file to determine whether a specific test is required to run.

The state machine context uses the following format:

```
{
    "pool": {
        <device-json-pool-element>
    },
    "userData": {
        <userdata-json-content>
    },
    "config": {
        <config-json-content>
    },
    "suiteFailed": true | false,
    "specificTestGroups": [
        "<group-id>"
    ],
    "specificTestCases": [
        "<test-id>"
    ],
    "hasExecutionErrors": true
}
```

**`pool`**  
Information about the device pool selected for the test run. For a selected device pool, this information is retrieved from the corresponding top-level device pool array element defined in the `device.json` file.

**`userData`**  
Information in the `userdata.json` file.

**`config`**  
Information in the `config.json` file.

**`suiteFailed`**  
The value is set to `false` when the state machine starts. If a test group fails in a `RunTask` state, then this value is set to `true` for the remaining duration of the state machine execution.

**`specificTestGroups`**  
If the test runner selects specific test groups to run instead of the entire test suite, this key is created and contains the list of specific test group IDs.

**`specificTestCases`**  
If the test runner selects specific test cases to run instead of the entire test suite, this key is created and contains the list of specific test case IDs.

**`hasExecutionErrors`**  
Does not exist when the state machine starts. If any state encounters an execution error, this variable is created and set to `true` for the remaining duration of the state machine execution.

You can query the context using JSONPath notation. The syntax for JSONPath queries in state definitions is `{{$.query}}`. You can use JSONPath queries as placeholder strings within some states. IDT replaces the placeholder strings with the value of the evaluated JSONPath query from the context. You can use placeholders for the following values:
+ The `TestCases` value in `RunTask` states. 
+ The `Expression` value in `Choice` states.

When you access data from the state machine context, make sure the following conditions are met: 
+ Your JSON paths must begin with `$.`
+ Each value must evaluate to a string, a number, or a boolean.

For more information about using JSONPath notation to access data from the context, see [Use the IDT context](idt-context.md).
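As an illustration only (not IDT's implementation), the following Python sketch shows how a `{{$.query}}` placeholder string can be resolved against a context document. It handles only simple dotted paths rather than full JSONPath, and enforces the rule that each query must evaluate to a string, number, or boolean:

```python
import json
import re

def resolve_placeholders(template, context):
    """Replace {{$.query}} placeholders with values from the context document."""
    def lookup(match):
        value = context
        for part in match.group(1).split("."):
            value = value[part]
        # Each value must evaluate to a string, a number, or a boolean.
        if not isinstance(value, (str, int, float, bool)):
            raise TypeError("query must evaluate to a scalar value")
        # Render booleans in JSON form (true/false) rather than Python form.
        return json.dumps(value) if isinstance(value, bool) else str(value)
    return re.sub(r"\{\{\$\.([\w.]+)\}\}", lookup, template)

context = {"config": {"awsRegion": "us-west-2"}, "suiteFailed": False}
print(resolve_placeholders("Region is {{$.config.awsRegion}}", context))  # Region is us-west-2
```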

## Execution errors


Execution errors are errors in the state machine definition that the state machine encounters when executing a state. IDT logs information about each error in the `test_manager.log` file and streams the log message to the console.

You can use the following methods to handle execution errors:
+ Add a [`Catch` block](#catch) in the state definition.
+ Check the value of the [`hasExecutionErrors` value](#context) in the state machine context.

### Catch


To use `Catch`, add the following to your state definition:

```
"Catch": [
    {    
        "ErrorEquals": [
            "<error-type>"
        ],
        "Next": "<state-name>" 
    }
]
```

All fields that contain values are required as described here:

**`Catch.ErrorEquals`**  
An array of the error types to catch. If an execution error matches one of the specified values, then the state machine transitions to the state specified in `Catch.Next`. See each state definition for information about the type of error it produces.

**`Catch.Next`**  
The next state to transition to if the current state encounters an execution error that matches one of the values specified in `Catch.ErrorEquals` .

Catch blocks are handled sequentially until one matches. If no errors match the ones listed in the Catch blocks, then the state machine continues to execute. Because execution errors are a result of incorrect state definitions, we recommend that you transition to the Fail state when a state encounters an execution error.

### hasExecutionErrors


When some states encounter execution errors, in addition to issuing the error, they also set the `hasExecutionErrors` value to `true` in the state machine context. You can use this value to detect when an error occurs, and then use a `Choice` state to transition the state machine to the `Fail` state.

This method has the following characteristics.
+ The state machine does not start with any value assigned to `hasExecutionErrors`, and this value is not available until a particular state sets it. This means that you must explicitly set `FallthroughOnError` to `true` for the `Choice` states that access this value to prevent the state machine from stopping if no execution errors occur. 
+ Once it is set to `true`, `hasExecutionErrors` is never set to `false` or removed from the context. This means that this value is useful only the first time that it is set to `true`; for all subsequent states, it does not provide a meaningful value.
+ The `hasExecutionErrors` value is shared with all branch state machines in the `Parallel` state, which can result in unexpected results depending on the order in which it is accessed.

Because of these characteristics, we do not recommend that you use this method if you can use a Catch block instead. 

## Example state machines


This section provides some example state machine configurations.

**Topics**
+ [Example state machine: Run a single test group](#single-test-group)
+ [Example state machine: Run user-selected test groups](#allow-specific-groups)
+ [Example state machine: Run a single test group with product features](#run-with-product-features)
+ [Example state machine: Run two test groups in parallel](#run-in-parallel)

### Example state machine: Run a single test group


This state machine:
+ Runs the test group with ID `GroupA`, which must be present in the suite in a `group.json` file.
+ Checks for execution errors and transitions to `Fail` if any are found.
+ Generates a report and transitions to `Succeed` if there are no errors, and `Fail` otherwise.

```
{
    "Comment": "Runs a single group and then generates a report.",
    "StartAt": "RunGroupA",
    "States": {
        "RunGroupA": {
            "Type": "RunTask",
            "Next": "Report",
            "TestGroup": "GroupA",
            "Catch": [
                {
                    "ErrorEquals": [
                        "RunTaskError"
                    ],
                    "Next": "Fail"
                }
            ]
        },
        "Report": {
            "Type": "Report",
            "Next": "Succeed",
            "Catch": [
                {
                    "ErrorEquals": [
                        "ReportError"
                    ],
                    "Next": "Fail"
                }
            ]
        },
        "Succeed": {
            "Type": "Succeed"
        },
        "Fail": {
            "Type": "Fail"
        }
    }
}
```

### Example state machine: Run user-selected test groups


This state machine:
+ Checks if the test runner selected specific test groups. The state machine does not check for specific test cases because test runners cannot select test cases without also selecting a test group.
+ If test groups are selected: 
  + Runs the test cases within the selected test groups. To do so, the state machine does not explicitly specify any test groups or test cases in the `RunTask` state.
  + Generates a report after running all tests and exits.
+ If test groups are not selected:
  + Runs tests in test group `GroupA`.
  + Generates reports and exits.

```
{
    "Comment": "Runs specific groups if the test runner chose to do that, otherwise runs GroupA.",
    "StartAt": "SpecificGroupsCheck",
    "States": {
        "SpecificGroupsCheck": {
            "Type": "Choice",
            "Default": "RunGroupA",
            "FallthroughOnError": true,
            "Choices": [
                {
                    "Expression": "{{$.specificTestGroups[0]}} != ''",
                    "Next": "RunSpecificGroups"
                }
            ]
        },
        "RunSpecificGroups": {
            "Type": "RunTask",
            "Next": "Report",
            "Catch": [
                {
                    "ErrorEquals": [
                        "RunTaskError"
                    ],
                    "Next": "Fail"
                }
            ]
        },
        "RunGroupA": {
            "Type": "RunTask",
            "Next": "Report",
            "TestGroup": "GroupA",
            "Catch": [
                {
                    "ErrorEquals": [
                        "RunTaskError"
                    ],
                    "Next": "Fail"
                }
            ]
        },
        "Report": {
            "Type": "Report",
            "Next": "Succeed",
            "Catch": [
                {
                    "ErrorEquals": [
                        "ReportError"
                    ],
                    "Next": "Fail"
                }
            ]
        },
        "Succeed": {
            "Type": "Succeed"
        },
        "Fail": {
            "Type": "Fail"
        }
    }
}
```

### Example state machine: Run a single test group with product features


This state machine:
+ Runs the test group `GroupA`.
+ Checks for execution errors and transitions to `Fail` if any are found.
+ Adds the `FeatureThatDependsOnGroupA` feature to the `awsiotdevicetester_report.xml` file:
  + If `GroupA` passes, the feature is set to `supported`.
  + The feature is not marked optional in the report.
+ Generates a report and transitions to `Succeed` if there are no errors, and `Fail` otherwise.

```
{
    "Comment": "Runs GroupA and adds product features based on GroupA",
    "StartAt": "RunGroupA",
    "States": {
        "RunGroupA": {
            "Type": "RunTask",
            "Next": "AddProductFeatures",
            "TestGroup": "GroupA",
            "ResultVar": "GroupA_passed",
            "Catch": [
                {
                    "ErrorEquals": [
                        "RunTaskError"
                    ],
                    "Next": "Fail"
                }
            ]
        },
        "AddProductFeatures": {
            "Type": "AddProductFeatures",
            "Next": "Report",
            "Features": [
                {
                    "Feature": "FeatureThatDependsOnGroupA",
                    "Groups": [
                        "GroupA"
                    ],
                    "IsRequired": true
                }
            ]
        },
        "Report": {
            "Type": "Report",
            "Next": "Succeed",
            "Catch": [
                {
                    "ErrorEquals": [
                        "ReportError"
                    ],
                    "Next": "Fail"
                }
            ]
        },
        "Succeed": {
            "Type": "Succeed"
        },
        "Fail": {
            "Type": "Fail"
        }
    }
}
```

### Example state machine: Run two test groups in parallel


This state machine:
+ Runs the `GroupA` and `GroupB` test groups in parallel. The `ResultVar` variables stored in the context by the `RunTask` states in the branch state machines are available to the `AddProductFeatures` state.
+ Checks for execution errors and transitions to `Fail` if any are found. This state machine does not use a `Catch` block because that method does not detect execution errors in branch state machines.
+ Adds features to the `awsiotdevicetester_report.xml` file based on the groups that pass:
  + If a test group passes, its corresponding feature is set to `supported`.
  + The features are not marked optional in the report.
+ Generates a report and transitions to `Succeed` if there are no errors, and `Fail` otherwise.

If two devices are configured in the device pool, both `GroupA` and `GroupB` can run at the same time. However, if either `GroupA` or `GroupB` has multiple tests in it, then both devices may be allocated to those tests. If only one device is configured, the test groups will run sequentially.

```
{
    "Comment": "Runs GroupA and GroupB in parallel",
    "StartAt": "RunGroupAAndB",
    "States": {
        "RunGroupAAndB": {
            "Type": "Parallel",
            "Next": "CheckForErrors",
            "Branches": [
                {
                    "Comment": "Run GroupA state machine",
                    "StartAt": "RunGroupA",
                    "States": {
                        "RunGroupA": {
                            "Type": "RunTask",
                            "Next": "Succeed",
                            "TestGroup": "GroupA",
                            "ResultVar": "GroupA_passed",
                            "Catch": [
                                {
                                    "ErrorEquals": [
                                        "RunTaskError"
                                    ],
                                    "Next": "Fail"
                                }
                            ]
                        },
                        "Succeed": {
                            "Type": "Succeed"
                        },
                        "Fail": {
                            "Type": "Fail"
                        }
                    }
                },
                {
                    "Comment": "Run GroupB state machine",
                    "StartAt": "RunGroupB",
                    "States": {
                        "RunGroupA": {
                            "Type": "RunTask",
                            "Next": "Succeed",
                            "TestGroup": "GroupB",
                            "ResultVar": "GroupB_passed",
                            "Catch": [
                                {
                                    "ErrorEquals": [
                                        "RunTaskError"
                                    ],
                                    "Next": "Fail"
                                }
                            ]
                        },
                        "Succeed": {
                            "Type": "Succeed"
                        },
                        "Fail": {
                            "Type": "Fail"
                        }
                    }
                }
            ]
        },
        "CheckForErrors": {
            "Type": "Choice",
            "Default": "AddProductFeatures",
            "FallthroughOnError": true,
            "Choices": [
                {
                    "Expression": "{{$.hasExecutionErrors}} == true",
                    "Next": "Fail"
                }
            ]
        },
        "AddProductFeatures": {
            "Type": "AddProductFeatures",
            "Next": "Report",
            "Features": [
                {
                    "Feature": "FeatureThatDependsOnGroupA",
                    "Groups": [
                        "GroupA"
                    ],
                    "IsRequired": true
                },
                {
                    "Feature": "FeatureThatDependsOnGroupB",
                    "Groups": [
                        "GroupB"
                    ],
                    "IsRequired": true
                }
            ]
        },
        "Report": {
            "Type": "Report",
            "Next": "Succeed",
            "Catch": [
                {
                    "ErrorEquals": [
                        "ReportError"
                    ],
                    "Next": "Fail"
                }
            ]
        },
        "Succeed": {
            "Type": "Succeed"
        },
        "Fail": {
            "Type": "Fail"
        }
    }
}
```

# Create IDT test case executable


You can create and place test case executables in a test suite folder in the following ways:
+ For test suites that use arguments or environment variables from the `test.json` files to determine which tests to run, you can create a single test case executable for the entire test suite, or a test executable for each test group in the test suite.
+ For a test suite where you want to run specific tests based on specified commands, you create one test case executable for each test case in the test suite.

As a test writer, you can determine which approach is appropriate for your use case and structure your test case executable accordingly. Make sure that you provide the correct test case executable path in each `test.json` file, and that the specified executable runs correctly. 

When all devices are ready for a test case to run, IDT reads the following files:
+ The `test.json` for the selected test case determines the processes to start and the environment variables to set.
+ The `suite.json` for the test suite determines the environment variables to set. 

IDT starts the required test executable process based on the commands and arguments specified in the `test.json` file, and passes the required environment variables to the process. 

## Use the IDT Client SDK


The IDT Client SDKs simplify how you write test logic in your test executable, with API commands that you can use to interact with IDT and your devices under test. IDT currently provides the following SDKs: 
+ IDT Client SDK for Python
+ IDT Client SDK for Go
+ IDT Client SDK for Java

These SDKs are located in the `<device-tester-extract-location>/sdks` folder. When you create a new test case executable, you must copy the SDK that you want to use to the folder that contains your test case executable and reference the SDK in your code. This section provides a brief description of the available API commands that you can use in your test case executables. 

**Topics**
+ [Device interaction](#api-device-interaction)
+ [IDT interaction](#api-idt-interaction)
+ [Host interaction](#api-host-interaction)

### Device interaction


The following commands enable you to communicate with the device under test without having to implement any additional device interaction and connectivity management functions.

**`ExecuteOnDevice`**  
Allows test suites to run shell commands on a device that supports SSH or Docker shell connections.

**`CopyToDevice`**  
Allows test suites to copy a local file from the host machine that runs IDT to a specified location on a device that supports SSH or Docker shell connections.

**`ReadFromDevice`**  
Allows test suites to read from the serial port of devices that support UART connections.

**Note**  
Because IDT does not manage direct connections to devices that are made using device access information from the context, we recommend using these device interaction API commands in your test case executables. However, if these commands do not meet your test case requirements, then you can retrieve device access information from the IDT context and use it to make a direct connection to the device from the test suite.   
To make a direct connection, retrieve the information in the `device.connectivity` and the `resource.devices.connectivity` fields for your device under test and for resource devices, respectively. For more information about using the IDT context, see [Use the IDT context](idt-context.md). 

### IDT interaction


The following commands enable your test suites to communicate with IDT.

**`PollForNotifications`**  
Allows test suites to check for notifications from IDT.

**`GetContextValue` and `GetContextString`**  
Allows test suites to retrieve values from the IDT context. For more information, see [Use the IDT context](idt-context.md).

**`SendResult`**  
Allows test suites to report test case results to IDT. This command must be called at the end of each test case in a test suite.

### Host interaction


The following commands enable your test suites to communicate with the host machine.

**`PollForNotifications`**  
Allows test suites to check for notifications from IDT.

**`GetContextValue` and `GetContextString`**  
Allows test suites to retrieve values from the IDT context. For more information, see [Use the IDT context](idt-context.md).

**`ExecuteOnHost`**  
Allows test suites to run commands on the local machine and lets IDT manage the test case executable lifecycle.

## Enable IDT CLI commands


The IDT CLI `run-suite` command provides several options that let test runners customize test execution. To allow test runners to use these options to run your custom test suite, you must implement support for the IDT CLI. If you do not implement support, test runners will still be able to run tests, but some CLI options will not function correctly. To provide an ideal customer experience, we recommend that you implement support for the following arguments for the `run-suite` command in the IDT CLI:

**`timeout-multiplier`**  
Specifies a value greater than 1.0 that will be applied to all timeouts while running tests.   
Test runners can use this argument to increase the timeout for the test cases that they want to run. When a test runner specifies this argument in their `run-suite` command, IDT uses it to calculate the value of the `IDT_TEST_TIMEOUT` environment variable and sets the `config.timeoutMultiplier` field in the IDT context. To support this argument, you must do the following:  
+ Instead of directly using the timeout value from the `test.json` file, read the `IDT_TEST_TIMEOUT` environment variable to obtain the correctly calculated timeout value.
+ Retrieve the `config.timeoutMultiplier` value from the IDT context and apply it to long running timeouts.
For more information about exiting early because of timeout events, see [Specify exit behavior](#test-exec-exiting).
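For example, a Python test executable might read the calculated timeout as follows. This is a minimal sketch; the environment variable value is simulated here because IDT sets `IDT_TEST_TIMEOUT` only during a real run:

```python
import os
import threading

def start_timeout_timer(on_timeout, default_seconds=60.0):
    # Read the timeout that IDT calculated (multiplier already applied),
    # instead of using the raw value from test.json.
    timeout = float(os.environ.get("IDT_TEST_TIMEOUT", default_seconds))
    timer = threading.Timer(timeout, on_timeout)
    timer.daemon = True
    timer.start()
    return timeout, timer

os.environ["IDT_TEST_TIMEOUT"] = "90"  # simulated for this sketch
seconds, timer = start_timeout_timer(lambda: None)
timer.cancel()
print(seconds)  # 90.0
```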

**`stop-on-first-failure`**  
Specifies that IDT should stop running all tests if it encounters a failure.   
When a test runner specifies this argument in their `run-suite` command, IDT will stop running tests as soon as it encounters a failure. However, if test cases are running in parallel, then this can lead to unexpected results. To implement support, make sure that if IDT encounters this event, your test logic instructs all running test cases to stop, clean up temporary resources, and report a test result to IDT. For more information about exiting early on failures, see [Specify exit behavior](#test-exec-exiting).

**`group-id` and `test-id`**  
Specifies that IDT should run only the selected test groups or test cases.   
Test runners can use these arguments with their `run-suite` command to specify the following test execution behavior:   
+ Run all tests inside the specified test groups.
+ Run a selection of tests from within a specified test group.
To support these arguments, the state machine for your test suite must include a specific set of `RunTask` and `Choice` states. If you are not using a custom state machine, the default IDT state machine includes the required states and you do not need to take additional action. However, if you are using a custom state machine, use [Example state machine: Run user-selected test groups](idt-state-machine.md#allow-specific-groups) as a sample to add the required states.

For more information about IDT CLI commands, see [Debug and run custom test suites](run-tests-custom.md).

## Write event logs


While the test is running, you can send data to `stdout` and `stderr` to write event logs and error messages to the console. For information about the format of console messages, see [Console message format](idt-review-results-logs.md#idt-console-format).

When IDT finishes running the test suite, this information is also available in the `test_manager.log` file located in the `<device-tester-extract-location>/results/<execution-id>/logs` folder.

You can configure each test case to write the logs from its test run, including logs from the device under test, to the `<group-id>_<test-id>` file located in the `<device-tester-extract-location>/results/<execution-id>/logs` folder. To do this, retrieve the path to the log file from the IDT context with the `testData.logFilePath` query, create a file at that path, and write the content that you want to it. IDT automatically updates the path based on the test case that is running. If you choose not to create the log file for a test case, then no file is generated for that test case.
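A minimal Python sketch of this flow follows. The path used here is a stand-in; in a real test case you would retrieve it with a `testData.logFilePath` context query (for example, through `GetContextString`):

```python
import os

def write_event_log(log_file_path, lines):
    # Create the log file at the path IDT provides and append event lines.
    os.makedirs(os.path.dirname(log_file_path), exist_ok=True)
    with open(log_file_path, "a", encoding="utf-8") as log_file:
        for line in lines:
            log_file.write(line + "\n")

# Stand-in path for illustration; the real value comes from the IDT context.
write_event_log("/tmp/idt-demo-logs/GroupA_TestA1", ["device booted", "assertions passed"])
```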

You can also set up your test executable to create additional log files as needed in the `<device-tester-extract-location>/logs` folder. We recommend that you specify unique prefixes for log file names so that your files aren't overwritten.

## Report results to IDT


IDT writes test results to the `awsiotdevicetester_report.xml` and the `suite-name_report.xml` files. These report files are located in `<device-tester-extract-location>/results/<execution-id>/`. Both reports capture the results from the test suite execution. For more information about the schemas that IDT uses for these reports, see [Review IDT test results and logs](idt-review-results-logs.md).

To populate the contents of the `suite-name_report.xml` file, you must use the `SendResult` command to report test results to IDT before the test execution finishes. If IDT cannot locate the results of a test, it issues an error for the test case. The following Python excerpt shows the commands to send a test result to IDT:

```
request_variable = SendResultRequest(TestResult(result))
client.send_result(request_variable)
```

If you do not report results through the API, IDT looks for test results in the test artifacts folder. The path to this folder is stored in the `testData.testArtifactsPath` field in the IDT context. In this folder, IDT uses the first alphabetically sorted XML file it locates as the test result. 

If your test logic produces JUnit XML results, you can write the test results to an XML file in the artifacts folder to directly provide the results to IDT instead of parsing the results and then using the API to submit them to IDT. 

If you use this method, make sure that your test logic accurately summarizes the test results and format your result file in the same format as the `suite-name_report.xml` file. IDT does not perform any validation of the data that you provide, with the following exceptions:
+ IDT ignores all properties of the `testsuites` tag. Instead, it calculates the tag properties from other reported test group results.
+ At least one `testsuite` tag must exist within `testsuites`.

Because IDT uses the same artifacts folder for all test cases and does not delete result files between test runs, this method might also lead to erroneous reporting if IDT reads the incorrect file. We recommend that you use the same name for the generated XML results file across all test cases to overwrite the results for each test case and make sure that the correct results are available for IDT to use. Although you can use a mixed approach to reporting in your test suite, that is, use an XML result file for some test cases and submit results through the API for others, we do not recommend this approach.
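As an illustrative sketch, the following Python code writes a minimal result file that satisfies the two constraints above (at least one `testsuite` inside `testsuites`, with `testsuites` properties left for IDT to calculate). The output path is a stand-in; the real artifacts folder path comes from the `testData.testArtifactsPath` context value:

```python
import xml.etree.ElementTree as ET

def write_result_file(path, group_id, test_id, passed):
    # IDT ignores properties on <testsuites>, but at least one <testsuite>
    # element must exist inside it.
    testsuites = ET.Element("testsuites")
    testsuite = ET.SubElement(testsuites, "testsuite", name=group_id, tests="1")
    testcase = ET.SubElement(testsuite, "testcase", name=test_id)
    if not passed:
        ET.SubElement(testcase, "failure", message="test case failed")
    ET.ElementTree(testsuites).write(path)

# Stand-in path for illustration; use the artifacts folder path from the context.
write_result_file("/tmp/idt_demo_result.xml", "GroupA", "TestA1", True)
```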

## Specify exit behavior


Configure your test executable to always exit with an exit code of 0, even if a test case reports a failure or an error result. Use non-zero exit codes only to indicate that a test case did not run or that the test case executable could not communicate any results to IDT. When IDT receives a non-zero exit code, it marks the test case as having encountered an error that prevented it from running.

IDT might request or expect a test case to stop running before it has finished in the following situations. Use this information to configure your test case executable to detect each of these events:

****Timeout****  
Occurs when a test case runs for longer than the timeout value specified in the `test.json` file. If the test runner used the `timeout-multiplier` argument to specify a timeout multiplier, then IDT calculates the timeout value with the multiplier.   
To detect this event, use the `IDT_TEST_TIMEOUT` environment variable. When a test runner launches a test, IDT sets the value of the `IDT_TEST_TIMEOUT` environment variable to the calculated timeout value (in seconds) and passes the variable to the test case executable. You can read the variable value to set an appropriate timer.

****Interrupt****  
Occurs when the test runner interrupts IDT, for example by pressing Ctrl+C.  
Because terminals propagate signals to all child processes, you can simply configure a signal handler in your test cases to detect interrupt signals.   
Alternatively, you can periodically poll the API to check the value of the `CancellationRequested` boolean in the `PollForNotifications` API response. When IDT receives an interrupt signal, it sets the value of the `CancellationRequested` boolean to `true`.
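Because the interrupt signal is delivered to the test executable itself, a handler like the following Python sketch is enough to record the event. The handler is invoked directly here for illustration; in a real run, the test runner's Ctrl+C delivers `SIGINT` to the process:

```python
import signal

cancellation_requested = False

def handle_interrupt(signum, frame):
    # Record that the test runner interrupted IDT (for example, with Ctrl+C).
    global cancellation_requested
    cancellation_requested = True

signal.signal(signal.SIGINT, handle_interrupt)

# Invoked directly for illustration; a real Ctrl+C delivers SIGINT to the process.
handle_interrupt(signal.SIGINT, None)
print(cancellation_requested)  # True
```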

****Stop on first failure****  
Occurs when a test case that is running in parallel with the current test case fails and the test runner used the `stop-on-first-failure` argument to specify that IDT should stop when it encounters any failure.  
To detect this event, you can periodically poll the API to check the value of the `CancellationRequested` boolean in the `PollForNotifications` API response. When IDT encounters a failure and is configured to stop on first failure, it sets the value of the `CancellationRequested` boolean to `true`.

When any of these events occur, IDT waits for 5 minutes for any currently running test cases to finish running. If all running test cases do not exit within 5 minutes, IDT forces each of their processes to stop. If IDT has not received test results before the processes end, it will mark the test cases as having timed out. As a best practice, you should ensure that your test cases perform the following actions when they encounter one of the events:

1. Stop running normal test logic.

1. Clean up any temporary resources, such as test artifacts on the device under test.

1. Report a test result to IDT, such as a test failure or an error. 

1. Exit.
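The steps above can be sketched as a polling loop. `StubClient` and its `poll_for_notifications` method are hypothetical stand-ins for the real IDT Client SDK client and its `PollForNotifications` call; the response shape in a real test suite comes from the SDK you copied into your test suite folder:

```python
class StubClient:
    """Hypothetical stand-in for an IDT Client SDK client."""
    def __init__(self, cancel_after_polls):
        self._polls_left = cancel_after_polls

    def poll_for_notifications(self):
        # The real SDK call returns whether IDT has requested cancellation.
        self._polls_left -= 1
        return {"CancellationRequested": self._polls_left <= 0}

def run_test_steps(client, steps):
    completed = []
    for step in steps:
        if client.poll_for_notifications()["CancellationRequested"]:
            # Stop normal logic; clean up, report a result to IDT, then exit.
            return completed, "cancelled"
        completed.append(step)
    return completed, "finished"

done, status = run_test_steps(StubClient(cancel_after_polls=3), ["s1", "s2", "s3", "s4"])
print(done, status)  # ['s1', 's2'] cancelled
```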

# Use the IDT context


When IDT runs a test suite, the test suite can access a set of data that can be used to determine how each test runs. This data is called the IDT context. For example, user data configuration provided by test runners in a `userdata.json` file is made available to test suites in the IDT context. 

The IDT context can be considered a read-only JSON document. Test suites can retrieve data from the context using standard JSON data types like objects, arrays, numbers, and so on.

## Context schema


The IDT context uses the following format:

```
{
    "config": {
        <config-json-content>
        "timeoutMultiplier": timeout-multiplier,
        "idtRootPath": <path/to/IDT/root>
    },
    "device": {
        <device-json-device-element>
    },
    "devicePool": {
        <device-json-pool-element>
    },
    "resource": {
        "devices": [
            {
                <resource-json-device-element>
                "name": "<resource-name>"
            }
        ]
    },
    "testData": {
        "awsCredentials": {
            "awsAccessKeyId": "<access-key-id>",
            "awsSecretAccessKey": "<secret-access-key>",
            "awsSessionToken": "<session-token>"
        },
        "logFilePath": "/path/to/log/file"
    },
    "userData": {
        <userdata-json-content>
    }
}
```

**`config`**  
Information from the [`config.json` file](set-config-custom.md#config-json-custom). The `config` field also contains the following additional fields:    
**`config.timeoutMultiplier`**  
The multiplier for any timeout value used by the test suite. This value is specified by the test runner from the IDT CLI. The default value is `1`.  
**`config.idtRootPath`**  
The absolute path to the IDT root directory. You can use this value as a placeholder while configuring the `userdata.json` file; it is used by the build and flash commands.

**`device`**  
Information about the device selected for the test run. This information is equivalent to the `devices` array element in the [`device.json` file](set-config-custom.md#device-config-custom) for the selected device.

**`devicePool`**  
Information about the device pool selected for the test run. This information is equivalent to the top-level device pool array element defined in the `device.json` file for the selected device pool.

**`resource`**  
Information about resource devices from the `resource.json` file.    
**`resource.devices`**  
This information is equivalent to the `devices` array defined in the `resource.json` file. Each `devices` element includes the following additional field:    
**`resource.device.name`**  
The name of the resource device. This value is set to the `requiredResource.name` value in the `test.json` file.

**`testData.awsCredentials`**  
The Amazon credentials used by the test to connect to the Amazon cloud. This information is obtained from the `config.json` file.

**`testData.logFilePath`**  
The path to the log file to which the test case writes log messages. The test suite creates this file if it doesn't exist. 

**`userData`**  
Information provided by the test runner in the [`userdata.json` file](set-config-custom.md#userdata-config-custom).

## Access data in the context


You can query the context using JSONPath notation from your configuration files and from your test executables with the `GetContextValue` and `GetContextString` APIs. The syntax for JSONPath strings to access the IDT context varies as follows:
+ In `suite.json` and `test.json`, you use `{{query}}`. That is, do not use the root element `$.` to start your expression.
+ In `statemachine.json`, you use `{{$.query}}`.
+ In API commands, you use `query` or `{{$.query}}`, depending on the command. For more information, see the inline documentation in the SDKs. 

The following table describes the operators in a typical JSONPath expression:


| Operator  | Description  | 
| --- | --- | 
| \$ | The root element. Because the top-level context value for IDT is an object, you will typically use \$. to start your queries. | 
| .childName | Accesses the child element with name childName from an object. If applied to an array, yields a new array with this operator applied to each element. The element name is case sensitive. For example, the query to access the awsRegion value in the config object is \$.config.awsRegion. | 
| [start:end] | Filters elements from an array, retrieving items beginning from the start index and going up to the end index, both inclusive. | 
| [index1, index2, ... , indexN] | Filters elements from an array, retrieving items from only the specified indices. | 
| [?(expr)] | Filters elements from an array using the expr expression. This expression must evaluate to a boolean value. | 
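As an illustration, the operators in the table can be sketched with plain Python dictionary access over a context-shaped document. This is not the IDT implementation, and all of the values below are hypothetical:

```python
# A document shaped like the IDT context, with hypothetical values.
context = {
    "config": {"awsRegion": "cn-north-1"},
    "resource": {
        "devices": [
            {"name": "bt-peer", "state": "online"},
            {"name": "wifi-peer", "state": "offline"},
        ]
    },
}

# $.config.awsRegion -> child-element access on objects
region = context["config"]["awsRegion"]

# $.resource.devices[0:1] -> array slice. JSONPath [start:end] is inclusive
# of both indices, so Python's exclusive [0:2] slice mimics JSONPath [0:1].
first_two = context["resource"]["devices"][0:2]

# $.resource.devices[?(expr)] -> filter expression that keeps matching elements
matches = [d for d in context["resource"]["devices"] if d["name"] == "bt-peer"]

print(region)              # cn-north-1
print(len(first_two))      # 2
print(matches[0]["name"])  # bt-peer
```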

To create filter expressions, use the following syntax:

```
<jsonpath> | <value> operator <jsonpath> | <value> 
```

In this syntax: 
+ `jsonpath` is a JSONPath that uses standard JSON syntax. 
+ `value` is any custom value that uses standard JSON syntax.
+ `operator` is one of the following operators:
  + `<` (Less than)
  + `<=` (Less than or equal to)
  + `==` (Equal to)

    If the JSONPath or value in your expression is an array, boolean, or object value, then this is the only supported binary operator that you can use.
  + `>=` (Greater than or equal to)
  + `>` (Greater than)
  + `=~` (Regular expression match). To use this operator in a filter expression, the JSONPath or value on the left side of your expression must evaluate to a string and the right side must be a pattern value that follows the [RE2 syntax](https://github.com/google/re2/wiki/Syntax).
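The comparison and regular-expression operators used in filter expressions can be illustrated with a short Python sketch. The device list and patterns are hypothetical, and Python's `re` module stands in for RE2 here, which is close enough for simple patterns:

```python
import re

devices = [
    {"name": "sensor-01", "state": "online"},
    {"name": "sensor-02", "state": "offline"},
    {"name": "gateway-01", "state": "online"},
]

# Equivalent of a filter expression like [?(name =~ 'sensor-.*')]
sensors = [d for d in devices if re.fullmatch(r"sensor-.*", d["name"])]

# Equivalent of a filter expression like [?(state == 'online')]
online = [d for d in devices if d["state"] == "online"]

print([d["name"] for d in sensors])  # ['sensor-01', 'sensor-02']
print(len(online))                   # 2
```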

You can use JSONPath queries in the form `{{query}}` as placeholder strings within the `args` and `environmentVariables` fields in `test.json` files and within the `environmentVariables` fields in `suite.json` files. IDT performs a context lookup and populates the fields with the evaluated value of the query. For example, in the `suite.json` file, you can use placeholder strings to specify environment variable values that change with each test case, and IDT will populate the environment variables with the correct value for each test case. However, when you use placeholder strings in `test.json` and `suite.json` files, the following considerations apply to your queries:
+ You must write each occurrence of the `devicePool` key in your query in all lower case. That is, use `devicepool` instead.
+ For arrays, you can use only arrays of strings. In addition, arrays use a non-standard `item1, item2,...,itemN` format. If the array contains only one element, then it is serialized as `item`, making it indistinguishable from a string field. 
+ You cannot use placeholders to retrieve objects from the context.

Because of these considerations, we recommend that whenever possible, you use the API to access the context in your test logic instead of placeholder strings in `test.json` and `suite.json` files. However, in some cases it might be more convenient to use JSONPath placeholders to retrieve single strings to set as environment variables. 
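For example, a hypothetical `test.json` fragment that uses placeholder strings to set environment variables might look like the following. The variable names are illustrative, and note the lower-case `devicepool` key:

```
"environmentVariables": [
    {
        "name": "AWS_REGION",
        "value": "{{config.awsRegion}}"
    },
    {
        "name": "POOL_ID",
        "value": "{{devicepool.id}}"
    }
]
```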

# Configure settings for test runners


To run custom test suites, test runners must configure their settings based on the test suite that they want to run. Settings are specified based on configuration file templates located in the `<device-tester-extract-location>/configs/` folder. If required, test runners must also set up Amazon credentials that IDT will use to connect to the Amazon cloud. 

As a test writer, you will need to configure these files to [debug your test suite](run-tests-custom.md). You must provide instructions to test runners so that they can configure the following settings as needed to run your test suites. 

## Configure device.json


The `device.json` file contains information about the devices that tests are run on (for example, IP address, login information, operating system, and CPU architecture). 

Test runners can provide this information using the following template `device.json` file located in the `<device-tester-extract-location>/configs/` folder.

```
[
    {
        "id": "<pool-id>",
        "sku": "<pool-sku>",
        "features": [
            {
                "name": "<feature-name>",             
                "value": "<feature-value>",                
                "configs": [
                    {
                        "name": "<config-name>",                    
                        "value": "<config-value>"
                    }
                ],
            }
        ],     
        "devices": [
            {
                "id": "<device-id>",    
                "pairedResource": "<device-id>", //used for no-op protocol
                "connectivity": {
                    "protocol": "ssh | uart | docker | no-op",                   
                    // ssh
                    "ip": "<ip-address>",
                    "port": <port-number>,
                    "publicKeyPath": "<public-key-path>",
                    "auth": {
                        "method": "pki | password",
                        "credentials": {
                            "user": "<user-name>", 
                            // pki
                            "privKeyPath": "/path/to/private/key",
                                         
                            // password
                            "password": "<password>",
                        }
                    },
                    
                    // uart
                    "serialPort": "<serial-port>",
                    
                    // docker
                    "containerId": "<container-id>",
                    "containerUser": "<container-user-name>",
                }
            }
        ]
    }
]
```

All fields that contain values are required as described here:

**`id`**  
A user-defined alphanumeric ID that uniquely identifies a collection of devices called a *device pool*. Devices that belong to a pool must have identical hardware. When you run a suite of tests, devices in the pool are used to parallelize the workload. Multiple devices are used to run different tests.

**`sku`**  
An alphanumeric value that uniquely identifies the device under test. The SKU is used to track qualified devices.  
If you want to list your board in the Amazon Partner Device Catalog, the SKU you specify here must match the SKU that you use in the listing process.

**`features`**  
Optional. An array that contains the device's supported features. Device features are user-defined values that you configure in your test suite. You must provide your test runners with information about the feature names and values to include in the `device.json` file. For example, if you want to test a device that functions as an MQTT server for other devices, then you can configure your test logic to validate specific supported levels for a feature named `MQTT_QoS`. Test runners provide this feature name and set the feature value to the QoS levels supported by their device. You can retrieve the provided information from the [IDT context](idt-context.md) with the `devicePool.features` query, or from the [state machine context](idt-state-machine.md#state-machine-context) with the `pool.features` query.    
**`features.name`**  
The name of the feature.  
**`features.value`**  
The supported feature values.  
**`features.configs`**  
Configuration settings, if needed, for the feature.    
**`features.config.name`**  
The name of the configuration setting.  
**`features.config.value`**  
The supported setting values.

**`devices`**  
An array of devices in the pool to be tested. At least one device is required.    
**`devices.id`**  
A user-defined unique identifier for the device being tested.  
**`devices.pairedResource`**  
A user-defined unique identifier for a resource device. This value is required when you test devices using the `no-op` connectivity protocol.  
**`connectivity.protocol`**  
The communication protocol used to communicate with this device. Each device in a pool must use the same protocol.  
Currently, the only supported values are `ssh` and `uart` for physical devices, `docker` for Docker containers, and `no-op` for devices that don't have a direct connection with the IDT host machine and require a resource device as physical middleware to communicate with the host machine.  
For no-op devices, you configure the resource device ID in `devices.pairedResource`. You must also specify this ID in the `resource.json` file. The paired device must be a device that is physically paired with the device under test. After IDT identifies and connects to the paired resource device, IDT will not connect to other resource devices according to the features described in the `test.json` file.  
**`connectivity.ip`**  
The IP address of the device being tested.  
This property applies only if `connectivity.protocol` is set to `ssh`.  
**`connectivity.port`**  
Optional. The port number to use for SSH connections.  
The default value is 22.  
This property applies only if `connectivity.protocol` is set to `ssh`.  
**`connectivity.publicKeyPath`**  
 Optional. The full path to the public key used to authenticate connections to the device under test. When you specify the `publicKeyPath`, IDT validates the device’s public key when it establishes an SSH connection to the device under test. If this value is not specified, IDT creates an SSH connection, but doesn’t validate the device’s public key.   
We strongly recommend that you specify the path to the public key, and that you use a secure method to fetch this public key. For standard command line-based SSH clients, the public key is provided in the `known_hosts` file. If you specify a separate public key file, this file must use the same format as the `known_hosts` file, that is, `ip-address key-type public-key`.   
**`connectivity.auth`**  
Authentication information for the connection.  
This property applies only if `connectivity.protocol` is set to `ssh`.    
**`connectivity.auth.method`**  
The authentication method used to access a device over the given connectivity protocol.  
Supported values are:  
+ `pki`
+ `password`  
**`connectivity.auth.credentials`**  
The credentials used for authentication.    
**`connectivity.auth.credentials.password`**  
The password used for signing in to the device being tested.  
This value applies only if `connectivity.auth.method` is set to `password`.  
**`connectivity.auth.credentials.privKeyPath`**  
The full path to the private key used to sign in to the device under test.  
This value applies only if `connectivity.auth.method` is set to `pki`.  
**`connectivity.auth.credentials.user`**  
The user name for signing in to the device being tested.  
**`connectivity.serialPort`**  
Optional. The serial port to which the device is connected.  
This property applies only if `connectivity.protocol` is set to `uart`.  
**`connectivity.containerId`**  
The container ID or name of the Docker container being tested.  
This property applies only if `connectivity.protocol` is set to `docker`.  
**`connectivity.containerUser`**  
Optional. The name of the user to use inside the container. The default value is the user provided in the Dockerfile.  
This property applies only if `connectivity.protocol` is set to `docker`.
To check whether test runners have configured an incorrect device connection for a test, you can retrieve `pool.Devices[0].Connectivity.Protocol` from the state machine context and compare it to the expected value in a `Choice` state. If an incorrect protocol is used, then print a message using the `LogMessage` state and transition to the `Fail` state.  
Alternatively, you can use error handling code to report a test failure for incorrect device types.
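Putting the fields together, a minimal hypothetical `device.json` for a single SSH-connected device might look like the following. All IDs, addresses, and paths are placeholder values:

```
[
    {
        "id": "small-boards",
        "sku": "ABC123",
        "devices": [
            {
                "id": "my-device-1",
                "connectivity": {
                    "protocol": "ssh",
                    "ip": "192.168.1.23",
                    "auth": {
                        "method": "pki",
                        "credentials": {
                            "user": "tester",
                            "privKeyPath": "/home/tester/.ssh/id_rsa"
                        }
                    }
                }
            }
        ]
    }
]
```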

## (Optional) Configure userdata.json


The `userdata.json` file contains any additional information that is required by a test suite but is not specified in the `device.json` file. The format of this file depends on the [`userdata_scheme.json` file](idt-json-config.md#userdata-schema-json) that is defined in the test suite. If you are a test writer, make sure you provide this information to users who will run the test suites that you write.

## (Optional) Configure resource.json


The `resource.json` file contains information about any devices that will be used as resource devices. Resource devices are devices that are required to test certain capabilities of a device under test. For example, to test a device's Bluetooth capability, you might use a resource device to test that your device can connect to it successfully. Resource devices are optional, and you can require as many resources devices as you need. As a test writer, you use the [test.json file](idt-json-config.md#test-json) to define the resource device features that are required for a test. Test runners then use the `resource.json` file to provide a pool of resource devices that have the required features. Make sure you provide this information to users who will run the test suites that you write. 

Test runners can provide this information using the following template `resource.json` file located in the `<device-tester-extract-location>/configs/` folder.

```
[
    {
        "id": "<pool-id>",
        "features": [
            {
                "name": "<feature-name>",             
                "version": "<feature-value>",                
                "jobSlots": <job-slots>
            }
        ],     
        "devices": [
            {
                "id": "<device-id>",              
                "connectivity": {
                    "protocol": "ssh | uart | docker",                   
                    // ssh
                    "ip": "<ip-address>",
                    "port": <port-number>,
                    "publicKeyPath": "<public-key-path>",
                    "auth": {
                        "method": "pki | password",
                        "credentials": {
                            "user": "<user-name>", 
                            // pki
                            "privKeyPath": "/path/to/private/key",
                                         
                            // password
                            "password": "<password>",
                        }
                    },
                    
                    // uart
                    "serialPort": "<serial-port>",
                    
                    // docker
                    "containerId": "<container-id>",
                    "containerUser": "<container-user-name>",
                }
            }
        ]
    }
]
```

All fields that contain values are required as described here:

**`id`**  
A user-defined alphanumeric ID that uniquely identifies a collection of devices called a *device pool*. Devices that belong to a pool must have identical hardware. When you run a suite of tests, devices in the pool are used to parallelize the workload. Multiple devices are used to run different tests.

**`features`**  
Optional. An array that contains the device's supported features. The information required in this field is defined in the [test.json files](idt-json-config.md#test-json) in the test suite and determines which tests to run and how to run those tests. If the test suite does not require any features, then this field is not required.    
**`features.name`**  
The name of the feature.  
**`features.version`**  
The feature version.  
**`features.jobSlots`**  
Setting to indicate how many tests can concurrently use the device. The default value is `1`.

**`devices`**  <a name="device-array"></a>
An array of devices in the pool to be tested. At least one device is required.    
**`devices.id`**  
A user-defined unique identifier for the device being tested.  
**`connectivity.protocol`**  
The communication protocol used to communicate with this device. Each device in a pool must use the same protocol.  
Currently, the only supported values are `ssh` and `uart` for physical devices, and `docker` for Docker containers.  
**`connectivity.ip`**  
The IP address of the device being tested.  
This property applies only if `connectivity.protocol` is set to `ssh`.  
**`connectivity.port`**  
Optional. The port number to use for SSH connections.  
The default value is 22.  
This property applies only if `connectivity.protocol` is set to `ssh`.  
**`connectivity.publicKeyPath`**  
 Optional. The full path to the public key used to authenticate connections to the device under test. When you specify the `publicKeyPath`, IDT validates the device’s public key when it establishes an SSH connection to the device under test. If this value is not specified, IDT creates an SSH connection, but doesn’t validate the device’s public key.   
We strongly recommend that you specify the path to the public key, and that you use a secure method to fetch this public key. For standard command line-based SSH clients, the public key is provided in the `known_hosts` file. If you specify a separate public key file, this file must use the same format as the `known_hosts` file, that is, `ip-address key-type public-key`.   
**`connectivity.auth`**  
Authentication information for the connection.  
This property applies only if `connectivity.protocol` is set to `ssh`.    
**`connectivity.auth.method`**  
The authentication method used to access a device over the given connectivity protocol.  
Supported values are:  
+ `pki`
+ `password`  
**`connectivity.auth.credentials`**  
The credentials used for authentication.    
**`connectivity.auth.credentials.password`**  
The password used for signing in to the device being tested.  
This value applies only if `connectivity.auth.method` is set to `password`.  
**`connectivity.auth.credentials.privKeyPath`**  
The full path to the private key used to sign in to the device under test.  
This value applies only if `connectivity.auth.method` is set to `pki`.  
**`connectivity.auth.credentials.user`**  
The user name for signing in to the device being tested.  
**`connectivity.serialPort`**  
Optional. The serial port to which the device is connected.  
This property applies only if `connectivity.protocol` is set to `uart`.  
**`connectivity.containerId`**  
The container ID or name of the Docker container being tested.  
This property applies only if `connectivity.protocol` is set to `docker`.  
**`connectivity.containerUser`**  
Optional. The name of the user to use inside the container. The default value is the user provided in the Dockerfile.  
This property applies only if `connectivity.protocol` is set to `docker`.
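As an illustration, a minimal hypothetical `resource.json` that provides one UART-connected resource device with a single feature might look like the following. All IDs, feature names, and port values are placeholders:

```
[
    {
        "id": "bt-peers",
        "features": [
            {
                "name": "bluetooth",
                "version": "5.0",
                "jobSlots": 2
            }
        ],
        "devices": [
            {
                "id": "bt-peer-1",
                "connectivity": {
                    "protocol": "uart",
                    "serialPort": "/dev/ttyUSB0"
                }
            }
        ]
    }
]
```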

## (Optional) Configure config.json


The `config.json` file contains configuration information for IDT. Typically, test runners will not need to modify this file except to provide their Amazon user credentials for IDT, and optionally, an Amazon region. If Amazon credentials with the required permissions are provided, Amazon IoT Device Tester collects and submits usage metrics to Amazon. This is an opt-in feature and is used to improve IDT functionality. For more information, see [Submit IDT usage metrics](idt-usage-metrics.md).

Test runners can configure their Amazon credentials in one of the following ways:
+ **Credentials file**

  IDT uses the same credentials file as the Amazon CLI. For more information, see [Configuration and credential files](https://docs.amazonaws.cn/cli/latest/userguide/cli-config-files.html).

  The location of the credentials file varies, depending on the operating system you are using:
  + macOS, Linux: `~/.aws/credentials`
  + Windows: `C:\Users\UserName\.aws\credentials`
+ **Environment variables**

  Environment variables are variables maintained by the operating system and used by system commands. Variables defined during an SSH session are not available after that session is closed. IDT can use the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables to store Amazon credentials.

  To set these variables on Linux, macOS, or Unix, use **export**:

  ```
  export AWS_ACCESS_KEY_ID=<your_access_key_id>
  export AWS_SECRET_ACCESS_KEY=<your_secret_access_key>
  ```

  To set these variables on Windows, use **set**:

  ```
  set AWS_ACCESS_KEY_ID=<your_access_key_id>
  set AWS_SECRET_ACCESS_KEY=<your_secret_access_key>
  ```

To configure Amazon credentials for IDT, test runners edit the `auth` section in the `config.json` file located in the `<device-tester-extract-location>/configs/` folder.

```
{
    "log": {
        "location": "logs"
    },
    "configFiles": {
        "root": "configs",
        "device": "configs/device.json"
    },
    "testPath": "tests",
    "reportPath": "results",
    "awsRegion": "<region>",
    "auth": {
        "method": "file | environment",
        "credentials": {
            "profile": "<profile-name>"
        }
    }
}
```

All fields that contain values are required as described here:

**Note**  
All paths in this file are defined relative to the *<device-tester-extract-location>*.

**`log.location`**  
The path to the logs folder in the *<device-tester-extract-location>*.

**`configFiles.root`**  
The path to the folder that contains the configuration files.

**`configFiles.device`**  
The path to the `device.json` file.

**`testPath`**  
The path to the folder that contains test suites.

**`reportPath`**  
The path to the folder that will contain test results after IDT runs a test suite.

**`awsRegion`**  
Optional. The Amazon region that test suites will use. If not set, then test suites will use the default region specified in each test suite.

**`auth.method`**  
The method IDT uses to retrieve Amazon credentials. Supported values are `file` to retrieve credentials from a credentials file, and `environment` to retrieve credentials using environment variables.

**`auth.credentials.profile`**  
The credentials profile to use from the credentials file. This property applies only if `auth.method` is set to `file`.
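For example, a test runner who uses environment variables instead of a credentials file would set the `auth` section as follows; no `credentials.profile` entry is needed in that case:

```
"auth": {
    "method": "environment"
}
```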

# Debug and run custom test suites


After the [required configuration](set-config-custom.md) is set, IDT can run your test suite. The runtime of the full test suite depends on the hardware and the composition of the test suite. For reference, it takes approximately 30 minutes to complete the full FreeRTOS qualification test suite on a Raspberry Pi 3B.

As you write your test suite, you can use IDT to run the test suite in debug mode to check your code before you run it or provide it to test runners.

## Run IDT in debug mode


Because test suites depend on IDT to interact with devices, provide the context, and receive results, you cannot simply debug your test suites in an IDE without any IDT interaction. To enable debugging, the IDT CLI provides the `debug-test-suite` command that lets you run IDT in debug mode. Run the following command to view the available options for `debug-test-suite`:

```
devicetester_[linux | mac | win_x86-64] debug-test-suite -h
```

When you run IDT in debug mode, IDT does not actually launch the test suite or run the test orchestrator; instead, it responds to requests made from the test suite running in your IDE and prints the logs to the console. IDT does not time out; it waits to exit until manually interrupted. In debug mode, IDT also does not generate any report files. To debug your test suite, you must use your IDE to provide some information that IDT usually obtains from the configuration files. Make sure you provide the following information:
+ Environment variables and arguments for each test. IDT will not read this information from `test.json` or `suite.json`.
+ Arguments to select resource devices. IDT will not read this information from `test.json`.

To debug your test suites, complete the following steps:

1.  Create the settings configuration files that are required to run the test suite. For example, if your test suite requires the `device.json`, `resource.json`, and `userdata.json` files, make sure you configure all of them as needed. 

1. Run the following command to place IDT in debug mode and select any devices that are required to run the test.

   ```
   devicetester_[linux | mac | win_x86-64] debug-test-suite [options]
   ```

   After you run this command, IDT waits for requests from the test suite and then responds to them. IDT also generates the environment variables that are required by the test case process for the IDT Client SDK. 

1. In your IDE, use the `run` or `debug` configuration to do the following:

   1. Set the values of the IDT-generated environment variables.

   1. Set the value of any environment variables or arguments that you specified in your `test.json` and `suite.json` file.

   1. Set breakpoints as needed.

1. Run the test suite in your IDE. 

   You can debug and re-run the test suite as many times as needed. IDT does not time out in debug mode.

1.  After you complete debugging, interrupt IDT to exit debug mode.

## IDT CLI commands to run tests


The following section describes the IDT CLI commands:

------
#### [ IDT v4.0.0 ]

**`help`**  <a name="idt-command-help"></a>
Lists information about the specified command.

**`list-groups`**  <a name="idt-command-list-groups"></a>
Lists the groups in a given test suite.

**`list-suites`**  <a name="idt-command-list-suites"></a>
Lists the available test suites.

**`list-supported-products`**  
Lists the products supported by your version of IDT; in this case, the FreeRTOS versions and FreeRTOS qualification test suite versions available for the current IDT version.

**`list-test-cases`**  
Lists the test cases in a given test group. The following option is supported:  
+ `group-id`. The test group to search for. This option is required and must specify a single group.

**`run-suite`**  
Runs a suite of tests on a pool of devices. The following are some commonly used options:  
+ `suite-id`. The test suite version to run. If not specified, IDT uses the latest version in the `tests` folder.
+ `group-id`. The test groups to run, as a comma-separated list. If not specified, IDT runs all test groups in the test suite.
+ `test-id`. The test cases to run, as a comma-separated list. When specified, `group-id` must specify a single group.
+ `pool-id`. The device pool to test. Test runners must specify a pool if they have multiple device pools defined in their `device.json` file.
+ `timeout-multiplier`. Configures IDT to modify the test execution timeout specified in the `test.json` file for a test with a user-defined multiplier.
+ `stop-on-first-failure`. Configures IDT to stop execution on the first failure. This option should be used with `group-id` to debug the specified test groups.
+ `userdata`. Sets the file that contains user data information required to run the test suite. This is required only if `userdataRequired` is set to true in the `suite.json` file for the test suite.
For more information about `run-suite` options, use the `help` option:  

```
devicetester_[linux | mac | win_x86-64] run-suite -h
```
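For example, a hypothetical invocation that runs a single test group from a specific device pool might look like the following. The suite, group, pool, and file names are placeholder values:

```
devicetester_[linux | mac | win_x86-64] run-suite \
    --suite-id MyTestSuite_1.0.0 \
    --group-id myTestGroup \
    --pool-id my-pool \
    --userdata userdata.json
```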

**`debug-test-suite`**  
Runs the test suite in debug mode. For more information, see [Run IDT in debug mode](#idt-debug-mode).

------

# Review IDT test results and logs


This section describes the format in which IDT generates console logs and test reports.

## Console message format


Amazon IoT Device Tester uses a standard format for printing messages to the console when it starts a test suite. The following excerpt shows an example of a console message generated by IDT.

```
[INFO] [2000-01-02 03:04:05]: Using suite: MyTestSuite_1.0.0 executionId=9a52f362-1227-11eb-86c9-8c8590419f30
```

Most console messages consist of the following fields:

**`time`**  
A full ISO 8601 timestamp for the logged event.

**`level`**  
The message level for the logged event. Typically, the logged message level is one of `info`, `warn`, or `error`. IDT issues a `fatal` or `panic` message if it encounters an unexpected event that causes it to exit early.

**`msg`**  
The logged message. 

**`executionId`**  
A unique ID string for the current IDT process. This ID is used to differentiate between individual IDT runs.

Console messages generated from a test suite provide additional information about the device under test and the test suite, test group, and test cases that IDT runs. The following excerpt shows an example of a console message generated from a test suite.

```
[INFO] [2000-01-02 03:04:05]: Hello world! suiteId=MyTestSuite groupId=myTestGroup testCaseId=myTestCase deviceId=my-device executionId=9a52f362-1227-11eb-86c9-8c8590419f30
```

The test-suite specific part of the console message contains the following fields:

**`suiteId`**  
The name of the test suite currently running.

**`groupId`**  
The ID of the test group currently running.

**`testCaseId`**  
The ID of the test case currently running. 

**`deviceId`**  
The ID of the device under test that the current test case is using.
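Console lines of this shape are plain text, so they are straightforward to post-process. The following sketch (a hypothetical helper, not part of IDT) parses the level, timestamp, and trailing `key=value` fields from a line like the example above:

```python
import re

# A console line in the format shown above.
LINE = ("[INFO] [2000-01-02 03:04:05]: Hello world! "
        "suiteId=MyTestSuite groupId=myTestGroup testCaseId=myTestCase "
        "deviceId=my-device executionId=9a52f362-1227-11eb-86c9-8c8590419f30")

# Split the line into "[level] [timestamp]: message".
m = re.match(r"\[(\w+)\] \[([^\]]+)\]: (.*)", LINE)
level, timestamp, rest = m.groups()

# Collect the key=value fields appended to the message.
fields = dict(re.findall(r"(\w+)=(\S+)", rest))

print(level)              # INFO
print(fields["suiteId"])  # MyTestSuite
print(fields["deviceId"]) # my-device
```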

When a test run completes, IDT prints a test summary. The summary contains information about the test suite, the test results for each group that was run, and the locations of the generated logs and report files. The following example shows a test summary message.

```
========== Test Summary ==========
Execution Time:     5m00s
Tests Completed:    4
Tests Passed:       3
Tests Failed:       1
Tests Skipped:      0
----------------------------------
Test Groups:
    GroupA:         PASSED
    GroupB:         FAILED
----------------------------------
Failed Tests:
    Group Name: GroupB
        Test Name: TestB1
            Reason: Something bad happened
----------------------------------
Path to Amazon IoT Device Tester Report: /path/to/awsiotdevicetester_report.xml
Path to Test Execution Logs: /path/to/logs
Path to Aggregated JUnit Report: /path/to/MyTestSuite_Report.xml
```

## Amazon IoT Device Tester report schema


 `awsiotdevicetester_report.xml` is a signed report that contains the following information: 
+ The IDT version.
+ The test suite version.
+ The report signature and key used to sign the report.
+ The device SKU and the device pool name specified in the `device.json` file.
+ The product version and the device features that were tested.
+ The aggregate summary of test results. This information is the same as that contained in the `suite-name_report.xml` file.

```
<apnreport>
    <awsiotdevicetesterversion>idt-version</awsiotdevicetesterversion>
    <testsuiteversion>test-suite-version</testsuiteversion>
    <signature>signature</signature>
    <keyname>keyname</keyname>
    <session>
        <testsession>execution-id</testsession>
        <starttime>start-time</starttime>
        <endtime>end-time</endtime>
    </session>
    <awsproduct>
        <name>product-name</name>
        <version>product-version</version>
        <features>
            <feature name="<feature-name>" value="supported | not-supported | <feature-value>" type="optional | required"/>
        </features>
    </awsproduct>
    <device>
        <sku>device-sku</sku>
        <name>device-name</name>
        <features>
            <feature name="<feature-name>" value="<feature-value>"/>
        </features>
        <executionMethod>ssh | uart | docker</executionMethod>
    </device>
    <devenvironment>
        <os name="<os-name>"/>
    </devenvironment>
    <report>
        <suite-name-report-contents>
    </report>
</apnreport>
```

The `awsiotdevicetester_report.xml` file contains an `<awsproduct>` tag that contains information about the product being tested and the product features that were validated after running a suite of tests.

**Attributes used in the `<awsproduct>` tag**

**`name`**  
The name of the product being tested.

**`version`**  
The version of the product being tested.

**`features`**  
The features validated. Features marked as `required` are required for the test suite to validate the device. The following snippet shows how this information appears in the `awsiotdevicetester_report.xml` file.  

```
<feature name="ssh" value="supported" type="required"></feature>
```
Features marked as `optional` are not required for validation. The following snippets show optional features.  

```
<feature name="hsi" value="supported" type="optional"></feature>
<feature name="mqtt" value="not-supported" type="optional"></feature>
```
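As a sketch of how these feature results might be consumed, the following Python snippet parses a hypothetical report fragment that follows the schema above and flags any `required` feature whose value is not `supported`:

```python
import xml.etree.ElementTree as ET

# Hypothetical report fragment following the awsiotdevicetester_report.xml schema.
xml = """
<apnreport>
  <awsproduct>
    <name>FreeRTOS</name>
    <features>
      <feature name="ssh" value="supported" type="required"/>
      <feature name="hsi" value="supported" type="optional"/>
      <feature name="mqtt" value="not-supported" type="optional"/>
    </features>
  </awsproduct>
</apnreport>
"""

root = ET.fromstring(xml)
# Required features must report "supported" for the device to pass validation.
unsupported_required = [
    f.get("name")
    for f in root.iterfind("./awsproduct/features/feature")
    if f.get("type") == "required" and f.get("value") != "supported"
]
print(unsupported_required)  # → []
```

An empty list means every required feature was validated; any names printed here indicate the device did not pass validation for those features.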

## Test suite report schema


The `suite-name_report.xml` report is in [JUnit XML format](https://llg.cubic.org/docs/junit/). You can integrate it into continuous integration and deployment platforms like [Jenkins](https://jenkins.io/), [Bamboo](https://www.atlassian.com/software/bamboo), and so on. The report contains an aggregate summary of test results.

```
<testsuites name="<suite-name> results" time="<run-duration>" tests="<number-of-test>" failures="<number-of-tests>" skipped="<number-of-tests>" errors="<number-of-tests>" disabled="0">
    <testsuite name="<test-group-id>" package="" tests="<number-of-tests>" failures="<number-of-tests>" skipped="<number-of-tests>" errors="<number-of-tests>" disabled="0">
        <!--success-->
        <testcase classname="<classname>" name="<name>" time="<run-duration>"/>
        <!--failure-->
        <testcase classname="<classname>" name="<name>" time="<run-duration>">
            <failure type="<failure-type>">
                reason
            </failure>
        </testcase>
        <!--skipped-->
        <testcase classname="<classname>" name="<name>" time="<run-duration>">
            <skipped>
                reason
            </skipped>
        </testcase>
        <!--error-->
        <testcase classname="<classname>" name="<name>" time="<run-duration>">
            <error>
                reason
            </error>
        </testcase>
    </testsuite>
</testsuites>
```

The report section in both the `awsiotdevicetester_report.xml` and `suite-name_report.xml` files lists the tests that were run and the results.

The first XML tag `<testsuites>` contains the summary of the test execution. For example:

```
<testsuites name="MyTestSuite results" time="2299" tests="28" failures="0" errors="0" disabled="0">
```

**Attributes used in the `<testsuites>` tag**

**`name`**  
The name of the test suite.

**`time`**  
The time, in seconds, it took to run the test suite.

**`tests`**  
The number of tests executed.

**`failures`**  
The number of tests that were run, but did not pass.

**`errors`**  
The number of tests that IDT couldn't execute.

**`disabled`**  
This attribute is not used and can be ignored.

In the event of test failures or errors, you can identify the test that failed by reviewing the `<testsuites>` XML tags. The `<testsuite>` XML tags inside the `<testsuites>` tag show the test result summary for a test group. For example:

```
<testsuite name="combination" package="" tests="1" failures="0" time="161" disabled="0" errors="0" skipped="0">
```

The format is similar to the `<testsuites>` tag, but with a `skipped` attribute that is not used and can be ignored. Inside each `<testsuite>` XML tag, there are `<testcase>` tags for each executed test for a test group. For example:

```
<testcase classname="Security Test" name="IP Change Tests" attempts="1"></testcase>
```

**Attributes used in the `<testcase>` tag**

**`name`**  
The name of the test.

**`attempts`**  
The number of times IDT executed the test case.

When a test fails or an error occurs, `<failure>` or `<error>` tags are added to the `<testcase>` tag with information for troubleshooting. For example:

```
<testcase classname="mcu.Full_MQTT" name="MQTT_TestCase" attempts="1">
	<failure type="Failure">Reason for the test failure</failure>
	<error>Reason for the test execution error</error>
</testcase>
```
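Because the report is standard JUnit XML, you can parse it with any XML library. The following Python sketch walks a hypothetical report that follows the schema above and collects the group, test name, and reason for each `<failure>` or `<error>`:

```python
import xml.etree.ElementTree as ET

# Hypothetical aggregated JUnit report matching the schema above.
xml = """
<testsuites name="MyTestSuite results" time="300" tests="4" failures="1"
            skipped="0" errors="0" disabled="0">
  <testsuite name="GroupA" package="" tests="2" failures="0" skipped="0"
             errors="0" disabled="0">
    <testcase classname="mcu.Full_MQTT" name="TestA1" time="100"/>
    <testcase classname="mcu.Full_MQTT" name="TestA2" time="100"/>
  </testsuite>
  <testsuite name="GroupB" package="" tests="2" failures="1" skipped="0"
             errors="0" disabled="0">
    <testcase classname="mcu.Full_MQTT" name="TestB1" time="50">
      <failure type="Failure">Something bad happened</failure>
    </testcase>
    <testcase classname="mcu.Full_MQTT" name="TestB2" time="50"/>
  </testsuite>
</testsuites>
"""

root = ET.fromstring(xml)
# Collect (group, test, reason) for every test case that failed or errored.
problems = [
    (suite.get("name"), case.get("name"), issue.text.strip())
    for suite in root.iter("testsuite")
    for case in suite.iter("testcase")
    for issue in case.findall("failure") + case.findall("error")
]
print(problems)  # → [('GroupB', 'TestB1', 'Something bad happened')]
```

A script like this can gate a CI pipeline on the report without depending on a particular CI platform's JUnit integration.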

# Submit IDT usage metrics


If you provide Amazon credentials with required permissions, Amazon IoT Device Tester collects and submits usage metrics to Amazon. This is an opt-in feature and is used to improve IDT functionality. IDT collects information such as the following: 
+ The Amazon account ID used to run IDT
+  The IDT CLI commands used to run tests
+ The test suites that are run
+ The test suites in the *<device-tester-extract-location>* folder
+ The number of devices configured in the device pool
+ Test case names and run times
+ Test result information, such as whether tests passed, failed, encountered errors, or were skipped
+ Product features tested
+ IDT exit behavior, such as unexpected or early exits 

 All of the information that IDT sends is also logged to a `metrics.log` file in the `<device-tester-extract-location>/results/<execution-id>/` folder. You can view the log file to see the information that was collected during a test run. This file is generated only if you choose to collect usage metrics. 

You do not need to take any additional action to disable metrics collection. Simply do not store your Amazon credentials, and if you have stored Amazon credentials, do not configure the `config.json` file to access them. 

## Sign up for an Amazon Web Services account


If you do not have an Amazon Web Services account, use the following procedure to create one.

**To sign up for Amazon Web Services**

1. Open [http://www.amazonaws.cn/](http://www.amazonaws.cn/) and choose **Sign Up**.

1. Follow the on-screen instructions.

Amazon sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to [http://www.amazonaws.cn/](http://www.amazonaws.cn/) and choosing **My Account**.

## Secure IAM users


After you sign up for an Amazon Web Services account, safeguard your administrative user by turning on multi-factor authentication (MFA). For instructions, see [Enable a virtual MFA device for an IAM user (console)](https://docs.amazonaws.cn/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-iam-user) in the *IAM User Guide*.

To give other users access to your Amazon Web Services account resources, create IAM users. To secure your IAM users, turn on MFA and only give the IAM users the permissions needed to perform their tasks.

For more information about creating and securing IAM users, see the following topics in the *IAM User Guide*: 
+ [Creating an IAM user in your Amazon Web Services account](https://docs.amazonaws.cn//IAM/latest/UserGuide/id_users_create.html)
+ [Access management for Amazon resources](https://docs.amazonaws.cn/IAM/latest/UserGuide/access.html)
+ [Example IAM identity-based policies](https://docs.amazonaws.cn/IAM/latest/UserGuide/access_policies_examples.html)

To provide access, add permissions to your users, groups, or roles:
+ Users managed in IAM through an identity provider:

  Create a role for identity federation. Follow the instructions in [Create a role for a third-party identity provider (federation)](https://docs.amazonaws.cn//IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
+ IAM users:
  + Create a role that your user can assume. Follow the instructions in [Create a role for an IAM user](https://docs.amazonaws.cn//IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.
  + (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in [Adding permissions to a user (console)](https://docs.amazonaws.cn//IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

## Provide Amazon credentials to IDT


To allow IDT to access your Amazon credentials and submit metrics to Amazon, do the following:

1. Store the Amazon credentials for your IAM user as environment variables or in a credentials file:

   1. To use environment variables, run the following commands:

      ```
      export AWS_ACCESS_KEY_ID=access-key
      export AWS_SECRET_ACCESS_KEY=secret-access-key
      ```

   1. To use the credentials file, add the following information to the `.aws/credentials` file:

      ```
      [profile-name]
      aws_access_key_id=access-key
      aws_secret_access_key=secret-access-key
      ```

1. Configure the `auth` section of the `config.json` file. For more information, see [(Optional) Configure config.json](set-config-custom.md#config-json-custom).