

# Get started (Scala)
<a name="examples-gs-scala"></a>

**Note**  
Starting from version 1.15, Flink is Scala free. Applications can now use the Java API from any Scala version. Flink still uses Scala in a few key components internally, but doesn't expose Scala into the user code classloader. Because of that, you must add the Scala dependencies that your application needs to your JAR archive.  
For more information about Scala changes in Flink 1.15, see [ Scala Free in One Fifteen](https://flink.apache.org/2022/02/22/scala-free.html).

In this exercise, you create a Managed Service for Apache Flink application for Scala that uses Kinesis streams as its source and sink.

**Topics**
+ [Create dependent resources](#examples-gs-scala-resources)
+ [Write sample records to the input stream](#examples-gs-scala-write)
+ [Download and examine the application code](#examples-gs-scala-download)
+ [Compile and upload the application code](#examples-gs-scala-upload)
+ [Create and run the application (console)](gs-scala-7.md)
+ [Create and run the application (CLI)](examples-gs-scala-create-run-cli.md)
+ [Clean up Amazon resources](examples-gs-scala-cleanup.md)

## Create dependent resources
<a name="examples-gs-scala-resources"></a>

Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources: 
+ Two Kinesis streams for input and output.
+ An Amazon S3 bucket to store the application's code (`ka-app-code-<username>`) 

You can create the Kinesis streams and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics:
+ [Creating and Updating Data Streams](https://docs.amazonaws.cn/kinesis/latest/dev/amazon-kinesis-streams.html) in the *Amazon Kinesis Data Streams Developer Guide*. Name your data streams **ExampleInputStream** and **ExampleOutputStream**.

  **To create the data streams (Amazon CLI)**
  + To create the first stream (`ExampleInputStream`), use the following Amazon Kinesis create-stream Amazon CLI command.

    ```
    aws kinesis create-stream \
        --stream-name ExampleInputStream \
        --shard-count 1 \
        --region us-west-2 \
        --profile adminuser
    ```
  + To create the second stream that the application uses to write output, run the same command, changing the stream name to `ExampleOutputStream`.

    ```
    aws kinesis create-stream \
        --stream-name ExampleOutputStream \
        --shard-count 1 \
        --region us-west-2 \
        --profile adminuser
    ```
+ [How Do I Create an S3 Bucket?](https://docs.amazonaws.cn/AmazonS3/latest/userguide/create-bucket.html) in the *Amazon Simple Storage Service User Guide*. Give the Amazon S3 bucket a globally unique name by appending your login name, such as **ka-app-code-*<username>***.
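
If you script your setup, the same streams can also be created with the Amazon SDK for Python (Boto3), which the sample-record generator later in this tutorial uses as well. The following is a minimal sketch, not part of any official example; the stream names and shard count mirror the CLI commands above, and the Region is the one used throughout this tutorial:

```
def create_stream_args(stream_name, shard_count=1):
    """Build the keyword arguments for kinesis.create_stream;
    mirrors the --stream-name and --shard-count CLI options above."""
    return {"StreamName": stream_name, "ShardCount": shard_count}


def create_tutorial_streams(region="us-west-2"):
    """Create both tutorial streams. Requires boto3 and configured credentials."""
    import boto3  # Amazon SDK for Python

    kinesis = boto3.client("kinesis", region_name=region)
    for name in ("ExampleInputStream", "ExampleOutputStream"):
        kinesis.create_stream(**create_stream_args(name))
```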

**Other resources**

When you create your application, Managed Service for Apache Flink creates the following Amazon CloudWatch resources if they don't already exist:
+ A log group called `/aws/kinesis-analytics/MyApplication`
+ A log stream called `kinesis-analytics-log-stream`

## Write sample records to the input stream
<a name="examples-gs-scala-write"></a>

In this section, you use a Python script to write sample records to the stream for the application to process.

**Note**  
This section requires the [Amazon SDK for Python (Boto)](http://www.amazonaws.cn/developers/getting-started/python/).

**Note**  
The Python script in this section uses the Amazon CLI. You must configure your Amazon CLI to use your account credentials and default region. To configure your Amazon CLI, enter the following:  

```
aws configure
```

1. Create a file named `stock.py` with the following contents:

   ```
   import datetime
   import json
   import random
   import boto3
   
   STREAM_NAME = "ExampleInputStream"
   
   
   def get_data():
       return {
           'event_time': datetime.datetime.now().isoformat(),
           'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']),
           'price': round(random.random() * 100, 2)}
   
   
   def generate(stream_name, kinesis_client):
       while True:
           data = get_data()
           print(data)
           kinesis_client.put_record(
               StreamName=stream_name,
               Data=json.dumps(data),
               PartitionKey="partitionkey")
   
   
   if __name__ == '__main__':
       generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2'))
   ```

1. Run the `stock.py` script: 

   ```
   $ python stock.py
   ```

   Keep the script running while completing the rest of the tutorial.
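
To confirm that records are arriving, you can read a few of them back from the input stream. This is an illustrative sketch, not part of the tutorial code; it reads from the first shard only and assumes the same Region and stream name as `stock.py`:

```
import json


def decode_record(record_bytes):
    """Decode one Kinesis record payload written by stock.py."""
    return json.loads(record_bytes.decode("utf-8"))


def read_sample_records(stream_name="ExampleInputStream", region="us-west-2", limit=5):
    """Fetch a few records from the first shard. Requires boto3 and credentials."""
    import boto3

    kinesis = boto3.client("kinesis", region_name=region)
    shard_id = kinesis.describe_stream(StreamName=stream_name)[
        "StreamDescription"]["Shards"][0]["ShardId"]
    iterator = kinesis.get_shard_iterator(
        StreamName=stream_name, ShardId=shard_id,
        ShardIteratorType="TRIM_HORIZON")["ShardIterator"]
    records = kinesis.get_records(ShardIterator=iterator, Limit=limit)["Records"]
    return [decode_record(r["Data"]) for r in records]
```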

## Download and examine the application code
<a name="examples-gs-scala-download"></a>

The Scala application code for this example is available from GitHub. To download the application code, do the following:

1. Install the Git client if you haven't already. For more information, see [Installing Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). 

1. Clone the remote repository with the following command:

   ```
   git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
   ```

1. Navigate to the `amazon-kinesis-data-analytics-examples/scala/GettingStarted` directory.

Note the following about the application code:
+ A `build.sbt` file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries.
+ The `BasicStreamingJob.scala` file contains the main method that defines the application's functionality.
+ The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source:

  ```
  private def createSource: FlinkKinesisConsumer[String] = {
    val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties
    val inputProperties = applicationProperties.get("ConsumerConfigProperties")
  
    new FlinkKinesisConsumer[String](inputProperties.getProperty(streamNameKey, defaultInputStreamName),
      new SimpleStringSchema, inputProperties)
  }
  ```

  The application also uses a Kinesis sink to write into the result stream. The following snippet creates the Kinesis sink:

  ```
  private def createSink: KinesisStreamsSink[String] = {
    val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties
    val outputProperties = applicationProperties.get("ProducerConfigProperties")
  
    KinesisStreamsSink.builder[String]
      .setKinesisClientProperties(outputProperties)
      .setSerializationSchema(new SimpleStringSchema)
      .setStreamName(outputProperties.getProperty(streamNameKey, defaultOutputStreamName))
      .setPartitionKeyGenerator((element: String) => String.valueOf(element.hashCode))
      .build
  }
  ```
+ The application creates source and sink connectors to access external resources using a `StreamExecutionEnvironment` object.
+ The application creates the source and sink connectors using dynamic application properties. The application reads its runtime properties to configure the connectors. For more information about runtime properties, see [Runtime Properties](https://docs.aws.amazon.com/managed-flink/latest/java/how-properties.html).
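
At runtime, `KinesisAnalyticsRuntime.getApplicationProperties` returns the property groups that you configure for the application (you define them later in this tutorial) as a map from group ID to a `java.util.Properties` object. The lookup-with-default pattern used by `createSource` and `createSink` can be sketched in Python as follows; the group and key names match this tutorial's configuration, but the code is an illustration, not the actual runtime API:

```
def get_property(groups, group_id, key, default):
    """Look up a key in one property group, falling back to a default --
    analogous to inputProperties.getProperty(streamNameKey, defaultInputStreamName)."""
    return groups.get(group_id, {}).get(key, default)


# Property groups as configured for this tutorial's application.
groups = {
    "ConsumerConfigProperties": {
        "aws.region": "us-west-2",
        "stream.name": "ExampleInputStream",
        "flink.stream.initpos": "LATEST",
    },
    "ProducerConfigProperties": {
        "aws.region": "us-west-2",
        "stream.name": "ExampleOutputStream",
    },
}

input_stream = get_property(groups, "ConsumerConfigProperties",
                            "stream.name", "ExampleInputStream")
```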

## Compile and upload the application code
<a name="examples-gs-scala-upload"></a>

In this section, you compile and upload your application code to the Amazon S3 bucket you created in the [Create dependent resources](#examples-gs-scala-resources) section.

**Compile the Application Code**

In this section, you use the [SBT](https://www.scala-sbt.org/) build tool to build the Scala code for the application. To install SBT, see [Install sbt with cs setup](https://www.scala-sbt.org/download.html). You also need to install the Java Development Kit (JDK). See [Prerequisites for Completing the Exercises](https://docs.amazonaws.cn/managed-flink/latest/java/getting-started.html#setting-up-prerequisites).

1. To use your application code, compile and package it into a JAR file. You can compile and package your code with SBT:

   ```
   sbt assembly
   ```

1. If the application compiles successfully, the following file is created:

   ```
   target/scala-3.2.0/getting-started-scala-1.0.jar
   ```

**Upload the Apache Flink Streaming Scala Code**

In this section, you create an Amazon S3 bucket and upload your application code.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose **Create bucket**.

1. Enter `ka-app-code-<username>` in the **Bucket name** field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose **Next**.

1. In **Configure options**, keep the settings as they are, and choose **Next**.

1. In **Set permissions**, keep the settings as they are, and choose **Next**.

1. Choose **Create bucket**.

1. Choose the `ka-app-code-<username>` bucket, and then choose **Upload**.

1. In the **Select files** step, choose **Add files**. Navigate to the `getting-started-scala-1.0.jar` file that you created in the previous step. 

1. You don't need to change any of the settings for the object, so choose **Upload**.

Your application code is now stored in an Amazon S3 bucket where your application can access it.

# Create and run the application (console)
<a name="gs-scala-7"></a>

Follow these steps to create, configure, update, and run the application using the console.

## Create the Application
<a name="gs-scala-7-console-create"></a>

1. Sign in to the Amazon Web Services Management Console, and open the Amazon MSF console at https://console.aws.amazon.com/flink.

1. On the Managed Service for Apache Flink dashboard, choose **Create analytics application**.

1. On the **Managed Service for Apache Flink - Create application** page, provide the application details as follows:
   + For **Application name**, enter **MyApplication**.
   + For **Description**, enter **My scala test app**.
   + For **Runtime**, choose **Apache Flink**.
   + Keep the version as **Apache Flink version 1.19.1**.

1. For **Access permissions**, choose **Create / update IAM role `kinesis-analytics-MyApplication-us-west-2`**.

1. Choose **Create application**.

**Note**  
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:  
Policy: `kinesis-analytics-service-MyApplication-us-west-2`
Role: `kinesis-analytics-MyApplication-us-west-2`

## Configure the application
<a name="gs-scala-7-console-configure"></a>

Use the following procedure to configure the application.

**To configure the application**

1. On the **MyApplication** page, choose **Configure**.

1. On the **Configure application** page, provide the **Code location**:
   + For **Amazon S3 bucket**, enter **ka-app-code-*<username>***.
   + For **Path to Amazon S3 object**, enter **getting-started-scala-1.0.jar**.

1. Under **Access to application resources**, for **Access permissions**, choose **Create / update IAM role `kinesis-analytics-MyApplication-us-west-2`**.

1. Under **Properties**, choose **Add group**. 

1. Enter the following application properties and values:

   | Group ID | Key | Value |
   | --- | --- | --- |
   | `ConsumerConfigProperties` | `aws.region` | `us-west-2` |
   | `ConsumerConfigProperties` | `stream.name` | `ExampleInputStream` |
   | `ConsumerConfigProperties` | `flink.stream.initpos` | `LATEST` |

   Choose **Save**.

1. Under **Properties**, choose **Add group** again. 

1. Enter the following application properties and values:

   | Group ID | Key | Value |
   | --- | --- | --- |
   | `ProducerConfigProperties` | `aws.region` | `us-west-2` |
   | `ProducerConfigProperties` | `stream.name` | `ExampleOutputStream` |

1. Under **Monitoring**, ensure that the **Monitoring metrics level** is set to **Application**.

1. For **CloudWatch logging**, choose the **Enable** check box.

1. Choose **Update**.

**Note**  
When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:   
Log group: `/aws/kinesis-analytics/MyApplication`
Log stream: `kinesis-analytics-log-stream`

## Edit the IAM policy
<a name="gs-scala-7-console-iam"></a>

Edit the IAM policy to add permissions to access the Amazon S3 bucket.

**To edit the IAM policy to add S3 bucket permissions**

1. Open the IAM console at [https://console.amazonaws.cn/iam/](https://console.amazonaws.cn/iam/).

1. Choose **Policies**. Choose the **`kinesis-analytics-service-MyApplication-us-west-2`** policy that the console created for you in the previous section. 

1. On the **Summary** page, choose **Edit policy**. Choose the **JSON** tab.

1. Add the following policy example to the policy. Replace the sample account IDs (*012345678901*) with your account ID.

------
#### [ JSON ]


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "ReadCode",
               "Effect": "Allow",
               "Action": [
                   "s3:GetObject",
                   "s3:GetObjectVersion"
               ],
               "Resource": [
                   "arn:aws-cn:s3:::ka-app-code-username/getting-started-scala-1.0.jar"
               ]
           },
           {
               "Sid": "DescribeLogGroups",
               "Effect": "Allow",
               "Action": [
                   "logs:DescribeLogGroups"
               ],
               "Resource": [
                   "arn:aws-cn:logs:us-west-2:012345678901:log-group:*"
               ]
           },
           {
               "Sid": "DescribeLogStreams",
               "Effect": "Allow",
               "Action": [
                   "logs:DescribeLogStreams"
               ],
               "Resource": [
                   "arn:aws-cn:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
               ]
           },
           {
               "Sid": "PutLogEvents",
               "Effect": "Allow",
               "Action": [
                   "logs:PutLogEvents"
               ],
               "Resource": [
                   "arn:aws-cn:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
               ]
           },
           {
               "Sid": "ReadInputStream",
               "Effect": "Allow",
               "Action": "kinesis:*",
               "Resource": "arn:aws-cn:kinesis:us-west-2:012345678901:stream/ExampleInputStream"
           },
           {
               "Sid": "WriteOutputStream",
               "Effect": "Allow",
               "Action": "kinesis:*",
               "Resource": "arn:aws-cn:kinesis:us-west-2:012345678901:stream/ExampleOutputStream"
           }
       ]
   }
   ```

------
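
If you edit the policy JSON outside the console, a small script can substitute your account ID and bucket suffix into the example before you paste it in. This is a convenience sketch for this tutorial only; the placeholder strings it replaces are the ones used in the policy above:

```
import json


def personalize_policy(policy_text, account_id, username):
    """Replace the sample account ID and bucket-name suffix used in the
    tutorial's policy example with your own values."""
    return (policy_text
            .replace("012345678901", account_id)
            .replace("ka-app-code-username", f"ka-app-code-{username}"))


sample = '{"Resource": "arn:aws-cn:s3:::ka-app-code-username/getting-started-scala-1.0.jar"}'
patched = json.loads(personalize_policy(sample, "111122223333", "alice"))
```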

## Run the application
<a name="gs-scala-7-console-run"></a>

To view the Flink job graph, run the application, open the Apache Flink dashboard, and choose the desired Flink job.

## Stop the application
<a name="gs-scala-7-console-stop"></a>

To stop the application, on the **MyApplication** page, choose **Stop**. Confirm the action.

# Create and run the application (CLI)
<a name="examples-gs-scala-create-run-cli"></a>

In this section, you use the Amazon Command Line Interface to create and run the Managed Service for Apache Flink application. Use the *kinesisanalyticsv2* Amazon CLI command to create and interact with Managed Service for Apache Flink applications.

## Create a permissions policy
<a name="examples-gs-scala-permissions"></a>

**Note**  
You must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams. 

First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream.

Use the following code to create the `AKReadSourceStreamWriteSinkStream` permissions policy. Replace **username** with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) **(123456789012)** with your account ID.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadCode",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws-cn:s3:::ka-app-code-username/getting-started-scala-1.0.jar"
            ]
        },
        {
            "Sid": "DescribeLogGroups",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups"
            ],
            "Resource": [
                "arn:aws-cn:logs:us-west-2:123456789012:*"
            ]
        },
        {
            "Sid": "DescribeLogStreams",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogStreams"
            ],
            "Resource": [
                "arn:aws-cn:logs:us-west-2:123456789012:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
            ]
        },
        {
            "Sid": "PutLogEvents",
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws-cn:logs:us-west-2:123456789012:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
            ]
        },
        {
            "Sid": "ReadInputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws-cn:kinesis:us-west-2:123456789012:stream/ExampleInputStream"
        },
        {
            "Sid": "WriteOutputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws-cn:kinesis:us-west-2:123456789012:stream/ExampleOutputStream"
        }
    ]
}
```

------

For step-by-step instructions to create a permissions policy, see [Tutorial: Create and Attach Your First Customer Managed Policy](https://docs.amazonaws.cn/IAM/latest/UserGuide/tutorial_managed-policies.html#part-two-create-policy) in the *IAM User Guide*.

## Create an IAM role
<a name="examples-gs-scala-iam-policy"></a>

In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream.

Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role.

You attach the permissions policy that you created in the preceding section to this role. 

**To create an IAM role**

1. Open the IAM console at [https://console.amazonaws.cn/iam/](https://console.amazonaws.cn/iam/).

1. In the navigation pane, choose **Roles** and then **Create Role**.

1. Under **Select type of trusted identity**, choose **Amazon Service**.

1. Under **Choose the service that will use this role**, choose **Kinesis**.

1. Under **Select your use case**, choose **Managed Service for Apache Flink**.

1. Choose **Next: Permissions**.

1. On the **Attach permissions policies** page, choose **Next: Review**. You attach permissions policies after you create the role.

1. On the **Create role** page, enter **MF-stream-rw-role** for the **Role name**. Choose **Create role**. 

    Now you have created a new IAM role called `MF-stream-rw-role`. Next, you update the trust and permissions policies for the role.

1. Attach the permissions policy to the role.
**Note**  
For this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, [Create a Permissions Policy](https://docs.amazonaws.cn/managed-flink/latest/java/get-started-exercise.html#get-started-exercise-7-cli-policy).

   1. On the **Summary** page, choose the **Permissions** tab.

   1. Choose **Attach Policies**.

   1. In the search box, enter **AKReadSourceStreamWriteSinkStream** (the policy that you created in the previous section). 

   1. Choose the `AKReadSourceStreamWriteSinkStream` policy, and choose **Attach policy**.

You have now created the service execution role that your application uses to access resources. Make a note of the ARN of the new role.

For step-by-step instructions for creating a role, see [Creating an IAM Role (Console)](https://docs.amazonaws.cn/IAM/latest/UserGuide/id_roles_create_for-user.html#roles-creatingrole-user-console) in the *IAM User Guide*.

## Create the application
<a name="examples-gs-scala-create-application-cli"></a>

Save the following JSON code to a file named `create_request.json`. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID.

```
{
    "ApplicationName": "getting_started",
    "ApplicationDescription": "Scala getting started application",
    "RuntimeEnvironment": "FLINK-1_19",
    "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role",
    "ApplicationConfiguration": {
        "ApplicationCodeConfiguration": {
            "CodeContent": {
                "S3ContentLocation": {
                    "BucketARN": "arn:aws:s3:::ka-app-code-username",
                    "FileKey": "getting-started-scala-1.0.jar"
                }
            },
            "CodeContentType": "ZIPFILE"
        },
        "EnvironmentProperties":  { 
         "PropertyGroups": [ 
            { 
               "PropertyGroupId": "ConsumerConfigProperties",
               "PropertyMap" : {
                    "aws.region" : "us-west-2",
                    "stream.name" : "ExampleInputStream",
                    "flink.stream.initpos" : "LATEST"
               }
            },
            { 
               "PropertyGroupId": "ProducerConfigProperties",
               "PropertyMap" : {
                    "aws.region" : "us-west-2",
                    "stream.name" : "ExampleOutputStream"
               }
            }
         ]
      }
    },
    "CloudWatchLoggingOptions": [ 
      { 
         "LogStreamARN": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
      }
   ]
}
```
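
A malformed request file makes the `create-application` call below fail with a parsing error, so it can help to sanity-check `create_request.json` first. The following sketch only verifies well-formed JSON and the top-level keys used in this tutorial's request; it is not an exhaustive validation of the CreateApplication API:

```
import json

REQUIRED_KEYS = ("ApplicationName", "RuntimeEnvironment",
                 "ServiceExecutionRole", "ApplicationConfiguration")


def missing_keys(request_text):
    """Return the required top-level keys absent from the request JSON.
    Raises ValueError if the text is not valid JSON."""
    request = json.loads(request_text)
    return [key for key in REQUIRED_KEYS if key not in request]


minimal = json.dumps({
    "ApplicationName": "getting_started",
    "RuntimeEnvironment": "FLINK-1_19",
    "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role",
    "ApplicationConfiguration": {},
})
```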

Execute the [CreateApplication](https://docs.amazonaws.cn/managed-flink/latest/apiv2/API_CreateApplication.html) action with the following request to create the application:

```
aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json
```

The application is now created. You start the application in the next step.

## Start the application
<a name="examples-gs-scala-start"></a>

In this section, you use the [StartApplication](https://docs.amazonaws.cn/managed-flink/latest/apiv2/API_StartApplication.html) action to start the application.

**To start the application**

1. Save the following JSON code to a file named `start_request.json`.

   ```
   {
       "ApplicationName": "getting_started",
       "RunConfiguration": {
           "ApplicationRestoreConfiguration": { 
            "ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT"
            }
       }
   }
   ```

1. Execute the `StartApplication` action with the preceding request to start the application:

   ```
   aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json
   ```

The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working.

## Stop the application
<a name="examples-s3sink-scala-stop"></a>

In this section, you use the [StopApplication](https://docs.amazonaws.cn/managed-flink/latest/apiv2/API_StopApplication.html) action to stop the application.

**To stop the application**

1. Save the following JSON code to a file named `stop_request.json`.

   ```
   {
      "ApplicationName": "getting_started"
   }
   ```

1. Execute the `StopApplication` action with the preceding request to stop the application:

   ```
   aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json
   ```

The application is now stopped.

## Add a CloudWatch logging option
<a name="examples-s3sink-scala-cw-option"></a>

You can use the Amazon CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see [Setting Up Application Logging](https://docs.amazonaws.cn/managed-flink/latest/java/cloudwatch-logs.html).

## Update environment properties
<a name="examples-s3sink-scala-update-environment-properties"></a>

In this section, you use the [UpdateApplication](https://docs.amazonaws.cn/managed-flink/latest/apiv2/API_UpdateApplication.html) action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams.

**To update environment properties for the application**

1. Save the following JSON code to a file named `update_properties_request.json`.

   ```
   {
       "ApplicationName": "getting_started",
       "CurrentApplicationVersionId": 1,
       "ApplicationConfigurationUpdate": {
           "EnvironmentPropertyUpdates": {
               "PropertyGroups": [
                   {
                       "PropertyGroupId": "ConsumerConfigProperties",
                       "PropertyMap": {
                           "aws.region": "us-west-2",
                           "stream.name": "ExampleInputStream",
                           "flink.stream.initpos": "LATEST"
                       }
                   },
                   {
                       "PropertyGroupId": "ProducerConfigProperties",
                       "PropertyMap": {
                           "aws.region": "us-west-2",
                           "stream.name": "ExampleOutputStream"
                       }
                   }
               ]
           }
       }
   }
   ```

1. Execute the `UpdateApplication` action with the preceding request to update environment properties:

   ```
   aws kinesisanalyticsv2 update-application --cli-input-json file://update_properties_request.json
   ```

## Update the application code
<a name="examples-s3sink-scala-update-app-code"></a>

When you need to update your application code with a new version of your code package, you use the [UpdateApplication](https://docs.amazonaws.cn/managed-flink/latest/apiv2/API_UpdateApplication.html) CLI action.

**Note**  
To load a new version of the application code with the same file name, you must specify the new object version. For more information about using Amazon S3 object versions, see [Enabling or Disabling Versioning](https://docs.amazonaws.cn/AmazonS3/latest/user-guide/enable-versioning.html).

To use the Amazon CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call `UpdateApplication`, specifying the same Amazon S3 bucket and object name, and the new object version. The application will restart with the new code package.

The following sample request for the `UpdateApplication` action reloads the application code and restarts the application. Update the `CurrentApplicationVersionId` to the current application version. You can check the current application version using the `ListApplications` or `DescribeApplication` actions. Update the bucket name suffix (<username>) with the suffix that you chose in the [Create dependent resources](examples-gs-scala.md#examples-gs-scala-resources) section.

```
{
    "ApplicationName": "getting_started",
    "CurrentApplicationVersionId": 1,
    "ApplicationConfigurationUpdate": {
        "ApplicationCodeConfigurationUpdate": {
            "CodeContentUpdate": {
                "S3ContentLocationUpdate": {
                    "BucketARNUpdate": "arn:aws:s3:::ka-app-code-<username>",
                    "FileKeyUpdate": "getting-started-scala-1.0.jar",
                    "ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU"
                }
            }
        }
    }
}
```
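
Because the request shape above is deeply nested, it can be convenient to assemble it programmatically. This sketch builds the same `UpdateApplication` request; the object version passed in is a placeholder that you replace with the version ID that Amazon S3 reports for your uploaded JAR:

```
def build_code_update(app_name, version_id, bucket_suffix, object_version):
    """Assemble an UpdateApplication request that points the application at a
    new version of the code object in Amazon S3 (mirrors the sample above)."""
    return {
        "ApplicationName": app_name,
        "CurrentApplicationVersionId": version_id,
        "ApplicationConfigurationUpdate": {
            "ApplicationCodeConfigurationUpdate": {
                "CodeContentUpdate": {
                    "S3ContentLocationUpdate": {
                        "BucketARNUpdate": f"arn:aws:s3:::ka-app-code-{bucket_suffix}",
                        "FileKeyUpdate": "getting-started-scala-1.0.jar",
                        "ObjectVersionUpdate": object_version,
                    }
                }
            }
        },
    }


request = build_code_update("getting_started", 1, "username",
                            "SAMPLEUehYngP87ex1nzYIGYgfhypvDU")
```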

# Clean up Amazon resources
<a name="examples-gs-scala-cleanup"></a>

This section includes procedures for cleaning up Amazon resources created in this tutorial.

**Topics**
+ [Delete your Managed Service for Apache Flink application](#examples-gs-scala-cleanup-app)
+ [Delete your Kinesis data streams](#examples-gs-scala-cleanup-stream)
+ [Delete your Amazon S3 object and bucket](#examples-gs-scala-cleanup-s3)
+ [Delete your IAM resources](#examples-gs-scala-cleanup-iam)
+ [Delete your CloudWatch resources](#examples-gs-scala-cleanup-cw)

## Delete your Managed Service for Apache Flink application
<a name="examples-gs-scala-cleanup-app"></a>

1. Sign in to the Amazon Web Services Management Console, and open the Amazon MSF console at https://console.aws.amazon.com/flink.

1. In the Managed Service for Apache Flink panel, choose **MyApplication**.

1. On the application's page, choose **Delete** and then confirm the deletion.

## Delete your Kinesis data streams
<a name="examples-gs-scala-cleanup-stream"></a>

1. Open the Kinesis console at [https://console.amazonaws.cn/kinesis](https://console.amazonaws.cn/kinesis).

1. In the Kinesis Data Streams panel, choose **ExampleInputStream**.

1. On the **ExampleInputStream** page, choose **Delete Kinesis Stream** and then confirm the deletion.

1. On the **Kinesis streams** page, choose **ExampleOutputStream**, choose **Actions**, choose **Delete**, and then confirm the deletion.

## Delete your Amazon S3 object and bucket
<a name="examples-gs-scala-cleanup-s3"></a>

1. Open the Amazon S3 console at [https://console.amazonaws.cn/s3/](https://console.amazonaws.cn/s3/).

1. Choose the **ka-app-code-*<username>*** bucket.

1. Choose **Delete** and then enter the bucket name to confirm deletion.

## Delete your IAM resources
<a name="examples-gs-scala-cleanup-iam"></a>

1. Open the IAM console at [https://console.amazonaws.cn/iam/](https://console.amazonaws.cn/iam/).

1. In the navigation bar, choose **Policies**.

1. In the filter control, enter **kinesis**.

1. Choose the **kinesis-analytics-service-MyApplication-us-west-2** policy.

1. Choose **Policy Actions** and then choose **Delete**.

1. In the navigation bar, choose **Roles**.

1. Choose the **kinesis-analytics-MyApplication-us-west-2** role.

1. Choose **Delete role** and then confirm the deletion.

## Delete your CloudWatch resources
<a name="examples-gs-scala-cleanup-cw"></a>

1. Open the CloudWatch console at [https://console.amazonaws.cn/cloudwatch/](https://console.amazonaws.cn/cloudwatch/).

1. In the navigation bar, choose **Logs**.

1. Choose the **/aws/kinesis-analytics/MyApplication** log group.

1. Choose **Delete Log Group** and then confirm the deletion.