
Lambda event filtering

You can use event filtering to control which records from a stream or queue Lambda sends to your function. For example, you can add a filter so that your function only processes Amazon SQS messages containing certain data parameters. Event filtering works with event source mappings. You can add filters to event source mappings for the following Amazon services:

  • Amazon DynamoDB

  • Amazon Kinesis Data Streams

  • Amazon MQ

  • Amazon Managed Streaming for Apache Kafka (Amazon MSK)

  • Self-managed Apache Kafka

  • Amazon Simple Queue Service (Amazon SQS)

Lambda doesn't support event filtering for Amazon DocumentDB.

By default, you can define up to five different filters for a single event source mapping. Your filters are logically ORed together. If a record from your event source satisfies one or more of your filters, Lambda includes the record in the next event it sends to your function. If none of your filters are satisfied, Lambda discards the record.

Note

If you need to define more than five filters for an event source, you can request a quota increase of up to 10 filters for each event source. If you attempt to add more filters than your current quota permits, Lambda returns an error when you try to create the event source mapping.

Event filtering basics

A filter criteria (FilterCriteria) object is a structure that consists of a list of filters (Filters). Each filter is a structure that defines an event filtering pattern (Pattern). A pattern is a string representation of a JSON filter rule. The structure of a FilterCriteria object is as follows.

{ "Filters": [ { "Pattern": "{ \"Metadata1\": [ rule1 ], \"data\": { \"Data1\": [ rule2 ] }}" } ] }

For added clarity, here is the value of the filter's Pattern expanded in plain JSON.

{ "Metadata1": [ rule1 ], "data": { "Data1": [ rule2 ] } }

Your filter pattern can include metadata properties, data properties, or both. The available metadata parameters and the format of the data parameters vary according to the Amazon Web Service which is acting as the event source. For example, suppose your event source mapping receives the following record from an Amazon SQS queue:

{ "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d", "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a...", "body": "{\n "City": "Seattle",\n "State": "WA",\n "Temperature": "46"\n}", "attributes": { "ApproximateReceiveCount": "1", "SentTimestamp": "1545082649183", "SenderId": "AIDAIENQZJOLO23YVJ4VO", "ApproximateFirstReceiveTimestamp": "1545082649185" }, "messageAttributes": {}, "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3", "eventSource": "aws:sqs", "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:my-queue", "awsRegion": "us-east-2" }
  • Metadata properties are the fields containing information about the event that created the record. In the example Amazon SQS record, the metadata properties include fields such as messageId, eventSourceARN, and awsRegion.

  • Data properties are the fields of the record containing the data from your stream or queue. In the Amazon SQS event example, the key for the data field is body, and the data properties are the fields City, State, and Temperature.

Different types of event sources use different key values for their data fields. To filter on data properties, make sure that you use the correct key in your filter’s pattern. For a list of data filtering keys, and to see examples of filter patterns for each supported Amazon Web Service, refer to Using filters with different Amazon Web Services.
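A single filter pattern can also combine metadata properties and data properties. For example, the following sketch, based on the example Amazon SQS record above, matches only messages that arrive from that specific queue and whose body has City set to "Seattle". The ARN shown is the example value from the record, not a value to copy.

{ "eventSourceARN": [ "arn:aws:sqs:us-east-2:123456789012:my-queue" ], "body": { "City": [ "Seattle" ] } }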

Event filtering can handle multi-level JSON filtering. For example, consider the following fragment of a record from a DynamoDB stream:

"dynamodb": { "Keys": { "ID": { "S": "ABCD" } "Number": { "N": "1234" }, ... }

Suppose you want to process only those records where the value of the sort key Number is 4567. In this case, your FilterCriteria object would look like this:

{ "Filters": [ { "Pattern": "{ \"dynamodb\": { \"Keys\": { \"Number\": { \"N\": [ "4567" ] } } } }" } ] }

For added clarity, here is the value of the filter's Pattern expanded in plain JSON.

{ "dynamodb": { "Keys": { "Number": { "N": [ "4567" ] } } } }

Handling records that don't meet filter criteria

How Lambda handles records that don’t meet your filter criteria depends on the event source.

  • For Amazon SQS, if a message doesn't satisfy your filter criteria, Lambda automatically removes the message from the queue. You don't have to manually delete these messages in Amazon SQS.

  • For Kinesis and DynamoDB, once Lambda evaluates a record against your filter criteria, the stream iterator advances past that record. If the record doesn't satisfy your filter criteria, you don't have to manually delete the record from your event source. After the retention period, Kinesis and DynamoDB automatically delete these old records. If you want records to be deleted sooner, see Changing the Data Retention Period.

  • For Amazon MSK, self-managed Apache Kafka, and Amazon MQ messages, Lambda drops messages that don't match all fields included in the filter. For self-managed Apache Kafka, Lambda commits offsets for matched and unmatched messages after successfully invoking the function. For Amazon MQ, Lambda acknowledges matched messages after successfully invoking the function and acknowledges unmatched messages when filtering them.

Filter rule syntax

For filter rules, Lambda supports the same syntax as Amazon EventBridge event patterns. For more information, see Amazon EventBridge event patterns in the Amazon EventBridge User Guide.

The following is a summary of all the comparison operators available for Lambda event filtering.

Comparison operator | Example | Rule syntax
Null | UserID is null | "UserID": [ null ]
Empty | LastName is empty | "LastName": [""]
Equals | Name is "Alice" | "Name": [ "Alice" ]
Equals (ignore case) | Name is "Alice" | "Name": [ { "equals-ignore-case": "alice" } ]
And | Location is "New York" and Day is "Monday" | "Location": [ "New York" ], "Day": [ "Monday" ]
Or | PaymentType is "Credit" or "Debit" | "PaymentType": [ "Credit", "Debit" ]
Or (multiple fields) | Location is "New York", or Day is "Monday" | "$or": [ { "Location": [ "New York" ] }, { "Day": [ "Monday" ] } ]
Not | Weather is anything but "Raining" | "Weather": [ { "anything-but": [ "Raining" ] } ]
Numeric (equals) | Price is 100 | "Price": [ { "numeric": [ "=", 100 ] } ]
Numeric (range) | Price is more than 10, and less than or equal to 20 | "Price": [ { "numeric": [ ">", 10, "<=", 20 ] } ]
Exists | ProductName exists | "ProductName": [ { "exists": true } ]
Does not exist | ProductName does not exist | "ProductName": [ { "exists": false } ]
Begins with | Region is in the US | "Region": [ { "prefix": "us-" } ]
Ends with | FileName ends with a .png extension | "FileName": [ { "suffix": ".png" } ]

Note

As with EventBridge, Lambda uses exact character-by-character matching for strings, without case folding or any other string normalization. For numbers, Lambda also uses the string representation. For example, 300, 300.0, and 3.0e2 are not considered equal.
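Because of this string-based comparison, an exact-match rule such as "Price": [ 300 ] does not match a record where Price is written as 300.0. If you need to match a numeric value regardless of how it is written, a numeric comparator is the safer choice, as in the following sketch.

"Price": [ { "numeric": [ "=", 300 ] } ]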

Attaching filter criteria to an event source mapping (console)

Follow these steps to create a new event source mapping with filter criteria using the Lambda console.

To create a new event source mapping with filter criteria (console)
  1. Open the Functions page of the Lambda console.

  2. Choose the name of a function to create an event source mapping for.

  3. Under Function overview, choose Add trigger.

  4. For Trigger configuration, choose a trigger type that supports event filtering. For a list of supported services, refer to the list at the beginning of this page.

  5. Expand Additional settings.

  6. Under Filter criteria, choose Add, and then define and enter your filters. For example, you can enter the following.

    { "Metadata" : [ 1, 2 ] }

    This instructs Lambda to process only the records where the Metadata field is equal to 1 or 2. You can continue to choose Add to add more filters, up to the maximum allowed number.

  7. When you have finished adding your filters, choose Save.

When you enter filter criteria using the console, you enter only the filter pattern and don't need to provide the Pattern key or escape quotes. In step 6 of the preceding instructions, { "Metadata" : [ 1, 2 ] } corresponds to the following FilterCriteria.

{ "Filters": [ { "Pattern": "{ \"Metadata\" : [ 1, 2 ] }" } ] }

After creating your event source mapping in the console, you can see the formatted FilterCriteria in the trigger details. For more examples of creating event filters using the console, see Using filters with different Amazon Web Services.

Attaching filter criteria to an event source mapping (Amazon CLI)

Suppose you want an event source mapping to have the following FilterCriteria:

{ "Filters": [ { "Pattern": "{ \"Metadata\" : [ 1, 2 ] }" } ] }

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:sqs:us-east-2:123456789012:my-queue \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"Metadata\" : [ 1, 2 ]}"}]}'

This create-event-source-mapping command creates a new Amazon SQS event source mapping for function my-function with the specified FilterCriteria.

To add these filter criteria to an existing event source mapping, run the following command.

aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"Metadata\" : [ 1, 2 ]}"}]}'

Note that to update an event source mapping, you need its UUID. You can get the UUID from a list-event-source-mappings call. Lambda also returns the UUID in the create-event-source-mapping CLI response.

To remove filter criteria from an event source, you can run the following update-event-source-mapping command with an empty FilterCriteria object.

aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria "{}"

For more examples of creating event filters using the Amazon CLI, see Using filters with different Amazon Web Services.

Attaching filter criteria to an event source mapping (Amazon SAM)

Suppose you want to configure an event source in Amazon SAM to use the following filter criteria:

{ "Filters": [ { "Pattern": "{ \"Metadata\" : [ 1, 2 ] }" } ] }

To add these filter criteria to your event source mapping, insert the following snippet into the YAML template for your event source.

FilterCriteria:
  Filters:
    - Pattern: '{"Metadata": [1, 2]}'

For more information on creating and configuring an Amazon SAM template for an event source mapping, see the EventSource section of the Amazon SAM Developer Guide. For more examples of creating event filters using Amazon SAM templates, see Using filters with different Amazon Web Services.

Using filters with different Amazon Web Services

Different types of event sources use different key values for their data fields. To filter on data properties, make sure that you use the correct key in your filter’s pattern. The following table gives the filtering keys for each supported Amazon Web Service.

Amazon Web Service | Filtering key
DynamoDB | dynamodb
Kinesis | data
Amazon MQ | data
Amazon MSK | value
Self-managed Apache Kafka | value
Amazon SQS | body
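For example, the same logical filter on a hypothetical status field would use a different top-level key depending on the event source.

Amazon SQS: { "body": { "status": [ "active" ] } }
Kinesis and Amazon MQ: { "data": { "status": [ "active" ] } }
Amazon MSK and self-managed Apache Kafka: { "value": { "status": [ "active" ] } }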

The following sections give examples of filter patterns for different types of event source. They also provide definitions of supported incoming data formats and filter pattern body formats for each supported service.

Filtering with DynamoDB

Suppose you have a DynamoDB table with the primary key CustomerName and attributes AccountManager and PaymentTerms. The following shows an example record from your DynamoDB table’s stream.

{ "eventID": "1", "eventVersion": "1.0", "dynamodb": { "ApproximateCreationDateTime": "1678831218.0", "Keys": { "CustomerName": { "S": "AnyCompany Industries" }, "NewImage": { "AccountManager": { "S": "Pat Candella" }, "PaymentTerms": { "S": "60 days" }, "CustomerName": { "S": "AnyCompany Industries" } }, "SequenceNumber": "111", "SizeBytes": 26, "StreamViewType": "NEW_IMAGE" } } }

To filter based on the key and attribute values in your DynamoDB table, use the dynamodb key in the record. Suppose you want your function to process only those records where the primary key CustomerName is “AnyCompany Industries.” The FilterCriteria object would be as follows.

{ "Filters": [ { "Pattern": "{ \"dynamodb\" : { \"Keys\" : { \"CustomerName\" : { \"S\" : [ \"AnyCompany Industries\" ] } } } }" } ] }

For added clarity, here is the value of the filter's Pattern expanded in plain JSON.

{ "dynamodb": { "Keys": { "CustomerName": { "S": [ "AnyCompany Industries" ] } } } }

You can add your filter using the console, Amazon CLI or an Amazon SAM template.

Console

To add this filter using the console, follow the instructions in Attaching filter criteria to an event source mapping (console) and enter the following string for the Filter criteria.

{ "dynamodb" : { "Keys" : { "CustomerName" : { "S" : [ "AnyCompany Industries" ] } } } }
Amazon CLI

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:dynamodb:us-east-2:123456789012:table/my-table \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"dynamodb\" : { \"Keys\" : { \"CustomerName\" : { \"S\" : [ \"AnyCompany Industries\" ] } } } }"}]}'

To add these filter criteria to an existing event source mapping, run the following command.

aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"dynamodb\" : { \"Keys\" : { \"CustomerName\" : { \"S\" : [ \"AnyCompany Industries\" ] } } } }"}]}'
Amazon SAM

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

FilterCriteria:
  Filters:
    - Pattern: '{ "dynamodb" : { "Keys" : { "CustomerName" : { "S" : [ "AnyCompany Industries" ] } } } }'

With DynamoDB, you can also use the NewImage and OldImage keys to filter for attribute values. Suppose you want to filter records where the AccountManager attribute in the latest table image is “Pat Candella” or "Shirley Rodriguez." The FilterCriteria object would be as follows.

{ "Filters": [ { "Pattern": "{ \"dynamodb\" : { \"NewImage\" : { \"AccountManager\" : { \"S\" : [ \"Pat Candella\", \"Shirley Rodriguez\" ] } } } }" } ] }

For added clarity, here is the value of the filter's Pattern expanded in plain JSON.

{ "dynamodb": { "NewImage": { "AccountManager": { "S": [ "Pat Candella", "Shirley Rodriguez" ] } } } }

You can add your filter using the console, Amazon CLI or an Amazon SAM template.

Console

To add this filter using the console, follow the instructions in Attaching filter criteria to an event source mapping (console) and enter the following string for the Filter criteria.

{ "dynamodb" : { "NewImage" : { "AccountManager" : { "S" : [ "Pat Candella", "Shirley Rodriguez" ] } } } }
Amazon CLI

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:dynamodb:us-east-2:123456789012:table/my-table \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"dynamodb\" : { \"NewImage\" : { \"AccountManager\" : { \"S\" : [ \"Pat Candella\", \"Shirley Rodriguez\" ] } } } }"}]}'

To add these filter criteria to an existing event source mapping, run the following command.

aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"dynamodb\" : { \"NewImage\" : { \"AccountManager\" : { \"S\" : [ \"Pat Candella\", \"Shirley Rodriguez\" ] } } } }"}]}'
Amazon SAM

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

FilterCriteria:
  Filters:
    - Pattern: '{ "dynamodb" : { "NewImage" : { "AccountManager" : { "S" : [ "Pat Candella", "Shirley Rodriguez" ] } } } }'

You can also create filters using Boolean AND expressions. These expressions can include both your table's key and attribute parameters. Suppose you want to filter records where the NewImage value of AccountManager is "Pat Candella" and the OldImage value is "Terry Whitlock". The FilterCriteria object would be as follows.

{ "Filters": [ { "Pattern": "{ \"dynamodb\" : { \"NewImage\" : { \"AccountManager\" : { \"S\" : [ \"Pat Candella\" ] } } } , \"dynamodb\" : { \"OldImage\" : { \"AccountManager\" : { \"S\" : [ \"Terry Whitlock\" ] } } } }" } ] }

For added clarity, here is the value of the filter's Pattern expanded in plain JSON.

{ "dynamodb" : { "NewImage" : { "AccountManager" : { "S" : [ "Pat Candella" ] } } }, "dynamodb": { "OldImage": { "AccountManager": { "S": [ "Terry Whitlock" ] } } } }

You can add your filter using the console, Amazon CLI or an Amazon SAM template.

Console

To add this filter using the console, follow the instructions in Attaching filter criteria to an event source mapping (console) and enter the following string for the Filter criteria.

{ "dynamodb" : { "NewImage" : { "AccountManager" : { "S" : [ "Pat Candella" ] } } } , "dynamodb" : { "OldImage" : { "AccountManager" : { "S" : [ "Terry Whitlock" ] } } } }
Amazon CLI

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:dynamodb:us-east-2:123456789012:table/my-table \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"dynamodb\" : { \"NewImage\" : { \"AccountManager\" : { \"S\" : [ \"Pat Candella\" ] } }, \"OldImage\" : { \"AccountManager\" : { \"S\" : [ \"Terry Whitlock\" ] } } } }"}]}'

To add these filter criteria to an existing event source mapping, run the following command.

aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"dynamodb\" : { \"NewImage\" : { \"AccountManager\" : { \"S\" : [ \"Pat Candella\" ] } }, \"OldImage\" : { \"AccountManager\" : { \"S\" : [ \"Terry Whitlock\" ] } } } }"}]}'
Amazon SAM

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

FilterCriteria:
  Filters:
    - Pattern: '{ "dynamodb" : { "NewImage" : { "AccountManager" : { "S" : [ "Pat Candella" ] } }, "OldImage" : { "AccountManager" : { "S" : [ "Terry Whitlock" ] } } } }'
Note

DynamoDB event filtering doesn’t support the use of numeric operators (numeric equals and numeric range). Even if items in your table are stored as numbers, these parameters are converted to strings in the JSON record object.
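Because DynamoDB number attributes appear in the stream record as strings under the N type descriptor, you can still match them by exact value using string equality. For example, the following sketch, which assumes a hypothetical Price attribute, matches records where the new image's Price is exactly "100".

{ "dynamodb": { "NewImage": { "Price": { "N": [ "100" ] } } } }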

To properly filter events from DynamoDB sources, both the data field and your filter criteria for the data field (dynamodb) must be in valid JSON format. If either field isn't in a valid JSON format, Lambda drops the message or throws an exception. The following table summarizes the specific behavior:

Incoming data format | Filter pattern format for data properties | Resulting action
Valid JSON | Valid JSON | Lambda filters based on your filter criteria.
Valid JSON | No filter pattern for data properties | Lambda filters (on the other metadata properties only) based on your filter criteria.
Valid JSON | Non-JSON | Lambda throws an exception at the time of the event source mapping creation or update. The filter pattern for data properties must be in a valid JSON format.
Non-JSON | Valid JSON | Lambda drops the record.
Non-JSON | No filter pattern for data properties | Lambda filters (on the other metadata properties only) based on your filter criteria.
Non-JSON | Non-JSON | Lambda throws an exception at the time of the event source mapping creation or update. The filter pattern for data properties must be in a valid JSON format.

Filtering with Kinesis

Suppose a producer is putting JSON formatted data into your Kinesis data stream. An example record would look like the following, with the JSON data converted to a Base64 encoded string in the data field.

{ "kinesis": { "kinesisSchemaVersion": "1.0", "partitionKey": "1", "sequenceNumber": "49590338271490256608559692538361571095921575989136588898", "data": "eyJSZWNvcmROdW1iZXIiOiAiMDAwMSIsICJUaW1lU3RhbXAiOiAieXl5eS1tbS1kZFRoaDptbTpzcyIsICJSZXF1ZXN0Q29kZSI6ICJBQUFBIn0=", "approximateArrivalTimestamp": 1545084650.987 }, "eventSource": "aws:kinesis", "eventVersion": "1.0", "eventID": "shardId-000000000006:49590338271490256608559692538361571095921575989136588898", "eventName": "aws:kinesis:record", "invokeIdentityArn": "arn:aws:iam::123456789012:role/lambda-role", "awsRegion": "us-east-2", "eventSourceARN": "arn:aws:kinesis:us-east-2:123456789012:stream/lambda-stream" }

As long as the data the producer puts into the stream is valid JSON, you can use event filtering to filter records using the data key. Suppose a producer is putting records into your Kinesis stream in the following JSON format.

{ "record": 12345, "order": { "type": "buy", "stock": "ANYCO", "quantity": 1000 } }

To filter only those records where the order type is “buy,” the FilterCriteria object would be as follows.

{ "Filters": [ { "Pattern": "{ \"data\" : { \"order\" : { \"type\" : [ \"buy\" ] } } }" } ] }

For added clarity, here is the value of the filter's Pattern expanded in plain JSON.

{ "data": { "order": { "type": [ "buy" ] } } }

You can add your filter using the console, Amazon CLI or an Amazon SAM template.

Console

To add this filter using the console, follow the instructions in Attaching filter criteria to an event source mapping (console) and enter the following string for the Filter criteria.

{ "data" : { "order" : { "type" : [ "buy" ] } } }
Amazon CLI

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:kinesis:us-east-2:123456789012:stream/my-stream \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"data\" : { \"order\" : { \"type\" : [ \"buy\" ] } } }"}]}'

To add these filter criteria to an existing event source mapping, run the following command.

aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"data\" : { \"order\" : { \"type\" : [ \"buy\" ] } } }"}]}'
Amazon SAM

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

FilterCriteria:
  Filters:
    - Pattern: '{ "data" : { "order" : { "type" : [ "buy" ] } } }'

To properly filter events from Kinesis sources, both the data field and your filter criteria for the data field must be in valid JSON format. If either field isn't in a valid JSON format, Lambda drops the message or throws an exception. The following table summarizes the specific behavior:

Incoming data format | Filter pattern format for data properties | Resulting action
Valid JSON | Valid JSON | Lambda filters based on your filter criteria.
Valid JSON | No filter pattern for data properties | Lambda filters (on the other metadata properties only) based on your filter criteria.
Valid JSON | Non-JSON | Lambda throws an exception at the time of the event source mapping creation or update. The filter pattern for data properties must be in a valid JSON format.
Non-JSON | Valid JSON | Lambda drops the record.
Non-JSON | No filter pattern for data properties | Lambda filters (on the other metadata properties only) based on your filter criteria.
Non-JSON | Non-JSON | Lambda throws an exception at the time of the event source mapping creation or update. The filter pattern for data properties must be in a valid JSON format.

Filtering Kinesis aggregated records

With Kinesis, you can aggregate multiple records into a single Kinesis Data Streams record to increase your data throughput. Lambda can only apply filter criteria to aggregated records when you use Kinesis enhanced fan-out. Filtering aggregated records with standard Kinesis isn't supported. When using enhanced fan-out, you configure a Kinesis dedicated-throughput consumer to act as the trigger for your Lambda function. Lambda then filters the aggregated records and passes only those records that meet your filter criteria.

To learn more about Kinesis record aggregation, refer to the Aggregation section on the Kinesis Producer Library (KPL) Key Concepts page. To learn more about using Lambda with Kinesis enhanced fan-out, see Increasing real-time stream processing performance with Amazon Kinesis Data Streams enhanced fan-out and Amazon Lambda on the Amazon Compute Blog.

Filtering with Amazon MQ

Suppose your Amazon MQ message queue contains messages either in valid JSON format or as plain strings. An example record would look like the following, with the data converted to a Base64 encoded string in the data field.

ActiveMQ
{ "messageID": "ID:b-9bcfa592-423a-4942-879d-eb284b418fc8-1.mq.us-west-2.amazonaws.com-37557-1234520418293-4:1:1:1:1", "messageType": "jms/text-message", "deliveryMode": 1, "replyTo": null, "type": null, "expiration": "60000", "priority": 1, "correlationId": "myJMSCoID", "redelivered": false, "destination": { "physicalName": "testQueue" }, "data":"QUJDOkFBQUE=", "timestamp": 1598827811958, "brokerInTime": 1598827811958, "brokerOutTime": 1598827811959, "properties": { "index": "1", "doAlarm": "false", "myCustomProperty": "value" } }
RabbitMQ
{ "basicProperties": { "contentType": "text/plain", "contentEncoding": null, "headers": { "header1": { "bytes": [ 118, 97, 108, 117, 101, 49 ] }, "header2": { "bytes": [ 118, 97, 108, 117, 101, 50 ] }, "numberInHeader": 10 }, "deliveryMode": 1, "priority": 34, "correlationId": null, "replyTo": null, "expiration": "60000", "messageId": null, "timestamp": "Jan 1, 1970, 12:33:41 AM", "type": null, "userId": "AIDACKCEVSQ6C2EXAMPLE", "appId": null, "clusterId": null, "bodySize": 80 }, "redelivered": false, "data": "eyJ0aW1lb3V0IjowLCJkYXRhIjoiQ1pybWYwR3c4T3Y0YnFMUXhENEUifQ==" }

For both ActiveMQ and RabbitMQ brokers, you can use event filtering to filter records using the data key. Suppose your Amazon MQ queue contains messages in the following JSON format.

{ "timeout": 0, "IPAddress": "203.0.113.254" }

To filter only those records where the timeout field is greater than 0, the FilterCriteria object would be as follows.

{ "Filters": [ { "Pattern": "{ \"data\" : { \"timeout\" : [ { \"numeric\": [ \">\", 0] } } ] } }" } ] }

For added clarity, here is the value of the filter's Pattern expanded in plain JSON.

{ "data": { "timeout": [ { "numeric": [ ">", 0 ] } ] } }

You can add your filter using the console, Amazon CLI or an Amazon SAM template.

Console

To add this filter using the console, follow the instructions in Attaching filter criteria to an event source mapping (console) and enter the following string for the Filter criteria.

{ "data" : { "timeout" : [ { "numeric": [ ">", 0 ] } ] } }
Amazon CLI

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:mq:us-east-2:123456789012:broker:my-broker:b-8ac7cc01-5898-482d-be2f-a6b596050ea8 \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"data\" : { \"timeout\" : [ { \"numeric\": [ \">\", 0 ] } ] } }"}]}'

To add these filter criteria to an existing event source mapping, run the following command.

aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"data\" : { \"timeout\" : [ { \"numeric\": [ \">\", 0 ] } ] } }"}]}'
Amazon SAM

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

FilterCriteria:
  Filters:
    - Pattern: '{ "data" : { "timeout" : [ { "numeric": [ ">", 0 ] } ] } }'

With Amazon MQ, you can also filter records where the message is a plain string. Suppose you want to process only records where the message begins with "Result: ". The FilterCriteria object would look as follows.

{ "Filters": [ { "Pattern": "{ \"data\" : [ { \"prefix\": \"Result: \" } ] }" } ] }

For added clarity, here is the value of the filter's Pattern expanded in plain JSON.

{ "data": [ { "prefix": "Result: " } ] }

You can add your filter using the console, Amazon CLI or an Amazon SAM template.

Console

To add this filter using the console, follow the instructions in Attaching filter criteria to an event source mapping (console) and enter the following string for the Filter criteria.

{ "data" : [ { "prefix": "Result: " } ] }
Amazon CLI

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:mq:us-east-2:123456789012:broker:my-broker:b-8ac7cc01-5898-482d-be2f-a6b596050ea8 \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"data\" : [ { \"prefix\": \"Result: \" } ] }"}]}'

To add these filter criteria to an existing event source mapping, run the following command.

aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"data\" : [ { \"prefix\": \"Result: \" } ] }"}]}'
Amazon SAM

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

FilterCriteria:
  Filters:
    - Pattern: '{ "data" : [ { "prefix": "Result: " } ] }'

Amazon MQ messages must be UTF-8 encoded strings, either plain strings or in JSON format. That's because Lambda decodes Amazon MQ byte arrays into UTF-8 before applying filter criteria. If your messages use another encoding, such as UTF-16 or ASCII, or if the message format doesn't match the FilterCriteria format, Lambda processes metadata filters only. The following table summarizes the specific behavior:

Incoming message format | Filter pattern format for message properties | Resulting action
Plain string | Plain string | Lambda filters based on your filter criteria.
Plain string | No filter pattern for data properties | Lambda filters (on the other metadata properties only) based on your filter criteria.
Plain string | Valid JSON | Lambda filters (on the other metadata properties only) based on your filter criteria.
Valid JSON | Plain string | Lambda filters (on the other metadata properties only) based on your filter criteria.
Valid JSON | No filter pattern for data properties | Lambda filters (on the other metadata properties only) based on your filter criteria.
Valid JSON | Valid JSON | Lambda filters based on your filter criteria.
Non-UTF-8 encoded string | JSON, plain string, or no pattern | Lambda filters (on the other metadata properties only) based on your filter criteria.

Filtering with Amazon MSK and self-managed Apache Kafka

Suppose a producer is writing messages to a topic in your Amazon MSK or self-managed Apache Kafka cluster, either in valid JSON format or as plain strings. An example record would look like the following, with the message converted to a Base64 encoded string in the value field.

{ "mytopic-0":[ { "topic":"mytopic", "partition":0, "offset":15, "timestamp":1545084650987, "timestampType":"CREATE_TIME", "value":"SGVsbG8sIHRoaXMgaXMgYSB0ZXN0Lg==", "headers":[] } ] }

Suppose your Apache Kafka producer is writing messages to your topic in the following JSON format.

{ "device_ID": "AB1234", "session":{ "start_time": "yyyy-mm-ddThh:mm:ss", "duration": 162 } }

You can use the value key to filter records. Suppose you want to filter only those records where device_ID begins with the letters AB. The FilterCriteria object would be as follows.

{ "Filters": [ { "Pattern": "{ \"value\" : { \"device_ID\" : [ { \"prefix\": \"AB\" } ] } }" } ] }

For added clarity, here is the value of the filter's Pattern expanded in plain JSON.

{ "value": { "device_ID": [ { "prefix": "AB" } ] } }

You can add your filter using the console, Amazon CLI or an Amazon SAM template.

Console

To add this filter using the console, follow the instructions in Attaching filter criteria to an event source mapping (console) and enter the following string for the Filter criteria.

{ "value" : { "device_ID" : [ { "prefix": "AB" } ] } }
Amazon CLI

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:kafka:us-east-2:123456789012:cluster/my-cluster/b-8ac7cc01-5898-482d-be2f-a6b596050ea8 \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"value\" : { \"device_ID\" : [ { \"prefix\": \"AB\" } ] } }"}]}'

To add these filter criteria to an existing event source mapping, run the following command.

aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"value\" : { \"device_ID\" : [ { \"prefix\": \"AB\" } ] } }"}]}'
Amazon SAM

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

FilterCriteria:
  Filters:
    - Pattern: '{ "value" : { "device_ID" : [ { "prefix": "AB" } ] } }'

With Amazon MSK and self-managed Apache Kafka, you can also filter records where the message is a plain string. Suppose you want to ignore those messages where the string is "error". The FilterCriteria object would look as follows.

{ "Filters": [ { "Pattern": "{ \"value\" : [ { \"anything-but\": [ \"error\" ] } ] }" } ] }

For added clarity, here is the value of the filter's Pattern expanded in plain JSON.

{ "value": [ { "anything-but": [ "error" ] } ] }

You can add your filter using the console, Amazon CLI or an Amazon SAM template.

Console

To add this filter using the console, follow the instructions in Attaching filter criteria to an event source mapping (console) and enter the following string for the Filter criteria.

{ "value" : [ { "anything-but": [ "error" ] } ] }
Amazon CLI

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:kafka:us-east-2:123456789012:cluster/my-cluster/b-8ac7cc01-5898-482d-be2f-a6b596050ea8 \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"value\" : [ { \"anything-but\": [ \"error\" ] } ] }"}]}'

To add these filter criteria to an existing event source mapping, run the following command.

aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"value\" : [ { \"anything-but\": [ \"error\" ] } ] }"}]}'
Amazon SAM

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

FilterCriteria:
  Filters:
    - Pattern: '{ "value" : [ { "anything-but": [ "error" ] } ] }'

Amazon MSK and self-managed Apache Kafka messages must be UTF-8 encoded strings, either plain strings or in JSON format. That's because Lambda decodes Amazon MSK byte arrays into UTF-8 before applying filter criteria. If your messages use another encoding, such as UTF-16 or ASCII, or if the message format doesn't match the FilterCriteria format, Lambda processes metadata filters only. The following table summarizes the specific behavior:

Incoming message format | Filter pattern format for message properties | Resulting action
Plain string | Plain string | Lambda filters based on your filter criteria.
Plain string | No filter pattern for data properties | Lambda filters (on the other metadata properties only) based on your filter criteria.
Plain string | Valid JSON | Lambda filters (on the other metadata properties only) based on your filter criteria.
Valid JSON | Plain string | Lambda filters (on the other metadata properties only) based on your filter criteria.
Valid JSON | No filter pattern for data properties | Lambda filters (on the other metadata properties only) based on your filter criteria.
Valid JSON | Valid JSON | Lambda filters based on your filter criteria.
Non-UTF-8 encoded string | JSON, plain string, or no pattern | Lambda filters (on the other metadata properties only) based on your filter criteria.

Filtering with Amazon SQS

Suppose your Amazon SQS queue contains messages in the following JSON format.

{ "RecordNumber": 0000, "TimeStamp": "yyyy-mm-ddThh:mm:ss", "RequestCode": "AAAA" }

An example record for this queue would look as follows.

{ "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d", "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a...", "body": "{\n "RecordNumber": 0000,\n "TimeStamp": "yyyy-mm-ddThh:mm:ss",\n "RequestCode": "AAAA"\n}", "attributes": { "ApproximateReceiveCount": "1", "SentTimestamp": "1545082649183", "SenderId": "AIDAIENQZJOLO23YVJ4VO", "ApproximateFirstReceiveTimestamp": "1545082649185" }, "messageAttributes": {}, "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3", "eventSource": "aws:sqs", "eventSourceARN": "arn:aws:sqs:us-west-2:123456789012:my-queue", "awsRegion": "us-west-2" }

To filter based on the contents of your Amazon SQS messages, use the body key in the Amazon SQS message record. Suppose you want to process only those records where the RequestCode in your Amazon SQS message is “BBBB.” The FilterCriteria object would be as follows.

{ "Filters": [ { "Pattern": "{ \"body\" : { \"RequestCode\" : [ \"BBBB\" ] } }" } ] }

For added clarity, here is the value of the filter's Pattern expanded in plain JSON.

{ "body": { "RequestCode": [ "BBBB" ] } }

You can add your filter using the console, Amazon CLI or an Amazon SAM template.

Console

To add this filter using the console, follow the instructions in Attaching filter criteria to an event source mapping (console) and enter the following string for the Filter criteria.

{ "body" : { "RequestCode" : [ "BBBB" ] } }
Amazon CLI

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:sqs:us-east-2:123456789012:my-queue \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"body\" : { \"RequestCode\" : [ \"BBBB\" ] } }"}]}'

To add these filter criteria to an existing event source mapping, run the following command.

aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"body\" : { \"RequestCode\" : [ \"BBBB\" ] } }"}]}'
Amazon SAM

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

FilterCriteria:
  Filters:
    - Pattern: '{ "body" : { "RequestCode" : [ "BBBB" ] } }'

Suppose you want your function to process only those records where RecordNumber is greater than 9999. The FilterCriteria object would be as follows.

{ "Filters": [ { "Pattern": "{ \"body\" : { \"RecordNumber\" : [ { \"numeric\": [ \">\", 9999 ] } ] } }" } ] }

For added clarity, here is the value of the filter's Pattern expanded in plain JSON.

{ "body": { "RecordNumber": [ { "numeric": [ ">", 9999 ] } ] } }

You can add your filter using the console, Amazon CLI or an Amazon SAM template.

Console

To add this filter using the console, follow the instructions in Attaching filter criteria to an event source mapping (console) and enter the following string for the Filter criteria.

{ "body" : { "RecordNumber" : [ { "numeric": [ ">", 9999 ] } ] } }
Amazon CLI

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:sqs:us-east-2:123456789012:my-queue \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"body\" : { \"RecordNumber\" : [ { \"numeric\": [ \">\", 9999 ] } ] } }"}]}'

To add these filter criteria to an existing event source mapping, run the following command.

aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"body\" : { \"RecordNumber\" : [ { \"numeric\": [ \">\", 9999 ] } ] } }"}]}'
Amazon SAM

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

FilterCriteria:
  Filters:
    - Pattern: '{ "body" : { "RecordNumber" : [ { "numeric": [ ">", 9999 ] } ] } }'

For Amazon SQS, the message body can be any string. However, this can be problematic if your FilterCriteria expects body to be in a valid JSON format. The reverse scenario is also true: if the incoming message body is in JSON format but your filter criteria expects body to be a plain string, this can lead to unintended behavior.

To avoid this issue, ensure that the format of body in your FilterCriteria matches the expected format of body in messages that you receive from your queue. Before filtering your messages, Lambda automatically evaluates the format of the incoming message body and of your filter pattern for body. If there is a mismatch, Lambda drops the message. The following table summarizes this evaluation:

Incoming message body format | Filter pattern body format | Resulting action
Plain string | Plain string | Lambda filters based on your filter criteria.
Plain string | No filter pattern for data properties | Lambda filters (on the other metadata properties only) based on your filter criteria.
Plain string | Valid JSON | Lambda drops the message.
Valid JSON | Plain string | Lambda drops the message.
Valid JSON | No filter pattern for data properties | Lambda filters (on the other metadata properties only) based on your filter criteria.
Valid JSON | Valid JSON | Lambda filters based on your filter criteria.
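If the messages in your queue have plain-string bodies rather than JSON, you can still filter on the body by treating it as a string. For example, the following FilterCriteria, a sketch that assumes hypothetical message content, matches only messages whose body begins with "ERROR".

{ "Filters": [ { "Pattern": "{ \"body\" : [ { \"prefix\": \"ERROR\" } ] }" } ] }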