Services or capabilities described in Amazon Web Services documentation might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China (PDF).

Amazon Data Firehose was previously known as Amazon Kinesis Data Firehose

Sending data to a Firehose stream

This section describes how you can use different data sources to send data to your Firehose stream. If you are new to Amazon Data Firehose, take some time to become familiar with the concepts and terminology presented in What is Amazon Data Firehose?.

Note

Some Amazon services can only send messages and events to a Firehose stream that is in the same Region. If your Firehose stream doesn't appear as an option when you're configuring a target for Amazon CloudWatch Logs, CloudWatch Events, or Amazon IoT, verify that your Firehose stream is in the same Region as your other services.

You can send data to your Firehose stream from the following data sources.

Kinesis Data Streams

The following steps help you configure Amazon Kinesis Data Streams to send information to a Firehose stream.

Important

If you use the Kinesis Producer Library (KPL) to write data to a Kinesis data stream, you can use aggregation to combine the records that you write to that Kinesis data stream. If you then use that data stream as a source for your Firehose stream, Amazon Data Firehose de-aggregates the records before it delivers them to the destination. If you configure your Firehose stream to transform the data, Amazon Data Firehose de-aggregates the records before it delivers them to Amazon Lambda. For more information, see Developing Amazon Kinesis Data Streams Producers Using the Kinesis Producer Library and Aggregation.
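As an illustration of the producer side, the following is a minimal KPL sketch with aggregation explicitly enabled (it is enabled by default). The stream name, Region, partition key, and record contents are placeholders; this is a sketch, not the documented KPL sample.

import java.nio.ByteBuffer;

import com.amazonaws.services.kinesis.producer.KinesisProducer;
import com.amazonaws.services.kinesis.producer.KinesisProducerConfiguration;

public class KplToKinesisSource {
    public static void main(String[] args) {
        // Aggregation is on by default in the KPL; it is set explicitly here for clarity.
        KinesisProducerConfiguration config = new KinesisProducerConfiguration()
                .setAggregationEnabled(true)
                .setRegion("us-west-2"); // placeholder Region

        KinesisProducer producer = new KinesisProducer(config);

        // Records written this way are aggregated into larger Kinesis records.
        // Firehose de-aggregates them before delivering to the destination.
        producer.addUserRecord("your-kinesis-stream", "partition-key-1",
                ByteBuffer.wrap("sample log line\n".getBytes()));

        producer.flushSync();
        producer.destroy();
    }
}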

  1. Sign in to the Amazon Web Services Management Console and open the Amazon Data Firehose console at https://console.amazonaws.cn/firehose/.

  2. Choose Create Firehose stream.

  3. On the Name and source page, provide values for the following fields.

    Firehose stream name

    The name of your Firehose stream.

    Source

    Choose Kinesis stream to configure a Firehose stream that uses a Kinesis data stream as a data source. You can then use Amazon Data Firehose to read data easily from an existing data stream and load it into destinations.

    To use a Kinesis data stream as a source, choose an existing stream in the Kinesis stream list, or choose Create new to create a new Kinesis data stream. After you create a new stream, choose Refresh to update the Kinesis stream list. If you have a large number of streams, filter the list using Filter by name.

    Note

    When you configure a Kinesis data stream as the source of a Firehose stream, the Amazon Data Firehose PutRecord and PutRecordBatch operations are disabled. To add data to your Firehose stream in this case, use the Kinesis Data Streams PutRecord and PutRecords operations (see the sketch following these steps).

    Amazon Data Firehose starts reading data from the LATEST position of your Kinesis stream. For more information about Kinesis Data Streams positions, see GetShardIterator.

    Amazon Data Firehose calls the Kinesis Data Streams GetRecords operation once per second for each shard. However, when full backup is enabled, Firehose calls the Kinesis Data Streams GetRecords operation twice per second for each shard: once for the primary delivery destination and once for the full backup.

    More than one Firehose stream can read from the same Kinesis stream. Other Kinesis applications (consumers) can also read from the same stream. Each call from any Firehose stream or other consumer application counts against the overall throttling limit for the shard. To avoid getting throttled, plan your applications carefully. For more information about Kinesis Data Streams limits, see Amazon Kinesis Streams Limits.

  4. Choose Next to advance to the Configure record transformation and format conversion page.
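As noted in the preceding steps, when a Kinesis data stream is configured as the source, you add data through the Kinesis Data Streams API rather than the Firehose API. The following is a minimal sketch in the SDK for Java v1 style used elsewhere in this topic; the Region, stream name, partition key, and record contents are placeholders.

import java.nio.ByteBuffer;

import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.PutRecordRequest;

public class WriteToKinesisSource {
    public static void main(String[] args) {
        AmazonKinesis kinesisClient = AmazonKinesisClientBuilder.standard()
                .withRegion("us-west-2") // placeholder Region
                .build();

        PutRecordRequest putRecordRequest = new PutRecordRequest()
                .withStreamName("your-kinesis-stream") // the stream configured as the Firehose source
                .withPartitionKey("partition-key-1")
                .withData(ByteBuffer.wrap("sample record\n".getBytes()));

        // The record lands in the Kinesis data stream; the Firehose stream reads it from there.
        kinesisClient.putRecord(putRecordRequest);
    }
}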

Amazon MSK

You can configure Amazon MSK to send information to a Firehose stream.

  1. Sign in to the Amazon Web Services Management Console and open the Amazon Data Firehose console at https://console.amazonaws.cn/firehose/.

  2. Choose Create Firehose stream.

    In the Choose source and destination section of the page, provide values for the following fields:

    Source

    Choose Amazon MSK to configure a Firehose stream that uses Amazon MSK as a data source. You can choose between MSK provisioned and MSK Serverless clusters. You can then use Amazon Data Firehose to read data easily from a specific Amazon MSK cluster and topic and load it into the specified S3 destination.

    Destination

    Choose Amazon S3 as the destination for your Firehose stream.

    In the Source settings section of the page, provide values for the following fields:

    Amazon MSK cluster connectivity

    Choose either the Private bootstrap brokers (recommended) or Public bootstrap brokers option based on your cluster configuration. Bootstrap brokers are what an Apache Kafka client uses as a starting point to connect to the cluster. Public bootstrap brokers are intended for public access from outside of Amazon, while private bootstrap brokers are intended for access from within Amazon. For more information about Amazon MSK, see Amazon Managed Streaming for Apache Kafka.

    To connect to a provisioned or serverless Amazon MSK cluster through private bootstrap brokers, the cluster must meet all of the following requirements.

    • The cluster must be active.

    • The cluster must have IAM as one of its access control methods.

    • Multi-VPC private connectivity must be enabled for the IAM access control method.

    • You must attach a resource-based policy to the cluster that grants the Amazon Data Firehose service principal permission to invoke the Amazon MSK CreateVpcConnection API.

    To connect to a provisioned Amazon MSK cluster through public bootstrap brokers, the cluster must meet all of the following requirements.

    • The cluster must be active.

    • The cluster must have IAM as one of its access control methods.

    • The cluster must be publicly accessible.

    Amazon MSK cluster

    For the same-account scenario, specify the ARN of the Amazon MSK cluster from which your Firehose stream reads data.

    For a cross-account scenario, see Cross-account delivery from Amazon MSK.

    Topic

    Specify the Apache Kafka topic from which you want your Firehose stream to ingest data. Once the Firehose stream is created, you cannot update this topic.

    In the Firehose stream name section of the page, provide values for the following fields:

    Firehose stream name

    Specify the name for your Firehose stream.

  3. Next, you can complete the optional step of configuring record transformation and record format conversion. For more information, see Configure record transformation and format conversion.

Kinesis Agent

Kinesis Agent is a standalone Java software application that serves as a reference implementation to show how you can collect and send data to Firehose. The agent continuously monitors a set of files and sends new data to your Firehose stream. The agent shows how you can handle file rotation, checkpointing, and retry upon failure. It shows how you can deliver your data in a reliable, timely, and simple manner. It also shows how you can emit CloudWatch metrics to better monitor and troubleshoot the streaming process. To learn more, see awslabs/amazon-kinesis-agent.

By default, records are parsed from each file based on the newline ('\n') character. However, the agent can also be configured to parse multi-line records (see Agent configuration settings).

You can install the agent on Linux-based server environments such as web servers, log servers, and database servers. After installing the agent, configure it by specifying the files to monitor and the Firehose stream for the data. After the agent is configured, it durably collects data from the files and reliably sends it to the Firehose stream.


    Credentials

    Manage your Amazon credentials using one of the following methods:

    • Create a custom credentials provider. For details, see Create custom credential providers.

    • Specify an IAM role when you launch your EC2 instance.

    • Specify Amazon credentials when you configure the agent (see the entries for awsAccessKeyId and awsSecretAccessKey in the configuration table under Agent configuration settings).

    • Edit /etc/sysconfig/aws-kinesis-agent to specify your Amazon Region and Amazon access keys.

    • If your EC2 instance is in a different Amazon account, create an IAM role to provide access to the Amazon Data Firehose service. Specify that role when you configure the agent (see assumeRoleARN and assumeRoleExternalId). Use one of the previous methods to specify the Amazon credentials of a user in the other account who has permission to assume this role.

    Create custom credential providers

    You can create a custom credentials provider and give its class name and jar path to the Kinesis agent in the following configuration settings: userDefinedCredentialsProvider.classname and userDefinedCredentialsProvider.location. For the descriptions of these two configuration settings, see Agent configuration settings.

    To create a custom credentials provider, define a class that implements the AWSCredentialsProvider interface, like the one in the following example.

    import com.amazonaws.auth.AWSCredentials;
    import com.amazonaws.auth.AWSCredentialsProvider;
    import com.amazonaws.auth.BasicAWSCredentials;

    public class YourClassName implements AWSCredentialsProvider {

        public YourClassName() {
        }

        public AWSCredentials getCredentials() {
            return new BasicAWSCredentials("key1", "key2");
        }

        public void refresh() {
        }
    }

    Your class must have a constructor that takes no arguments.

    Amazon invokes the refresh method periodically to get updated credentials. If you want your credentials provider to provide different credentials throughout its lifetime, include code to refresh the credentials in this method. Alternatively, you can leave this method empty if you want a credentials provider that vends static (non-changing) credentials.
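    For reference, a minimal agent.json fragment that wires in such a provider might look like the following. The class name, JAR path, and flow values are placeholders.

    {
      "userDefinedCredentialsProvider.classname": "com.example.YourClassName",
      "userDefinedCredentialsProvider.location": "/usr/share/aws-kinesis-agent/lib/your-credentials-provider.jar",
      "flows": [
        {
          "filePattern": "/tmp/app.log*",
          "deliveryStream": "yourdeliverystream"
        }
      ]
    }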

    Download and install the agent

    First, connect to your instance. For more information, see Connect to Your Instance in the Amazon EC2 User Guide. If you have trouble connecting, see Troubleshooting Connecting to Your Instance in the Amazon EC2 User Guide.

    Next, install the agent using one of the following methods.

    • To set up the agent from the Amazon Linux repositories

      This method works only for Amazon Linux instances. Use the following command:

      sudo yum install -y aws-kinesis-agent

      Agent version 2.0.0 or later is installed on instances running Amazon Linux 2 (AL2). This agent version requires Java 1.8 or later. If the required Java version is not present, the agent installation process installs it. For more information about Amazon Linux 2, see https://aws.amazon.com/amazon-linux-2/.

    • To set up the agent from the Amazon S3 repository

      This method works for Red Hat Enterprise Linux as well as Amazon Linux 2 instances because it installs the agent from a publicly available repository. Use the following command to download and install the latest agent version 2.x.x:

      sudo yum install -y https://s3.amazonaws.com/streaming-data-agent/aws-kinesis-agent-latest.amzn2.noarch.rpm

      To install a specific version of the agent, specify the version number in the command. For example, the following command installs agent v2.0.1.

      sudo yum install -y https://streaming-data-agent.s3.amazonaws.com/aws-kinesis-agent-2.0.1-1.amzn1.noarch.rpm

      If you have Java 1.7 and you don’t want to upgrade it, you can download agent version 1.x.x, which is compatible with Java 1.7. For example, to download agent v1.1.6, you can use the following command:

      sudo yum install -y https://s3.amazonaws.com/streaming-data-agent/aws-kinesis-agent-1.1.6-1.amzn1.noarch.rpm

      The latest agent v1.x.x can be downloaded using the following command:

      sudo yum install -y https://s3.amazonaws.com/streaming-data-agent/aws-kinesis-agent-latest.amzn1.noarch.rpm
    • To set up the agent from the GitHub repo

      1. First, make sure that you have the required Java version installed for your agent version.

      2. Download the agent from the awslabs/amazon-kinesis-agent GitHub repo.

      3. Install the agent by navigating to the download directory and running the following command:

        sudo ./setup --install
    • To set up the agent in a Docker container

      You can also run Kinesis Agent in a container based on the amazonlinux container image. Use the following Dockerfile, and then run docker build.

      FROM amazonlinux
      RUN yum install -y aws-kinesis-agent which findutils
      COPY agent.json /etc/aws-kinesis/agent.json
      CMD ["start-aws-kinesis-agent"]
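      After the image builds, you can run the agent in a container. The following commands are a sketch: the image tag is arbitrary, the volume mount assumes the agent.json that you copy into the image monitors files under /var/log/myapp, and you still need to provide Amazon credentials to the container (for example, through an instance role or environment variables).

      docker build -t aws-kinesis-agent .
      docker run -v /var/log/myapp:/var/log/myapp aws-kinesis-agent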
    To configure and start the agent
    1. Open and edit the configuration file (as superuser if using default file access permissions): /etc/aws-kinesis/agent.json

      In this configuration file, specify the files ( "filePattern" ) from which the agent collects data, and the name of the Firehose stream ( "deliveryStream" ) to which the agent sends data. The file name is a pattern, and the agent recognizes file rotations. You can rotate files or create new files no more than once per second. The agent uses the file creation time stamp to determine which files to track and tail into your Firehose stream. Creating new files or rotating files more frequently than once per second does not allow the agent to differentiate properly between them.

      { "flows": [ { "filePattern": "/tmp/app.log*", "deliveryStream": "yourdeliverystream" } ] }

      The default Amazon Region is us-east-1. If you are using a different Region, add the firehose.endpoint setting to the configuration file, specifying the endpoint for your Region (see the example configuration after this procedure). For more information, see Agent configuration settings.

    2. Start the agent manually:

      sudo service aws-kinesis-agent start
    3. (Optional) Configure the agent to start on system startup:

      sudo chkconfig aws-kinesis-agent on

    The agent is now running as a system service in the background. It continuously monitors the specified files and sends data to the specified Firehose stream. Agent activity is logged in /var/log/aws-kinesis-agent/aws-kinesis-agent.log.
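    For example, a configuration that points the agent at a Region other than the default might look like the following. The endpoints shown are illustrative; substitute the endpoints for your Region.

    {
      "cloudwatch.endpoint": "monitoring.us-west-2.amazonaws.com",
      "firehose.endpoint": "firehose.us-west-2.amazonaws.com",
      "flows": [
        {
          "filePattern": "/tmp/app.log*",
          "deliveryStream": "yourdeliverystream"
        }
      ]
    }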

    Agent configuration settings

    The agent supports two mandatory configuration settings, filePattern and deliveryStream, plus optional configuration settings for additional features. You can specify both mandatory and optional configuration settings in /etc/aws-kinesis/agent.json.

    Whenever you change the configuration file, you must stop and start the agent, using the following commands:

    sudo service aws-kinesis-agent stop
    sudo service aws-kinesis-agent start

    Alternatively, you could use the following command:

    sudo service aws-kinesis-agent restart

    The following are the general configuration settings.

    assumeRoleARN

    The Amazon Resource Name (ARN) of the role to be assumed by the user. For more information, see Delegate Access Across Amazon Accounts Using IAM Roles in the IAM User Guide.

    assumeRoleExternalId

    An optional identifier that determines who can assume the role. For more information, see How to Use an External ID in the IAM User Guide.

    awsAccessKeyId

    Amazon access key ID that overrides the default credentials. This setting takes precedence over all other credential providers.

    awsSecretAccessKey

    Amazon secret key that overrides the default credentials. This setting takes precedence over all other credential providers.

    cloudwatch.emitMetrics

    Enables the agent to emit metrics to CloudWatch if set (true).

    Default: true

    cloudwatch.endpoint

    The regional endpoint for CloudWatch.

    Default: monitoring.us-east-1.amazonaws.com

    firehose.endpoint

    The regional endpoint for Amazon Data Firehose.

    Default: firehose.us-east-1.amazonaws.com

    sts.endpoint

    The regional endpoint for the Amazon Security Token Service.

    Default: https://sts.amazonaws.com

    userDefinedCredentialsProvider.classname

    If you define a custom credentials provider, provide its fully qualified class name using this setting. Don't include .class at the end of the class name.

    userDefinedCredentialsProvider.location

    If you define a custom credentials provider, use this setting to specify the absolute path of the JAR file that contains the custom credentials provider. The agent also looks for the JAR file in the following location: /usr/share/aws-kinesis-agent/lib/.
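    As an illustration, a cross-account setup such as the one described earlier might combine several of these general settings. The role ARN and external ID below are placeholders.

    {
      "assumeRoleARN": "arn:aws:iam::123456789012:role/FirehoseAgentRole",
      "assumeRoleExternalId": "your-external-id",
      "cloudwatch.emitMetrics": true,
      "flows": [
        {
          "filePattern": "/tmp/app.log*",
          "deliveryStream": "yourdeliverystream"
        }
      ]
    }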

    The following are the flow configuration settings.

    aggregatedRecordSizeBytes

    To make the agent aggregate records and then put them to the Firehose stream in one operation, specify this setting. Set it to the size that you want the aggregate record to have before the agent puts it to the Firehose stream.

    Default: 0 (no aggregation)

    dataProcessingOptions

    The list of processing options applied to each parsed record before it is sent to the Firehose stream. The processing options are performed in the specified order. For more information, see Use the Agent to pre-process data.

    deliveryStream

    [Required] The name of the Firehose stream.

    filePattern

    [Required] A glob for the files that need to be monitored by the agent. Any file that matches this pattern is picked up by the agent automatically and monitored. For all files matching this pattern, grant read permission to aws-kinesis-agent-user. For the directory containing the files, grant read and execute permissions to aws-kinesis-agent-user.

    Important

    The agent picks up any file that matches this pattern. To ensure that the agent doesn't pick up unintended records, choose this pattern carefully.

    initialPosition

    The initial position from which the agent starts parsing the file. Valid values are START_OF_FILE and END_OF_FILE.

    Default: END_OF_FILE

    maxBufferAgeMillis

    The maximum time, in milliseconds, for which the agent buffers data before sending it to the Firehose stream.

    Value range: 1,000–900,000 (1 second to 15 minutes)

    Default: 60,000 (1 minute)

    maxBufferSizeBytes

    The maximum size, in bytes, for which the agent buffers data before sending it to the Firehose stream.

    Value range: 1–4,194,304 (4 MB)

    Default: 4,194,304 (4 MB)

    maxBufferSizeRecords

    The maximum number of records for which the agent buffers data before sending it to the Firehose stream.

    Value range: 1–500

    Default: 500

    minTimeBetweenFilePollsMillis

    The time interval, in milliseconds, at which the agent polls and parses the monitored files for new data.

    Value range: 1 or more

    Default: 100

    multiLineStartPattern

    The pattern for identifying the start of a record. A record is made of a line that matches the pattern and any following lines that don't match the pattern. The valid values are regular expressions. By default, each new line in the log files is parsed as one record.

    skipHeaderLines

    The number of lines for the agent to skip parsing at the beginning of monitored files.

    Value range: 0 or more

    Default: 0 (zero)

    truncatedRecordTerminator

    The string that the agent uses to truncate a parsed record when the record size exceeds the Amazon Data Firehose record size limit (1,000 KB).

    Default: '\n' (newline)
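    Putting several of these flow settings together, a flow that aggregates records, skips one header line, and starts from the beginning of each file might look like the following. The values are illustrative.

    {
      "flows": [
        {
          "filePattern": "/tmp/app.log*",
          "deliveryStream": "yourdeliverystream",
          "initialPosition": "START_OF_FILE",
          "skipHeaderLines": 1,
          "aggregatedRecordSizeBytes": 102400,
          "maxBufferAgeMillis": 30000
        }
      ]
    }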

    Monitor multiple file directories and write to multiple streams

    By specifying multiple flow configuration settings, you can configure the agent to monitor multiple file directories and send data to multiple streams. In the following configuration example, the agent monitors two file directories and sends data to a Kinesis data stream and a Firehose stream, respectively. You can specify different endpoints for Kinesis Data Streams and Amazon Data Firehose so that your data stream and Firehose stream don’t need to be in the same Region.

    { "cloudwatch.emitMetrics": true, "kinesis.endpoint": "https://your/kinesis/endpoint", "firehose.endpoint": "https://your/firehose/endpoint", "flows": [ { "filePattern": "/tmp/app1.log*", "kinesisStream": "yourkinesisstream" }, { "filePattern": "/tmp/app2.log*", "deliveryStream": "yourfirehosedeliverystream" } ] }

    For more detailed information about using the agent with Amazon Kinesis Data Streams, see Writing to Amazon Kinesis Data Streams with Kinesis Agent.

    Use the Agent to pre-process data

    The agent can pre-process the records parsed from monitored files before sending them to your Firehose stream. You can enable this feature by adding the dataProcessingOptions configuration setting to your file flow. One or more processing options can be added, and they are performed in the specified order.

    The agent supports the following processing options. Because the agent is open source, you can further develop and extend its processing options. You can download the agent from the Kinesis Agent GitHub repository.

    Processing Options
    SINGLELINE

    Converts a multi-line record to a single-line record by removing newline characters, leading spaces, and trailing spaces.

    { "optionName": "SINGLELINE" }
    CSVTOJSON

    Converts a record from delimiter-separated format to JSON format.

    { "optionName": "CSVTOJSON", "customFieldNames": [ "field1", "field2", ... ], "delimiter": "yourdelimiter" }
    customFieldNames

    [Required] The field names used as keys in each JSON key value pair. For example, if you specify ["f1", "f2"], the record "v1, v2" is converted to {"f1":"v1","f2":"v2"}.

    delimiter

    The string used as the delimiter in the record. The default is a comma (,).

    LOGTOJSON

    Converts a record from a log format to JSON format. The supported log formats are Apache Common Log, Apache Combined Log, Apache Error Log, and RFC3164 Syslog.

    { "optionName": "LOGTOJSON", "logFormat": "logformat", "matchPattern": "yourregexpattern", "customFieldNames": [ "field1", "field2", ] }
    logFormat

    [Required] The log entry format. The following are possible values:

    • COMMONAPACHELOG — The Apache Common Log format. Each log entry has the following pattern by default: "%{host} %{ident} %{authuser} [%{datetime}] \"%{request}\" %{response} %{bytes}".

    • COMBINEDAPACHELOG — The Apache Combined Log format. Each log entry has the following pattern by default: "%{host} %{ident} %{authuser} [%{datetime}] \"%{request}\" %{response} %{bytes} %{referrer} %{agent}".

    • APACHEERRORLOG — The Apache Error Log format. Each log entry has the following pattern by default: "[%{timestamp}] [%{module}:%{severity}] [pid %{processid}:tid %{threadid}] [client: %{client}] %{message}".

    • SYSLOG — The RFC3164 Syslog format. Each log entry has the following pattern by default: "%{timestamp} %{hostname} %{program}[%{processid}]: %{message}".

    matchPattern

    Overrides the default pattern for the specified log format. Use this setting to extract values from log entries if they use a custom format. If you specify matchPattern, you must also specify customFieldNames.

    customFieldNames

    The custom field names used as keys in each JSON key value pair. You can use this setting to define field names for values extracted from matchPattern, or override the default field names of predefined log formats.

    Example: LOGTOJSON Configuration

    Here is one example of a LOGTOJSON configuration for an Apache Common Log entry converted to JSON format:

    { "optionName": "LOGTOJSON", "logFormat": "COMMONAPACHELOG" }

    Before conversion:

    64.242.88.10 - - [07/Mar/2004:16:10:02 -0800] "GET /mailman/listinfo/hsdivision HTTP/1.1" 200 6291

    After conversion:

    {"host":"64.242.88.10","ident":null,"authuser":null,"datetime":"07/Mar/2004:16:10:02 -0800","request":"GET /mailman/listinfo/hsdivision HTTP/1.1","response":"200","bytes":"6291"}
    Example: LOGTOJSON Configuration With Custom Fields

    Here is another example LOGTOJSON configuration:

    { "optionName": "LOGTOJSON", "logFormat": "COMMONAPACHELOG", "customFieldNames": ["f1", "f2", "f3", "f4", "f5", "f6", "f7"] }

    With this configuration setting, the same Apache Common Log entry from the previous example is converted to JSON format as follows:

    {"f1":"64.242.88.10","f2":null,"f3":null,"f4":"07/Mar/2004:16:10:02 -0800","f5":"GET /mailman/listinfo/hsdivision HTTP/1.1","f6":"200","f7":"6291"}
    Example: Convert Apache Common Log Entry

    The following flow configuration converts an Apache Common Log entry to a single-line record in JSON format:

    { "flows": [ { "filePattern": "/tmp/app.log*", "deliveryStream": "my-delivery-stream", "dataProcessingOptions": [ { "optionName": "LOGTOJSON", "logFormat": "COMMONAPACHELOG" } ] } ] }
    Example: Convert Multi-Line Records

    The following flow configuration parses multi-line records whose first line starts with "[SEQUENCE=". Each record is first converted to a single-line record. Then, values are extracted from the record based on a tab delimiter. Extracted values are mapped to specified customFieldNames values to form a single-line record in JSON format.

    { "flows": [ { "filePattern": "/tmp/app.log*", "deliveryStream": "my-delivery-stream", "multiLineStartPattern": "\\[SEQUENCE=", "dataProcessingOptions": [ { "optionName": "SINGLELINE" }, { "optionName": "CSVTOJSON", "customFieldNames": [ "field1", "field2", "field3" ], "delimiter": "\\t" } ] } ] }
    Example: LOGTOJSON Configuration with Match Pattern

    Here is one example of a LOGTOJSON configuration for an Apache Common Log entry converted to JSON format, with the last field (bytes) omitted:

    { "optionName": "LOGTOJSON", "logFormat": "COMMONAPACHELOG", "matchPattern": "^([\\d.]+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+\\-]\\d{4})\\] \"(.+?)\" (\\d{3})", "customFieldNames": ["host", "ident", "authuser", "datetime", "request", "response"] }

    Before conversion:

    123.45.67.89 - - [27/Oct/2000:09:27:09 -0400] "GET /java/javaResources.html HTTP/1.0" 200

    After conversion:

    {"host":"123.45.67.89","ident":null,"authuser":null,"datetime":"27/Oct/2000:09:27:09 -0400","request":"GET /java/javaResources.html HTTP/1.0","response":"200"}

    Agent CLI commands

    Automatically start the agent on system startup:

    sudo chkconfig aws-kinesis-agent on

    Check the status of the agent:

    sudo service aws-kinesis-agent status

    Stop the agent:

    sudo service aws-kinesis-agent stop

    Read the agent's log file from this location:

    /var/log/aws-kinesis-agent/aws-kinesis-agent.log

    Uninstall the agent:

    sudo yum remove aws-kinesis-agent

    FAQ

    Is there a Kinesis Agent for Windows?

    Kinesis Agent for Windows is different software from Kinesis Agent for Linux platforms.

    Why is Kinesis Agent slowing down and/or RecordSendErrors increasing?

    This is usually due to throttling from Kinesis. Check the WriteProvisionedThroughputExceeded metric for Kinesis Data Streams or the ThrottledRecords metric for Firehose streams. Any increase from 0 in these metrics indicates that the stream limits need to be increased. For more information, see Kinesis Data Stream limits and Firehose streams.

    Once you rule out throttling, see whether the Kinesis Agent is configured to tail a large number of small files. There is a delay when Kinesis Agent tails a new file, so Kinesis Agent should tail a small number of larger files. Try consolidating your log files into larger files.

    Why am I getting java.lang.OutOfMemoryError exceptions?

    Kinesis Agent does not have enough memory to handle its current workload. Try increasing JAVA_START_HEAP and JAVA_MAX_HEAP in /usr/bin/start-aws-kinesis-agent and restarting the agent.
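    For example, the relevant lines in /usr/bin/start-aws-kinesis-agent might be raised to values such as the following. These values are illustrative; match the format already used in the script and size the heap to your workload and instance.

    JAVA_START_HEAP="256m"
    JAVA_MAX_HEAP="1024m"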

    Why am I getting IllegalStateException : connection pool shut down exceptions?

    Kinesis Agent does not have enough connections to handle its current workload. Try increasing maxConnections and maxSendingThreads in your general agent configuration settings at /etc/aws-kinesis/agent.json. The default value for these fields is 12 times the number of available runtime processors. See AgentConfiguration.java for more about advanced agent configuration settings.
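    These are general (top-level) settings in agent.json, alongside your flows. For example, with illustrative values:

    {
      "maxConnections": 24,
      "maxSendingThreads": 24,
      "flows": [
        {
          "filePattern": "/tmp/app.log*",
          "deliveryStream": "yourdeliverystream"
        }
      ]
    }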

    How can I debug another issue with Kinesis Agent?

    DEBUG level logs can be enabled in /etc/aws-kinesis/log4j.xml.

    How should I configure Kinesis Agent?

    The smaller the maxBufferSizeBytes, the more frequently Kinesis Agent sends data. This can decrease the delivery time of records, but it also increases the number of requests per second to Kinesis.

    Why is Kinesis Agent sending duplicate records?

    This occurs due to a misconfiguration in file tailing. Make sure that each fileFlow’s filePattern matches only one file. This can also occur if logrotate is used in copytruncate mode. Try changing to the default or create mode to avoid duplication. For more information on handling duplicate records, see Handling Duplicate Records.

    Amazon SDK

    You can use the Amazon Data Firehose API to send data to a Firehose stream using the Amazon SDK for Java, .NET, Node.js, Python, or Ruby. If you are new to Amazon Data Firehose, take some time to become familiar with the concepts and terminology presented in What is Amazon Data Firehose?. For more information, see Start Developing with Amazon Web Services.

    These examples do not represent production-ready code, in that they do not check for all possible exceptions, or account for all possible security or performance considerations.

    The Amazon Data Firehose API offers two operations for sending data to your Firehose stream: PutRecord and PutRecordBatch. PutRecord() sends one data record within one call and PutRecordBatch() can send multiple data records within one call.

    Single write operations using PutRecord

    Putting data requires only the Firehose stream name and a byte buffer (<=1000 KB). Because Amazon Data Firehose batches multiple records before loading the file into Amazon S3, you may want to add a record separator. To put data one record at a time into a Firehose stream, use the following code:

    PutRecordRequest putRecordRequest = new PutRecordRequest();
    putRecordRequest.setDeliveryStreamName(deliveryStreamName);

    String data = line + "\n";
    Record record = new Record().withData(ByteBuffer.wrap(data.getBytes()));
    putRecordRequest.setRecord(record);

    // Put the record into the Firehose stream.
    firehoseClient.putRecord(putRecordRequest);
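    The fragment above assumes an existing firehoseClient. With the SDK for Java v1, you might construct one as follows; the Region is a placeholder.

    AmazonKinesisFirehose firehoseClient = AmazonKinesisFirehoseClientBuilder.standard()
            .withRegion("us-west-2") // placeholder Region
            .build();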

    For more code context, see the sample code included in the Amazon SDK. For information about request and response syntax, see the relevant topic in Firehose API Operations.

    Batch write operations using PutRecordBatch

    Putting data requires only the Firehose stream name and a list of records. Because Amazon Data Firehose batches multiple records before loading the file into Amazon S3, you may want to add a record separator. To put data records in batches into a Firehose stream, use the following code:

    PutRecordBatchRequest putRecordBatchRequest = new PutRecordBatchRequest();
    putRecordBatchRequest.setDeliveryStreamName(deliveryStreamName);
    putRecordBatchRequest.setRecords(recordList);

    // Put the batch of records. The maximum number of records in a
    // single PutRecordBatch request is 500.
    firehoseClient.putRecordBatch(putRecordBatchRequest);

    recordList.clear();
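    PutRecordBatch can partially fail, so callers typically inspect FailedPutCount before clearing the record list. The following sketch extends the fragment above (firehoseClient and recordList are assumed to exist as shown); entries with a non-null error code were not ingested and can be collected for retry.

    PutRecordBatchResult result = firehoseClient.putRecordBatch(putRecordBatchRequest);
    if (result.getFailedPutCount() > 0) {
        List<Record> retryList = new ArrayList<>();
        List<PutRecordBatchResponseEntry> responses = result.getRequestResponses();
        for (int i = 0; i < responses.size(); i++) {
            // A non-null error code means this record was not ingested.
            if (responses.get(i).getErrorCode() != null) {
                retryList.add(recordList.get(i));
            }
        }
        // Retry retryList (for example, with backoff) before clearing recordList.
    }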

    For more code context, see the sample code included in the Amazon SDK. For information about request and response syntax, see the relevant topic in Firehose API Operations.

    CloudWatch Logs

    CloudWatch Logs events can be sent to Firehose using CloudWatch subscription filters. For more information, see Subscription filters with Amazon Data Firehose.

    CloudWatch Logs events are sent to Firehose in compressed gzip format. If you want to deliver decompressed log events to Firehose destinations, you can use the decompression feature in Firehose to automatically decompress CloudWatch Logs.

    Important

    Currently, Firehose does not support the delivery of CloudWatch Logs to the Amazon OpenSearch Service destination because Amazon CloudWatch combines multiple log events into one Firehose record, and Amazon OpenSearch Service cannot accept multiple log events in one record. As an alternative, you can consider Using subscription filter for Amazon OpenSearch Service in CloudWatch Logs.

    Decompression of CloudWatch Logs

    If you are using Firehose to deliver CloudWatch Logs and want to deliver decompressed data to your Firehose stream destination, use Firehose Data Format Conversion (Parquet, ORC) or Dynamic partitioning. You must enable decompression for your Firehose stream.

    You can enable decompression using the Amazon Web Services Management Console, Amazon Command Line Interface or Amazon SDKs.

    Note

    If you enable the decompression feature on a stream, use that stream exclusively for CloudWatch Logs subscription filters, and not for Vended Logs. If you enable the decompression feature on a stream that is used to ingest both CloudWatch Logs and Vended Logs, the Vended Logs ingestion to Firehose fails. This decompression feature is only for CloudWatch Logs.

    Message extraction after decompression of CloudWatch Logs

    When you enable decompression, you have the option to also enable message extraction. When using message extraction, Firehose filters out all metadata, such as owner, loggroup, logstream, and others from the decompressed CloudWatch Logs records and delivers only the content inside the message fields. If you are delivering data to a Splunk destination, you must turn on message extraction for Splunk to parse the data. Following are sample outputs after decompression with and without message extraction.

    Fig 1: Sample output after decompression without message extraction:

    { "owner": "111111111111", "logGroup": "CloudTrail/logs", "logStream": "111111111111_CloudTrail/logs_us-east-1", "subscriptionFilters": [ "Destination" ], "messageType": "DATA_MESSAGE", "logEvents": [ { "id": "31953106606966983378809025079804211143289615424298221568", "timestamp": 1432826855000, "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root1\"}" }, { "id": "31953106606966983378809025079804211143289615424298221569", "timestamp": 1432826855000, "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root2\"}" }, { "id": "31953106606966983378809025079804211143289615424298221570", "timestamp": 1432826855000, "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root3\"}" } ] }

    Fig 2: Sample output after decompression with message extraction:

    {"eventVersion":"1.03","userIdentity":{"type":"Root1"} {"eventVersion":"1.03","userIdentity":{"type":"Root2"} {"eventVersion":"1.03","userIdentity":{"type":"Root3"}

    Enabling and disabling decompression

    You can enable and disable decompression using the Amazon Web Services Management Console, Amazon Command Line Interface or Amazon SDKs.

    Enabling decompression on a new Firehose stream from console

    To enable decompression on a new Firehose stream using the Amazon Web Services Management Console
    1. Sign in to the Amazon Web Services Management Console and open the Kinesis console at https://console.amazonaws.cn/kinesis.

    2. Choose Amazon Data Firehose in the navigation pane.

    3. Choose Create Firehose stream.

    4. Under Choose source and destination

      Source

      The source of your Firehose stream. Choose one of the following sources:

      • Direct PUT – Choose this option to create a Firehose stream that producer applications write to directly. For a list of Amazon services and agents and open source services that are integrated with Direct PUT in Firehose, see this section.

      • Kinesis stream: Choose this option to configure a Firehose stream that uses a Kinesis data stream as a data source. You can then use Firehose to read data easily from an existing Kinesis data stream and load it into destinations. For more information, see Writing to Firehose Using Kinesis Data Streams

      Destination

      The destination of your Firehose stream. Choose one of the following:

      • Amazon S3

      • Splunk

    5. Under Firehose stream name, enter a name for your stream.

    6. (Optional) Under Transform records:

      • In the Decompress source records from Amazon CloudWatch Logs section, choose Turn on decompression.

      • If you want to use message extraction after decompression, choose Turn on message extraction.

    Enabling decompression on an existing Firehose stream from console

    If you have a Firehose stream with a Lambda function to perform decompression, you can replace it with the Firehose decompression feature. Before you proceed, review your Lambda function code to confirm that it only performs decompression or message extraction. The output of your Lambda function should look similar to the examples shown in Fig 1 or Fig 2 in the previous section. If the output looks similar, you can replace the Lambda function using the following steps.

    1. Replace your current Lambda function with this blueprint. The new blueprint Lambda function automatically detects whether the incoming data is compressed or decompressed. It only performs decompression if its input data is compressed.

    2. Turn on decompression using the built-in Firehose option for decompression.

    3. Enable CloudWatch metrics for your Firehose stream if it's not already enabled. Monitor the metric CloudWatchProcessorLambda_IncomingCompressedData and wait until this metric changes to zero. This confirms that all input data sent to your Lambda function is decompressed and the Lambda function is no longer required.

    4. Remove the Lambda data transformation because you no longer need it to decompress your stream.

    Disabling decompression from console

    To disable decompression on a data stream using the Amazon Web Services Management Console

    1. Sign in to the Amazon Web Services Management Console and open the Kinesis console at https://console.amazonaws.cn/kinesis.

    2. Choose Amazon Data Firehose in the navigation pane.

    3. Choose the Firehose stream you wish to edit.

    4. On the Firehose stream details page, choose the Configuration tab.

    5. In the Transform and convert records section, choose Edit.

    6. Under Decompress source records from Amazon CloudWatch Logs, clear Turn on decompression and then choose Save changes.

    FAQ

    What happens to the source data in case of an error during decompression?

    If Amazon Data Firehose cannot decompress the record, the record is delivered as is (in compressed format) to the error S3 bucket that you specified when you created the Firehose stream. Along with the record, the delivered object also includes the error code and error message, and these objects are delivered to an S3 bucket prefix called decompression-failed. Firehose continues to process other records after a failed decompression of a record.

    What happens to the source data in case of an error in the processing pipeline after successful decompression?

    If Amazon Data Firehose encounters an error in the processing steps after decompression, such as dynamic partitioning or data format conversion, the record is delivered in compressed format to the error S3 bucket that you specified when you created the Firehose stream. Along with the record, the delivered object also includes the error code and error message.

    How are you informed in case of an error or an exception?

    If an error or exception occurs during decompression and you have configured CloudWatch Logs, Firehose logs error messages to CloudWatch Logs. Additionally, Firehose sends metrics to CloudWatch that you can monitor. You can also optionally create alarms based on metrics emitted by Firehose.

    What happens when put operations don't come from CloudWatch Logs?

    When put operations do not come from CloudWatch Logs, the following error message is returned:

    Put to Firehose failed for AccountId: <accountID>, FirehoseName: <firehosename> because the request is not originating from allowed source types.

    What metrics does Firehose emit for the decompression feature?

    Firehose emits metrics for the decompression of every record. Select the period (1 minute), the statistic (Sum), and a date range to get the number of DecompressedRecords or DecompressedBytes that failed or succeeded. For more information, see CloudWatch Logs Decompression Metrics.

    CloudWatch Events

    You can configure Amazon CloudWatch to send events to a Firehose stream by adding a target to a CloudWatch Events rule.

    To create a target for a CloudWatch Events rule that sends events to an existing Firehose stream
    1. Sign in to the Amazon Web Services Management Console and open the CloudWatch console at https://console.amazonaws.cn/cloudwatch/.

    2. Choose Create rule.

    3. On the Step 1: Create rule page, for Targets, choose Add target, and then choose Firehose stream.

    4. Choose an existing Firehose stream.

    For more information about creating CloudWatch Events rules, see Getting Started with Amazon CloudWatch Events.

    Amazon IoT

    You can configure Amazon IoT to send information to a Firehose stream by adding an action.

    To create an action that sends events to an existing Firehose stream
    1. When creating a rule in the Amazon IoT console, on the Create a rule page, under Set one or more actions, choose Add action.

    2. Choose Send messages to an Amazon Kinesis Firehose stream.

    3. Choose Configure action.

    4. For Stream name, choose an existing Firehose stream.

    5. For Separator, choose a separator character to be inserted between records.

    6. For IAM role name, choose an existing IAM role or choose Create a new role.

    7. Choose Add action.

    For more information about creating Amazon IoT rules, see Amazon IoT Rule Tutorials.