

# Log ingestion through HTTP endpoints

Amazon CloudWatch Logs provides HTTP endpoints that allow you to send logs directly to CloudWatch Logs using simple HTTP POST requests. These endpoints support both SigV4 and bearer token authentication.

**Important**  
We recommend using SigV4 authentication for all production workloads where Amazon SDK integration is possible. SigV4 uses short-term credentials and provides the strongest security posture. Bearer token (API key) authentication is intended for scenarios where SigV4 is not feasible, such as third-party log forwarders that do not support Amazon SDK integration. For more information, see [Alternatives to long-term access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-workloads-use-roles) in the *IAM User Guide*.

CloudWatch Logs supports the following HTTP ingestion endpoints:


| Endpoint | Path | Content-Type | Format | 
| --- | --- | --- | --- | 
| [OpenTelemetry Logs](CWL_HTTP_Endpoints_OTLP.md) | /v1/logs | application/json or application/x-protobuf | OTLP JSON or Protobuf | 
| [HLC Logs](CWL_HLC_Endpoint.md) | /services/collector/event | application/json | HLC format | 
| [ND-JSON Logs](CWL_HTTP_Endpoints_NDJSON.md) | /ingest/bulk | application/json or application/x-ndjson | Newline-delimited JSON | 
| [Structured JSON Logs](CWL_HTTP_Endpoints_StructuredJSON.md) | /ingest/json | application/json | JSON object or array | 

## Common behavior


All HTTP ingestion endpoints share the following behavior:

**Authentication**

All endpoints support both SigV4 and bearer token authentication:
+ **SigV4 (recommended)** – Standard Amazon Signature Version 4 signing. Use SigV4 whenever your application or infrastructure supports the Amazon SDK or can sign requests. SigV4 uses short-term credentials and is the most secure authentication method.
+ **Bearer token** – Use the `Authorization: Bearer <ACWL token>` header.
  + Token must be a valid ACWL bearer token. For setup instructions, see [Setting up bearer token authentication](CWL_HTTP_Endpoints_BearerTokenAuth.md).
  + Requires the `logs:PutLogEvents` and `logs:CallWithBearerToken` IAM permissions.

**Log group and log stream**
+ Provided via headers: `x-aws-log-group` and `x-aws-log-stream`
+ Query parameters `?logGroup=<name>&logStream=<name>` are also supported on all endpoints except OTLP.
+ You cannot use both query parameters and headers for the same parameter.
+ Both log group and log stream are required.

**Response**
+ Success: `HTTP 200` with body `{}`
+ Validation errors: `HTTP 400`
+ Auth failures: `HTTP 401`
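
The shared request contract above can be sketched as a small helper. This is an illustrative sketch only; the endpoint path, region, and token values are placeholders, and real requests also need a body in the endpoint-specific format.

```
# Sketch of the request shape shared by the HTTP ingestion endpoints.
# All values passed in here are placeholders, not working credentials.
def build_ingest_request(region, path, log_group, log_stream, token):
    """Return the URL and headers common to the HTTP ingestion endpoints."""
    url = f"https://logs.{region}.amazonaws.com{path}"
    headers = {
        "Authorization": f"Bearer {token}",   # bearer token authentication
        "Content-Type": "application/json",
        "x-aws-log-group": log_group,         # required
        "x-aws-log-stream": log_stream,       # required
    }
    return url, headers

url, headers = build_ingest_request(
    "us-east-1", "/ingest/json", "MyLogGroup", "MyStream", "ACWL-example")
```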

# Setting up bearer token authentication

Before you can send logs using bearer token authentication with any of the HTTP ingestion endpoints, you need to:
+ Create an IAM user with CloudWatch Logs permissions
+ Generate service-specific credentials (bearer token)
+ Create a log group and log stream
+ Enable bearer token authentication on the log group

**Important**  
We recommend using SigV4 authentication with short-term credentials for all workloads where this is possible. SigV4 provides the strongest security posture. Restrict the use of API keys (bearer tokens) to scenarios where short-term credential-based authentication is not feasible. When you are ready to incorporate CloudWatch Logs into applications with greater security requirements, you should switch to short-term credentials. For more information, see [Alternatives to long-term access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-workloads-use-roles) in the *IAM User Guide*.

## Option 1: Quick start using the Amazon console


The Amazon Management Console provides a streamlined workflow to generate API keys for HTTP endpoint access.

**To set up HTTP endpoint access using the console**

1. Sign in to the Amazon Management Console.

1. Navigate to **CloudWatch** > **Settings** > **Logs**.

1. In the API Keys section, choose **Generate API key**.

1. For **API key expiration**, do one of the following:
   + Select an API key expiration duration of **1**, **5**, **30**, **90**, or **365** days.
   + Choose **Custom duration** to specify a custom API key expiration date.
   + Select **Never expires** (not recommended).

1. Choose **Generate API key**.

   The console automatically:
   + Creates a new IAM user with appropriate permissions
   + Attaches the [CloudWatchLogsAPIKeyAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/CloudWatchLogsAPIKeyAccess.html) managed policy (includes `logs:PutLogEvents` and `logs:CallWithBearerToken` permissions)
   + Generates service-specific credentials (API key)

1. Copy and securely save the displayed credentials:
   + **API Key ID** (Service-specific credential ID)
   + **API Key Secret** (Bearer token)

   **Important**  
   Save the API Key Secret immediately. It cannot be retrieved later. If you lose it, you'll need to generate a new API key.

1. Create the log group and log stream where your logs will be stored:

   ```
   # Create the log group
   aws logs create-log-group \
       --log-group-name /aws/hlc-logs/my-application \
       --region us-east-1
   
   # Create the log stream
   aws logs create-log-stream \
       --log-group-name /aws/hlc-logs/my-application \
       --log-stream-name application-stream-001 \
       --region us-east-1
   ```

1. Enable bearer token authentication on the log group:

   ```
   aws logs put-bearer-token-authentication \
       --log-group-identifier /aws/hlc-logs/my-application \
       --bearer-token-authentication-enabled \
       --region us-east-1
   ```

   Verify the configuration:

   ```
   aws logs describe-log-groups \
       --log-group-name-prefix /aws/hlc-logs/my-application \
       --region us-east-1
   ```

**Permissions included:** The automatically created IAM user will have the following permissions:
+ `logs:PutLogEvents` – Send log events to CloudWatch Logs
+ `logs:CallWithBearerToken` – Authenticate using bearer token
+ `kms:Describe*`, `kms:GenerateDataKey*`, `kms:Decrypt` – Access KMS-encrypted log groups (with condition restricting to logs service)

## Option 2: Manual setup


If you prefer more control over the IAM configuration or need to customize permissions, you can set up the HTTP endpoint access manually.

### Step 1: Create an IAM user


Create an IAM user that will be used for log ingestion:

1. Sign in to the Amazon Management Console and navigate to IAM.

1. In the left navigation pane, choose **Users**.

1. Choose **Create user**.

1. Enter a user name (for example, `cloudwatch-logs-hlc-user`).

1. Choose **Next**.

1. Attach one of the following IAM policies:

   **Option A: Use the managed policy (recommended)**

   Attach the [CloudWatchLogsAPIKeyAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/CloudWatchLogsAPIKeyAccess.html) managed policy.

   **Option B: Create a custom policy**

   Create and attach the following IAM policy:

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "LogsAPIs",
               "Effect": "Allow",
               "Action": [
                   "logs:CallWithBearerToken",
                   "logs:PutLogEvents"
               ],
               "Resource": "*"
           },
           {
               "Sid": "KMSAPIs",
               "Effect": "Allow",
               "Action": [
                   "kms:Describe*",
                   "kms:GenerateDataKey*",
                   "kms:Decrypt"
               ],
               "Condition": {
                   "StringEquals": {
                       "kms:ViaService": [
                           "logs.*.amazonaws.com"
                       ]
                   }
               },
               "Resource": "arn:aws:kms:*:*:key/*"
           }
       ]
   }
   ```

1. Choose **Next** and then **Create user**.

**Note**  
The KMS permissions are required if you plan to send logs to KMS-encrypted log groups. The condition restricts KMS access to only keys used via CloudWatch Logs service.

### Step 2: Generate service-specific credentials (API key)


Generate the CloudWatch Logs API key using the [CreateServiceSpecificCredential](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateServiceSpecificCredential.html) API. You can also use the [create-service-specific-credential](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/create-service-specific-credential.html) CLI command. For the credential age, you can specify a value from 1 to 36600 days. If you don't specify a credential age, the API key will not expire.

To generate an API key with an expiration of 30 days:

```
aws iam create-service-specific-credential \
    --user-name cloudwatch-logs-hlc-user \
    --service-name logs.amazonaws.com \
    --credential-age-days 30
```

The response is a [ServiceSpecificCredential](https://docs.aws.amazon.com/IAM/latest/APIReference/API_ServiceSpecificCredential.html) object. The `ServiceCredentialSecret` value is your CloudWatch Logs API key (bearer token).

**Important**  
Store the `ServiceCredentialSecret` value securely, as you cannot retrieve it later. If you lose it, you'll need to generate a new API key.

### Step 3: Create log group and log stream


Create the log group and log stream where your logs will be stored:

```
# Create the log group
aws logs create-log-group \
    --log-group-name /aws/hlc-logs/my-application \
    --region us-east-1

# Create the log stream
aws logs create-log-stream \
    --log-group-name /aws/hlc-logs/my-application \
    --log-stream-name application-stream-001 \
    --region us-east-1
```

### Step 4: Enable bearer token authentication


Enable bearer token authentication on the log group:

```
aws logs put-bearer-token-authentication \
    --log-group-identifier /aws/hlc-logs/my-application \
    --bearer-token-authentication-enabled \
    --region us-east-1
```

Verify the configuration:

```
aws logs describe-log-groups \
    --log-group-name-prefix /aws/hlc-logs/my-application \
    --region us-east-1
```

## Control permissions for generating and using CloudWatch Logs API keys


The generation and usage of CloudWatch Logs API keys is controlled by actions and condition keys in both the CloudWatch Logs and IAM services.

### Controlling the generation of CloudWatch Logs API keys


The [iam:CreateServiceSpecificCredential](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsidentityandaccessmanagementiam.html#awsidentityandaccessmanagementiam-actions-as-permissions) action controls the generation of a service-specific key (such as a CloudWatch Logs API key). You can scope this action to IAM users as a resource to limit the users for which a key can be generated.

You can use the following condition keys to impose conditions on the permission for the `iam:CreateServiceSpecificCredential` action:
+ [iam:ServiceSpecificCredentialAgeDays](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html#ck_ServiceSpecificCredentialAgeDays) – Lets you specify, in the condition, the key's expiration time in days. For example, you can use this condition key to only allow the creation of API keys that expire within 90 days.
+ [iam:ServiceSpecificCredentialServiceName](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html#ck_ServiceSpecificCredentialServiceName) – Lets you specify, in the condition, the name of a service. For example, you can use this condition key to only allow the creation of API keys for CloudWatch Logs and not other services.

### Controlling the usage of CloudWatch Logs API keys


The `logs:CallWithBearerToken` action controls the use of a CloudWatch Logs API key. To prevent an identity from using CloudWatch Logs API keys, attach a policy that denies the `logs:CallWithBearerToken` action to the IAM user associated with the key.

### Example policies


#### Prevent an identity from generating and using CloudWatch Logs API keys


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCWLAPIKeys",
            "Effect": "Deny",
            "Action": [
                "iam:CreateServiceSpecificCredential",
                "logs:CallWithBearerToken"
            ],
            "Resource": "*"
        }
    ]
}
```

**Warning**  
This policy will prevent the creation of credentials for all Amazon services that support creating service-specific credentials. For more information, see [Service-specific credentials for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_service-specific-creds.html).

#### Prevent an identity from using CloudWatch Logs API keys


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "logs:CallWithBearerToken",
            "Resource": "*"
        }
    ]
}
```

#### Allow the creation of CloudWatch Logs keys only if they expire within 90 days


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:CreateServiceSpecificCredential",
            "Resource": "arn:aws:iam::123456789012:user/username",
            "Condition": {
                "StringEquals": {
                    "iam:ServiceSpecificCredentialServiceName": "logs.amazonaws.com"
                },
                "NumericLessThanEquals": {
                    "iam:ServiceSpecificCredentialAgeDays": "90"
                }
            }
        }
    ]
}
```

## Rotating API keys


Regularly rotating your API keys reduces the risk of unauthorized access. We recommend establishing a rotation schedule that aligns with your organization's security policies.

### Rotation process


To rotate an API key without interrupting log delivery, follow this procedure:

1. Create a new (secondary) credential for the IAM user:

   ```
   aws iam create-service-specific-credential \
       --user-name cloudwatch-logs-hlc-user \
       --service-name logs.amazonaws.com \
       --credential-age-days 90
   ```

1. (Optional) Store the new credential in Amazon Secrets Manager for secure retrieval and automated rotation.

1. Import the new credential into your vendor's portal or update your application configuration to use the new API key.

1. Set the original credential to inactive:

   ```
   aws iam update-service-specific-credential \
       --user-name cloudwatch-logs-hlc-user \
       --service-specific-credential-id ACCA1234EXAMPLE1234 \
       --status Inactive
   ```

1. Verify that log delivery is not impacted by monitoring the `IncomingBytes` metric for your log group in CloudWatch. For more information, see [Monitoring with CloudWatch metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatch-Logs-Monitoring-CloudWatch-Metrics.html).

1. After confirming successful delivery with the new key, delete the previous credential:

   ```
   aws iam delete-service-specific-credential \
       --service-specific-credential-id ACCA1234EXAMPLE1234
   ```

### Monitoring key expiration


To check the creation date and status of your existing API keys, use the [list-service-specific-credentials](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/list-service-specific-credentials.html) command:

```
aws iam list-service-specific-credentials \
    --user-name cloudwatch-logs-hlc-user \
    --service-name logs.amazonaws.com
```

The response includes `CreateDate` and `Status` for each credential. Use this information to identify keys that are approaching expiration or have been active longer than your rotation policy allows.
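
As a sketch of that check, the following flags credentials that exceed a rotation threshold, given the parsed `ServiceSpecificCredentials` list from that command. The field names mirror the documented response shape; the credential IDs and dates are placeholders.

```
from datetime import datetime, timezone

# Sketch: flag active credentials older than a rotation threshold,
# given the parsed output of list-service-specific-credentials.
def keys_needing_rotation(credentials, max_age_days=90, now=None):
    now = now or datetime.now(timezone.utc)
    stale = []
    for cred in credentials:
        created = datetime.fromisoformat(cred["CreateDate"])
        age_days = (now - created).days
        if cred["Status"] == "Active" and age_days > max_age_days:
            stale.append(cred["ServiceSpecificCredentialId"])
    return stale

creds = [
    {"ServiceSpecificCredentialId": "ACCA1234EXAMPLE1234",
     "CreateDate": "2024-01-01T00:00:00+00:00", "Status": "Active"},
    {"ServiceSpecificCredentialId": "ACCA5678EXAMPLE5678",
     "CreateDate": "2025-06-01T00:00:00+00:00", "Status": "Active"},
]
stale = keys_needing_rotation(
    creds, max_age_days=90, now=datetime(2025, 6, 15, tzinfo=timezone.utc))
# Only the key created in January exceeds the 90-day threshold.
```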

## Responding to a compromised API key


If you suspect that an API key has been compromised, take the following steps immediately:

1. **Deactivate the key immediately** to prevent further unauthorized use:

   ```
   aws iam update-service-specific-credential \
       --user-name cloudwatch-logs-hlc-user \
       --service-specific-credential-id ACCA1234EXAMPLE1234 \
       --status Inactive
   ```

1. **Review CloudTrail logs** to determine the scope of unauthorized access. See [Logging API key usage with CloudTrail](#CWL_HTTP_Endpoints_CloudTrail_Logging) for how to enable auditing of API key usage.

1. **Create a replacement key** following the rotation process described in [Rotation process](#CWL_HTTP_Endpoints_Rotation_Process).

1. **Delete the compromised key** after the replacement is in place:

   ```
   aws iam delete-service-specific-credential \
       --service-specific-credential-id ACCA1234EXAMPLE1234
   ```

1. **Attach a deny policy** if you need to immediately block all bearer token access for the IAM user while you investigate:

   ```
   {
       "Version": "2012-10-17",
       "Statement": {
           "Effect": "Deny",
           "Action": "logs:CallWithBearerToken",
           "Resource": "*"
       }
   }
   ```

**Note**  
To carry out these actions through the API, you must authenticate with Amazon credentials and not with a CloudWatch Logs API key.

You can also use the [ResetServiceSpecificCredential](https://docs.aws.amazon.com/IAM/latest/APIReference/API_ResetServiceSpecificCredential.html) IAM API operation to manage a compromised key. It resets the key to generate a new password without deleting the credential. The key must not have expired.

## Security best practices for API keys


Follow these best practices to protect your CloudWatch Logs API keys:
+ **Never embed API keys in source code.** Do not hard-code API keys in application code or commit them to version control systems. If a key is accidentally committed to a public repository, Amazon automated scanning may flag it and you should rotate the key immediately.
+ **Use a secrets manager.** Store API keys in [Amazon Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) or an equivalent secrets management solution. This enables centralized access control, audit logging, and automated rotation.
+ **Set an expiration on all keys.** Always specify a `--credential-age-days` value when creating API keys. To enforce a maximum key lifetime across your organization, use the `iam:ServiceSpecificCredentialAgeDays` IAM condition key. For an example, see [Allow the creation of CloudWatch Logs keys only if they expire within 90 days](#CWL_HTTP_Endpoints_Allow_Expire_90).
+ **Apply least-privilege permissions.** Scope the IAM user's permissions to only the log groups and actions required. Use the managed [CloudWatchLogsAPIKeyAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/CloudWatchLogsAPIKeyAccess.html) policy as a starting point and restrict further as needed.
+ **Enable CloudTrail logging.** Audit API key usage by enabling CloudTrail data events for `AWS::Logs::LogGroupAuthorization`. See [Logging API key usage with CloudTrail](#CWL_HTTP_Endpoints_CloudTrail_Logging).
+ **Monitor with IAM Access Analyzer.** Use [IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html) to identify unused credentials and overly permissive policies associated with your API key IAM users.
+ **Rotate keys regularly.** Establish a rotation schedule and follow the process described in [Rotating API keys](#CWL_HTTP_Endpoints_Rotating_Keys).

## Logging API key usage with CloudTrail


You can use Amazon CloudTrail to log data events for CloudWatch Logs API key usage. CloudWatch Logs emits `AWS::Logs::LogGroupAuthorization` data events for `CallWithBearerToken` calls, enabling you to audit when and how API keys are used to send logs.

To enable CloudTrail logging for CloudWatch Logs API key usage:

**Note**  
The S3 bucket that you specify for the trail must have a bucket policy that allows CloudTrail to write log files to it. For more information, see [Amazon S3 bucket policy for CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/create-s3-bucket-policy-for-cloudtrail.html).

1. Create a trail:

   ```
   aws cloudtrail create-trail \
       --name cloudwatch-logs-api-key-audit \
       --s3-bucket-name my-cloudtrail-bucket \
       --region us-east-1
   ```

1. Configure advanced event selectors to capture CloudWatch Logs log group authorization events:

   ```
   aws cloudtrail put-event-selectors \
       --region us-east-1 \
       --trail-name cloudwatch-logs-api-key-audit \
       --advanced-event-selectors '[{
           "Name": "CloudWatch Logs API key authorization events",
           "FieldSelectors": [
               { "Field": "eventCategory", "Equals": ["Data"] },
               { "Field": "resources.type", "Equals": ["AWS::Logs::LogGroupAuthorization"] }
           ]
       }]'
   ```

1. Start trail logging:

   ```
   aws cloudtrail start-logging \
       --name cloudwatch-logs-api-key-audit \
       --region us-east-1
   ```

# Sending logs using the OTLP endpoint (OpenTelemetry Logs)

The OpenTelemetry Logs endpoint (`/v1/logs`) accepts OpenTelemetry Protocol (OTLP) log data in either JSON or Protobuf encoding. For detailed information about the OTLP endpoint, including configuration and usage, see [Send metrics and traces to CloudWatch with OpenTelemetry](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-OTLPEndpoint.html).

If you are using bearer token authentication, complete the setup steps in [Setting up bearer token authentication](CWL_HTTP_Endpoints_BearerTokenAuth.md) before proceeding.

## Request format

+ Method: `POST`
+ Content-Type: `application/json` or `application/x-protobuf`
+ Log group: `x-aws-log-group` header only (query parameter not supported)
+ Log stream: `x-aws-log-stream` header

## Example request


```
curl -X POST "https://logs.<region>.amazonaws.com/v1/logs" \
  -H "Authorization: Bearer ACWL<token>" \
  -H "Content-Type: application/json" \
  -H "x-aws-log-group: MyLogGroup" \
  -H "x-aws-log-stream: MyLogStream" \
  -d '{
  "resourceLogs": [
    {
      "resource": {
        "attributes": [
          {
            "key": "service.name",
            "value": { "stringValue": "my-service" }
          }
        ]
      },
      "scopeLogs": [
        {
          "scope": {
            "name": "my-library",
            "version": "1.0.0"
          },
          "logRecords": [
            {
              "timeUnixNano": "1741900000000000000",
              "severityNumber": 9,
              "severityText": "INFO",
              "body": {
                "stringValue": "User logged in successfully"
              },
              "attributes": [
                {
                  "key": "user.id",
                  "value": { "stringValue": "12345" }
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}'
```
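
The `timeUnixNano` value in the example above is a string of epoch nanoseconds. As a sketch, a current-time value can be generated like this in Python:

```
import time

# Sketch: produce an OTLP-style timeUnixNano string
# (epoch nanoseconds) for a log record.
def otlp_time_unix_nano():
    return str(time.time_ns())

nano = otlp_time_unix_nano()
```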

## Responses


**Success (all events accepted):**

```
HTTP 200 OK
{}
```

**Partial success (some events rejected):**

```
{
  "partialSuccess": {
    "rejectedLogRecords": 5,
    "errorMessage": "{\"tooOldLogEventCount\": 3, \"tooNewLogEventCount\": 1, \"expiredLogEventCount\": 1}"
  }
}
```

When the request Content-Type is `application/x-protobuf`, the response is returned as a serialized `ExportLogsServiceResponse` protobuf message with the same fields.

## OTLP-specific behaviors


The following behaviors are specific to the OTLP endpoint and are not present on the other HTTP ingestion endpoints:
+ **Retry-After header** – Included on 503 and 429 responses to indicate when the client should retry.

# Sending logs using the HLC endpoint (HLC Logs)

The HLC Logs endpoint (`/services/collector/event`) is based on the HTTP Log Collector (HLC) format.

If you are using bearer token authentication, complete the setup steps in [Setting up bearer token authentication](CWL_HTTP_Endpoints_BearerTokenAuth.md) before proceeding.

## Input modes


Each event is a JSON object with a required `"event"` field. Optional metadata fields: `"time"`, `"host"`, `"source"`, `"sourcetype"`, `"index"`.

**Single event:**

```
{"event":"Hello world!","time":1486683865.0}
```

**JSON array of events:**

```
[
  {"event":"msg1","time":1486683865.0},
  {"event":"msg2","time":1486683866.0}
]
```

**Concatenated/batched events (no array wrapper):**

```
{"event":"msg1","time":1486683865.0}{"event":"msg2","time":1486683866.0}
```

## Event field (required)


The `"event"` field is mandatory. Its value can be any JSON type:

```
{"event":"a string message"}
{"event":{"message":"structured data","severity":"INFO"}}
{"event":42}
{"event":true}
```

Objects without an `"event"` field are silently skipped:

```
{"message":"this is skipped — no event field"}
```

## Time field (optional)


The `"time"` field is in epoch seconds (not milliseconds), with optional decimal for sub-second precision.


| Format | Example | Interpreted as | 
| --- | --- | --- | 
| Float | "time":1486683865.500 | 1486683865500 ms | 
| Integer | "time":1486683865 | 1486683865000 ms | 
| String (float) | "time":"1486683865.500" | 1486683865500 ms | 
| String (integer) | "time":"1486683865" | 1486683865000 ms | 
| Missing | (no time field) | Server current time | 
| Invalid | "time":"invalid" | Server current time | 
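
The interpretation rules in this table can be sketched as a small conversion function: epoch seconds (int, float, or numeric string) become epoch milliseconds, and missing or invalid values fall back to the current time. This is an illustrative sketch of the documented behavior, not the service's implementation.

```
import time

# Sketch of the "time" interpretation described above:
# epoch seconds -> epoch milliseconds, with a current-time fallback.
def hlc_time_to_millis(value, now_ms=None):
    now_ms = now_ms if now_ms is not None else int(time.time() * 1000)
    try:
        return int(float(value) * 1000)
    except (TypeError, ValueError):
        return now_ms   # missing or invalid: server current time
```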

## Content-Type


Only `application/json` is accepted.

## Accepted JSON value types



| Top-level type | Behavior | 
| --- | --- | 
| Object with "event" | Accepted | 
| Object without "event" | Skipped | 
| Array of objects | Each element processed individually | 
| Concatenated objects | Each object processed individually | 
| Primitive (string, number, boolean, null) | Skipped | 
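
The acceptance rules in this table can be sketched as a parser that handles single objects, arrays, and concatenated objects, keeping only objects that carry the required `"event"` field. This is an illustrative sketch of the documented behavior.

```
import json

# Sketch of the HLC acceptance rules: parse a body that may be a
# single object, an array, or concatenated objects, and keep only
# objects that have the required "event" field.
def extract_hlc_events(body):
    decoder = json.JSONDecoder()
    values, idx = [], 0
    body = body.strip()
    while idx < len(body):
        value, end = decoder.raw_decode(body, idx)   # concatenated JSON
        values.append(value)
        idx = end
        while idx < len(body) and body[idx].isspace():
            idx += 1
    events = []
    for value in values:
        items = value if isinstance(value, list) else [value]
        for item in items:
            if isinstance(item, dict) and "event" in item:
                events.append(item)   # accepted
            # objects without "event" and primitives are skipped
    return events
```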

## Endpoint format


The HLC endpoint URL follows this format:

```
https://logs.<region>.amazonaws.com/services/collector/event?logGroup=<name>&logStream=<name>[&entityName=<name>&entityEnvironment=<environment>]
```

**Required parameters:**
+ `<region>` – Amazon Region (for example, `us-east-1`, `eu-west-1`)
+ `logGroup` – URL-encoded log group name
+ `logStream` – URL-encoded log stream name

**Optional parameters:**

You can optionally associate your log events with a `Service` entity by including the following query parameters. Because logs sent through the HLC endpoint are custom telemetry, they are not automatically associated with an entity. By providing these parameters, CloudWatch Logs creates an entity with `KeyAttributes.Type` set to `Service` and associates it with your log events. This enables the **Explore related** feature in CloudWatch to correlate these logs with other telemetry (metrics, traces, and logs) from the same service, making it easier to troubleshoot and monitor your applications across different signal types. For more information about entities and related telemetry, see [Adding related information to custom telemetry](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/adding-your-own-related-telemetry.html).
+ `entityName` – The name of the service entity to associate with the log events. This value is stored as the entity `KeyAttributes.Name` (for example, `my-application` or `api.myservice.com`).
+ `entityEnvironment` – The environment where the service is hosted or what it belongs to. This value is stored as the entity `KeyAttributes.Environment` (for example, `production`, `ec2:default`, or `eks:my-cluster/default`).
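
Because the log group and log stream names must be URL-encoded, building the endpoint URL programmatically avoids encoding mistakes. A minimal sketch (region, names, and entity values are placeholders):

```
from urllib.parse import urlencode

# Sketch: build the HLC endpoint URL with URL-encoded query parameters.
def hlc_endpoint_url(region, log_group, log_stream,
                     entity_name=None, entity_environment=None):
    params = {"logGroup": log_group, "logStream": log_stream}
    if entity_name:
        params["entityName"] = entity_name
    if entity_environment:
        params["entityEnvironment"] = entity_environment
    return (f"https://logs.{region}.amazonaws.com"
            f"/services/collector/event?{urlencode(params)}")

url = hlc_endpoint_url("us-east-1", "/aws/hlc-logs/my-application",
                       "application-stream-001", entity_name="my-application")
```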

## Request format


Send logs using HTTP POST with the following headers and body:

**Headers:**
+ `Authorization: Bearer <your-bearer-token>`
+ `Content-Type: application/json`

**Body format:**

The request body should be in JSON format with an array of events:

```
{
    "event": [
        {
            "time": 1730141374.001,
            "event": "Application started successfully",
            "host": "web-server-1",
            "source": "application.log",
            "severity": "info"
        },
        {
            "time": 1730141374.457,
            "event": "User login successful",
            "host": "web-server-1",
            "source": "auth.log",
            "user": "john.doe"
        }
    ]
}
```

**Field descriptions:**
+ `time` – Unix epoch timestamp in seconds, with optional decimal for sub-second precision (optional)
+ `event` – The log message or event data (required)
+ `host` – Source hostname or identifier (optional)
+ `source` – Log source identifier (optional)

Additional custom fields can be included as needed.

## Example request


```
curl -X POST "https://logs.<region>.amazonaws.com/services/collector/event?logGroup=MyLogGroup&logStream=MyStream" \
  -H "Authorization: Bearer ACWL<token>" \
  -H "Content-Type: application/json" \
  -d '{"event":{"message":"User logged in","user_id":"u-123"},"time":1486683865.0,"host":"web-01","source":"auth-service"}'
```

## Best practices


### Batching events


For better performance and efficiency:
+ Batch multiple events in a single request when possible
+ Recommended batch size: 10–100 events per request
+ Maximum request size: 1 MB
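
A client-side batching helper can enforce these limits before sending. This sketch splits events into batches of at most 100 events and roughly 1 MB of serialized JSON; the limits are taken from the guidance above and can be tuned.

```
import json

# Sketch: split events into batches that respect the documented
# limits (here 100 events and ~1 MB of serialized JSON per request).
def batch_events(events, max_events=100, max_bytes=1_000_000):
    batches, current, size = [], [], 0
    for event in events:
        event_size = len(json.dumps(event).encode("utf-8"))
        if current and (len(current) >= max_events
                        or size + event_size > max_bytes):
            batches.append(current)   # flush the full batch
            current, size = [], 0
        current.append(event)
        size += event_size
    if current:
        batches.append(current)
    return batches

batches = batch_events([{"event": f"msg{i}"} for i in range(250)])
# 250 small events -> batches of at most 100 events each
```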

### Error handling


Implement proper error handling in your application. Common HTTP status codes:
+ `200 OK` – Logs successfully ingested
+ `400 Bad Request` – Invalid request format or parameters
+ `401 Unauthorized` – Invalid or expired bearer token
+ `403 Forbidden` – Insufficient permissions
+ `404 Not Found` – Log group or stream doesn't exist
+ `429 Too Many Requests` – Rate limit exceeded
+ `500 Internal Server Error` – Service error (retry with exponential backoff)
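
A retry wrapper following these codes might retry `429` and `5xx` responses with exponential backoff and jitter, and fail fast on the `4xx` errors, where retrying will not help. A minimal sketch, with the actual HTTP call abstracted behind a `send` callable:

```
import random
import time

# Sketch: retry 429 and 5xx with exponential backoff and jitter;
# fail fast on 4xx errors. `send` returns an HTTP status code.
def send_with_retries(send, max_attempts=5, base_delay=0.5):
    for attempt in range(max_attempts):
        status = send()
        if status == 200:
            return True
        if status == 429 or status >= 500:
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)
            continue
        return False   # 400/401/403/404: retrying will not help
    return False

# Example: a fake sender that succeeds on the third call.
responses = iter([500, 429, 200])
ok = send_with_retries(lambda: next(responses), base_delay=0.01)
```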

## Limitations

+ Maximum event size: 256 KB per event
+ Maximum request size: 1 MB
+ Maximum events per request: 10,000
+ Log group names must follow CloudWatch Logs naming conventions
+ If you use bearer token authentication, it must be enabled on the log group.

# Sending logs using the NDJSON endpoint (ND-JSON Logs)

The ND-JSON Logs endpoint (`/ingest/bulk`) accepts logs in [NDJSON (Newline Delimited JSON)](https://github.com/ndjson/ndjson-spec) format. Each line contains exactly one JSON value, separated by newline characters.

If you are using bearer token authentication, complete the setup steps in [Setting up bearer token authentication](CWL_HTTP_Endpoints_BearerTokenAuth.md) before proceeding.

## Request format


Send one JSON value per line, separated by `\n` (LF) or `\r\n` (CRLF). Empty lines are silently ignored.

```
{"timestamp":1771007942000,"message":"event one","level":"INFO"}
{"timestamp":1771007943000,"message":"event two","level":"ERROR"}
{"timestamp":1771007944000,"message":"event three","level":"DEBUG"}
```

Both `application/json` and `application/x-ndjson` are accepted as the Content-Type.

## Accepted JSON value types


Per the NDJSON spec, any valid JSON value (as defined in RFC 8259) is accepted on each line.

**JSON objects (most common):**

```
{"timestamp":1771007942000,"message":"User logged in","service":"auth"}
{"timestamp":1771007943000,"error":"Connection timeout","service":"api"}
```

**JSON arrays (flattened into individual events):**

```
[{"timestamp":1000,"message":"a"},{"timestamp":2000,"message":"b"}]
```

This single line produces 2 events. Each array element becomes a separate log event.

**Primitive values:**

```
"a plain string log message"
42
true
null
```

Each primitive becomes its own event with the server's current timestamp.

**Mixed types:**

```
{"timestamp":1771007942000,"message":"structured event"}
"unstructured string message"
42
{"timestamp":1771007943000,"error":"something failed"}
```

All 4 lines are accepted as valid events.


| Line content | Behavior | 
| --- | --- | 
| JSON object | Accepted, timestamp extracted if present | 
| JSON array | Flattened – each element becomes a separate event | 
| Empty array [] | Accepted, produces 0 events | 
| JSON string | Accepted as event message | 
| JSON number | Accepted as event message | 
| JSON boolean | Accepted as event message | 
| JSON null | Accepted as event message | 
| Invalid JSON | Skipped (counted, processing continues) | 
| Empty line | Ignored (not counted as skipped) | 

## Timestamp field


The `"timestamp"` field is in epoch milliseconds (not seconds).


| Format | Example | Interpreted as | 
| --- | --- | --- | 
| Numeric (millis) | "timestamp":1771007942000 | 1771007942000 ms | 
| Missing | (no timestamp field) | Server current time | 
| Non-numeric | "timestamp":"invalid" | Server current time | 
| Non-object line | "hello", 42, true | Server current time | 
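When generating events client-side, the epoch-millisecond value can be derived from the system clock. A minimal Python sketch (note that a seconds-granularity value would be interpreted as a date in 1970 and rejected as too old):

```python
import time

def epoch_millis():
    """Current time as epoch milliseconds, the format the
    "timestamp" field expects."""
    return int(time.time() * 1000)

event = {"timestamp": epoch_millis(), "message": "clock check"}
```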

## Invalid lines


Lines that are not valid JSON are skipped and counted as invalid. Processing continues with the next line.

```
{"message":"valid event"}
this is not valid json
{"message":"another valid event"}
```

Result: 2 events ingested, 1 skipped. Returns `HTTP 200`.

If all lines are invalid, returns `HTTP 400` with `"All events were invalid"`.

## Example request


```
curl -X POST "https://logs.<region>.amazonaws.com/ingest/bulk?logGroup=MyLogGroup&logStream=MyStream" \
  -H "Authorization: Bearer ACWL<token>" \
  -H "Content-Type: application/x-ndjson" \
  -d '{"timestamp":1771007942000,"message":"User logged in","level":"INFO"}
{"timestamp":1771007943000,"message":"Query took 42ms","level":"DEBUG"}
{"timestamp":1771007944000,"error":"Connection refused","level":"ERROR"}'
```
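The same request can be issued from Python's standard library. This sketch only builds the request object (pass it to `urllib.request.urlopen` to send it); the region, log group, stream, and token values are placeholders:

```python
import urllib.request

def build_bulk_request(region, log_group, log_stream, ndjson_body, token):
    """Construct a POST to the /ingest/bulk endpoint with bearer
    token authentication."""
    url = (f"https://logs.{region}.amazonaws.com/ingest/bulk"
           f"?logGroup={log_group}&logStream={log_stream}")
    return urllib.request.Request(
        url,
        data=ndjson_body.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/x-ndjson",
        },
        method="POST",
    )

req = build_bulk_request("us-east-1", "MyLogGroup", "MyStream",
                         '{"message":"hello"}', "ACWL-example-token")
```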

## Responses


**Success (all events accepted):**

```
HTTP 200 OK
{}
```

**Partial success (some events rejected):**

```
{
  "partialSuccess": {
    "rejectedLogRecords": 5,
    "errorMessage": "{\"tooOldLogEventCount\": 3, \"tooNewLogEventCount\": 1, \"expiredLogEventCount\": 1}"
  }
}
```

The `rejectedLogRecords` field is the total number of rejected events. The `errorMessage` field contains a JSON-encoded breakdown by rejection reason:
+ `tooOldLogEventCount` – Events with timestamps older than the retention period
+ `tooNewLogEventCount` – Events with timestamps too far in the future
+ `expiredLogEventCount` – Events that expired during processing
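Because `errorMessage` is itself JSON-encoded, clients need a second parse to read the breakdown. A minimal sketch:

```python
import json

def rejection_breakdown(response_body):
    """Return (rejected_count, per-reason dict) from a partial-success
    response body, or (0, {}) when every event was accepted."""
    doc = json.loads(response_body)
    partial = doc.get("partialSuccess")
    if not partial:
        return 0, {}
    # errorMessage is a JSON string embedded inside the JSON response
    return partial["rejectedLogRecords"], json.loads(partial["errorMessage"])

count, reasons = rejection_breakdown(
    '{"partialSuccess":{"rejectedLogRecords":5,'
    '"errorMessage":"{\\"tooOldLogEventCount\\": 3, '
    '\\"tooNewLogEventCount\\": 1, \\"expiredLogEventCount\\": 1}"}}'
)
```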

## Best practices


### Batching events


For better performance and efficiency:
+ Batch multiple events in a single request when possible
+ Recommended batch size: 10–100 events per request
+ Maximum request size: 1 MB
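One way to respect both the recommended event count and the 1 MB request cap is to chunk events greedily by serialized size. A sketch, with limits taken from this page:

```python
import json

MAX_BATCH_BYTES = 1_000_000   # stay under the 1 MB request cap
MAX_BATCH_EVENTS = 100        # upper end of the recommended batch size

def batch_events(events):
    """Yield lists of events whose NDJSON serialization stays within
    the count and byte limits above."""
    batch, size = [], 0
    for event in events:
        line_len = len(json.dumps(event).encode("utf-8")) + 1  # +1 for "\n"
        if batch and (len(batch) >= MAX_BATCH_EVENTS
                      or size + line_len > MAX_BATCH_BYTES):
            yield batch
            batch, size = [], 0
        batch.append(event)
        size += line_len
    if batch:
        yield batch
```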

### Error handling


Implement proper error handling in your application. Common HTTP status codes:
+ `200 OK` – Logs successfully ingested
+ `400 Bad Request` – Invalid request format or parameters
+ `401 Unauthorized` – Invalid or expired bearer token
+ `403 Forbidden` – Insufficient permissions
+ `404 Not Found` – Log group or stream doesn't exist
+ `429 Too Many Requests` – Rate limit exceeded
+ `500 Internal Server Error` – Service error (retry with exponential backoff)
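A retry loop with exponential backoff for the retryable statuses (`429` and 5xx) might look like the following sketch; `send_once` is a placeholder for whatever function actually issues the HTTP request and returns its status code:

```python
import random
import time

RETRYABLE = {429, 500, 502, 503, 504}

def send_with_backoff(send_once, max_attempts=5, base_delay=0.5):
    """Call send_once() until it returns a non-retryable status or
    attempts run out; returns the last status code seen."""
    for attempt in range(max_attempts):
        status = send_once()
        if status not in RETRYABLE:
            return status
        if attempt < max_attempts - 1:
            # full jitter: sleep somewhere in [0, base * 2^attempt]
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
    return status
```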

## Limitations

+ Maximum event size: 256 KB per event
+ Maximum request size: 1 MB
+ Maximum events per request: 10,000
+ Log group names must follow CloudWatch Logs naming conventions
+ If you use bearer token authentication, it must be enabled on the log group.

# Sending logs using the Structured JSON endpoint (Structured JSON Logs)
Structured JSON endpoint

The Structured JSON Logs endpoint (`/ingest/json`) accepts standard JSON – either a single JSON object or a JSON array of objects. This endpoint is designed for structured log data where each event is a JSON object.

If you are using bearer token authentication, complete the setup steps in [Setting up bearer token authentication](CWL_HTTP_Endpoints_BearerTokenAuth.md) before proceeding.

## Request format


Only `application/json` is accepted as the Content-Type.

**Single JSON object:**

```
{"timestamp":1771007942000,"message":"single event","level":"INFO"}
```

**JSON array of objects:**

```
[
  {"timestamp":1771007942000,"message":"event one","level":"INFO"},
  {"timestamp":1771007943000,"message":"event two","level":"ERROR"}
]
```

## Accepted JSON value types


This endpoint is strict – only JSON objects are accepted as events.


| Input | Behavior | 
| --- | --- | 
| Single JSON object | Accepted as one event | 
| JSON array of objects | Each object becomes a separate event | 
| Empty array [] | Accepted, produces 0 events | 
| Non-object in array (string, number, etc.) | Skipped | 
| Top-level primitive ("hello", 42) | Skipped | 
| Concatenated objects `{...}{...}` | Only the first object is parsed | 

**Example – array with mixed types:**

```
[
  {"timestamp":1771007942000,"message":"valid object"},
  "just a string",
  42,
  {"timestamp":1771007943000,"message":"another valid object"}
]
```

Result: 2 events ingested (the objects), 2 skipped (the string and number).
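Because only objects are kept, a client can predict which items the endpoint will accept by filtering for JSON objects (Python dicts) before sending. A sketch mirroring the skip rules above:

```python
import json

def accepted_events(payload):
    """Return the items of a /ingest/json payload that the endpoint
    would accept: a top-level object, or the objects within a
    top-level array. Everything else is dropped."""
    value = json.loads(payload)
    if isinstance(value, dict):
        return [value]
    if isinstance(value, list):
        return [item for item in value if isinstance(item, dict)]
    return []  # top-level primitives are skipped

events = accepted_events(
    '[{"message":"valid object"}, "just a string", 42,'
    ' {"message":"another valid object"}]'
)
```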

## Timestamp field


The `"timestamp"` field is in epoch milliseconds, same as the NDJSON endpoint.


| Format | Example | Interpreted as | 
| --- | --- | --- | 
| Numeric (millis) | "timestamp":1771007942000 | 1771007942000 ms | 
| Missing | (no timestamp field) | Server current time | 
| Non-numeric | "timestamp":"invalid" | Server current time | 

## Example request


```
curl -X POST "https://logs.<region>.amazonaws.com/ingest/json?logGroup=MyLogGroup&logStream=MyStream" \
  -H "Authorization: Bearer ACWL<token>" \
  -H "Content-Type: application/json" \
  -d '[{"timestamp":1771007942000,"message":"User logged in","user_id":"u-123"},{"timestamp":1771007943000,"message":"Order placed","order_id":"o-456"}]'
```

## Responses


**Success (all events accepted):**

```
HTTP 200 OK
{}
```

**Partial success (some events rejected):**

```
{
  "partialSuccess": {
    "rejectedLogRecords": 5,
    "errorMessage": "{\"tooOldLogEventCount\": 3, \"tooNewLogEventCount\": 1, \"expiredLogEventCount\": 1}"
  }
}
```

The `rejectedLogRecords` field is the total number of rejected events. The `errorMessage` field contains a JSON-encoded breakdown by rejection reason:
+ `tooOldLogEventCount` – Events with timestamps older than the retention period
+ `tooNewLogEventCount` – Events with timestamps too far in the future
+ `expiredLogEventCount` – Events that expired during processing

## Best practices


### Batching events


For better performance and efficiency:
+ Batch multiple events in a single request when possible
+ Recommended batch size: 10–100 events per request
+ Maximum request size: 1 MB

### Error handling


Implement proper error handling in your application. Common HTTP status codes:
+ `200 OK` – Logs successfully ingested
+ `400 Bad Request` – Invalid request format or parameters
+ `401 Unauthorized` – Invalid or expired bearer token
+ `403 Forbidden` – Insufficient permissions
+ `404 Not Found` – Log group or stream doesn't exist
+ `429 Too Many Requests` – Rate limit exceeded
+ `500 Internal Server Error` – Service error (retry with exponential backoff)

## Limitations

+ Maximum event size: 256 KB per event
+ Maximum request size: 1 MB
+ Maximum events per request: 10,000
+ Log group names must follow CloudWatch Logs naming conventions
+ If you use bearer token authentication, it must be enabled on the log group.

## Comparison of HTTP ingestion endpoints



| Feature | HLC Logs | ND-JSON Logs | Structured JSON Logs | OpenTelemetry Logs | 
| --- | --- | --- | --- | --- | 
| Path | /services/collector/event | /ingest/bulk | /ingest/json | /v1/logs | 
| Content-Type | application/json | application/json or application/x-ndjson | application/json | application/json or application/x-protobuf | 
| Timestamp field | "time" (seconds) | "timestamp" (milliseconds) | "timestamp" (milliseconds) | "timeUnixNano" (nanoseconds) | 
| Required fields | "event" | None | None | OTLP structure ("resourceLogs") | 
| Partial success response | No | Yes | Yes | Yes | 
| Query parameter support | Yes | Yes | Yes | No (headers only) | 
| Entity metadata | Yes | Yes | Yes | No | 
| Accepts primitives | No | Yes | No | No | 
| Line-based parsing | No | Yes | No | No | 
| Protobuf support | No | No | No | Yes | 
| Retry-After header | No | No | No | Yes | 

## Choosing an endpoint

+ **Using HLC format?** Use HLC Logs. Your existing HLC payloads work with minimal changes.
+ **Streaming line-by-line logs?** Use ND-JSON Logs. Best for log pipelines that emit one event per line. Most flexible – accepts any JSON value type.
+ **Sending structured JSON payloads?** Use Structured JSON Logs. Best for applications that produce well-formed JSON objects or arrays.
+ **Already using OpenTelemetry?** Use OpenTelemetry Logs. Accepts OTLP JSON or Protobuf format and supports partial success responses with retry semantics.