
Coverage for Amazon EKS clusters

After you enable Runtime Monitoring and install the GuardDuty security agent (add-on) for EKS either manually or through automated agent configuration, you can start assessing the coverage for your EKS clusters.

Reviewing coverage statistics

The coverage statistics for the EKS clusters associated with your own account or your member accounts represent the percentage of healthy EKS clusters out of all EKS clusters in the selected Amazon Web Services Region. The following equation represents this:

(Healthy clusters/All clusters)*100

Choose one of the access methods to review the coverage statistics for your accounts.

Console
  • Sign in to the Amazon Web Services Management Console and open the GuardDuty console at https://console.amazonaws.cn/guardduty/.

  • In the navigation pane, choose Runtime Monitoring.

  • Choose the EKS clusters runtime coverage tab.

  • On the EKS clusters runtime coverage tab, you can view the coverage statistics aggregated by the coverage status shown in the Clusters list table.

    • You can filter the Clusters list table by the following columns:

      • Cluster name

      • Account ID

      • Agent management type

      • Coverage status

      • Add-on version

  • If the Coverage status of any of your EKS clusters is Unhealthy, the Issue column may include additional information about the reason for the Unhealthy status.

API/CLI
  • Run the ListCoverage API with your own valid detector ID, Region, and service endpoint. You can filter and sort the cluster list using this API.

    • You can change the example filter-criteria by using one of the following options for CriterionKey:

      • ACCOUNT_ID

      • CLUSTER_NAME

      • RESOURCE_TYPE

      • COVERAGE_STATUS

      • ADDON_VERSION

      • MANAGEMENT_TYPE

    • You can change the example AttributeName in sort-criteria to one of the following options:

      • ACCOUNT_ID

      • CLUSTER_NAME

      • COVERAGE_STATUS

      • ISSUE

      • ADDON_VERSION

      • UPDATED_AT

    • You can change the max-results (up to 50).

    • To find the detectorId for your account and current Region, see the Settings page in the GuardDuty console at https://console.amazonaws.cn/guardduty/, or run the ListDetectors API.

    aws guardduty --region us-east-1 list-coverage --detector-id 12abc34d567e8fa901bc2d34e56789f0 --sort-criteria '{"AttributeName": "EKS_CLUSTER_NAME", "OrderBy": "DESC"}' --filter-criteria '{"FilterCriterion":[{"CriterionKey":"ACCOUNT_ID", "FilterCondition":{"EqualsValue":"111122223333"}}] }' --max-results 5
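    For example, to list only the EKS clusters whose coverage status is UNHEALTHY, you can filter on COVERAGE_STATUS. This variation reuses the placeholder detector ID and Region from the preceding example; replace them with your own values.

    aws guardduty --region us-east-1 list-coverage --detector-id 12abc34d567e8fa901bc2d34e56789f0 --filter-criteria '{"FilterCriterion":[{"CriterionKey":"COVERAGE_STATUS", "FilterCondition":{"EqualsValue":"UNHEALTHY"}}] }'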
  • Run the GetCoverageStatistics API to retrieve aggregated coverage statistics based on the statisticsType.

    • You can change the example statisticsType to one of the following options:

      • COUNT_BY_COVERAGE_STATUS – Represents coverage statistics for EKS clusters aggregated by coverage status.

      • COUNT_BY_RESOURCE_TYPE – Represents coverage statistics aggregated based on the type of Amazon Web Services resource in the list.

    • You can also change the example filter-criteria in the command by using one of the following options for CriterionKey:

      • ACCOUNT_ID

      • CLUSTER_NAME

      • RESOURCE_TYPE

      • COVERAGE_STATUS

      • ADDON_VERSION

      • MANAGEMENT_TYPE

    • To find the detectorId for your account and current Region, see the Settings page in the GuardDuty console at https://console.amazonaws.cn/guardduty/, or run the ListDetectors API.

    aws guardduty --region us-east-1 get-coverage-statistics --detector-id 12abc34d567e8fa901bc2d34e56789f0 --statistics-type COUNT_BY_COVERAGE_STATUS --filter-criteria '{"FilterCriterion":[{"CriterionKey":"ACCOUNT_ID", "FilterCondition":{"EqualsValue":"123456789012"}}] }'
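    Similarly, to view coverage statistics aggregated by resource type instead of coverage status, you can change the statistics type in the same command; the detector ID remains a placeholder.

    aws guardduty --region us-east-1 get-coverage-statistics --detector-id 12abc34d567e8fa901bc2d34e56789f0 --statistics-type COUNT_BY_RESOURCE_TYPE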

If the coverage status of your EKS cluster is Unhealthy, see Troubleshooting EKS coverage issues.

Configuring coverage status change notifications

The coverage status of an EKS cluster in your account may show up as Unhealthy. To detect when the coverage status becomes Unhealthy, we recommend that you monitor the coverage status periodically and troubleshoot any issues that you find. Alternatively, you can create an Amazon EventBridge rule that notifies you when the coverage status changes from Unhealthy to Healthy or vice versa. By default, GuardDuty publishes these events to the EventBridge event bus for your account.

Sample notification schema

In an EventBridge rule, you can use the pre-defined sample events and event patterns to receive coverage status notifications. For more information about creating an EventBridge rule, see Create rule in the Amazon EventBridge User Guide.

Additionally, you can create a custom event pattern by using the following example notification schema. Make sure to replace the values for your account. To get notified when the coverage status of your Amazon EKS cluster changes from Healthy to Unhealthy, the detail-type should be GuardDuty Runtime Protection Unhealthy. To get notified when the coverage status changes from Unhealthy to Healthy, replace the value of detail-type with GuardDuty Runtime Protection Healthy.

{ "version": "0", "id": "event ID", "detail-type": "GuardDuty Runtime Protection Unhealthy", "source": "aws.guardduty", "account": "Amazon Web Services account ID", "time": "event timestamp (string)", "region": "Amazon Web Services Region", "resources": [ ], "detail": { "schemaVersion": "1.0", "resourceAccountId": "string", "currentStatus": "string", "previousStatus": "string", "resourceDetails": { "resourceType": "EKS", "eksClusterDetails": { "clusterName": "string", "availableNodes": "string", "desiredNodes": "string", "addonVersion": "string" } }, "issue": "string", "lastUpdatedAt": "timestamp" } }

Troubleshooting EKS coverage issues

If the coverage status for your EKS cluster is Unhealthy, you can view the corresponding error either under the Issue column in the GuardDuty console, or by using the CoverageResource data type.

When working with inclusion or exclusion tags for monitoring your EKS clusters selectively, it may take some time for the tags to sync. This may impact the coverage status of the associated EKS cluster. You can try removing and adding the corresponding tag (inclusion or exclusion) again. For more information, see Tagging your Amazon EKS resources in the Amazon EKS User Guide.
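As a sketch, assuming you use the GuardDutyManaged tag key that Runtime Monitoring uses for selective monitoring, the following commands show how you might remove and re-apply an inclusion tag on a cluster. ClusterArn is a placeholder for your EKS cluster ARN.

aws eks untag-resource --resource-arn ClusterArn --tag-keys GuardDutyManaged

aws eks tag-resource --resource-arn ClusterArn --tags GuardDutyManaged=true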

The structure of a coverage issue is Issue type:Extra information. Typically, an issue includes optional Extra information, such as a specific client-side exception or a description of the issue. Based on the Extra information, the following tables provide the recommended steps to troubleshoot the coverage issues for your EKS clusters.

Issue type (prefix): Addon Creation Failed

Extra information: Addon aws-guardduty-agent is not compatible with current cluster version of cluster ClusterName. Addon specified is not supported.

Recommended troubleshooting steps: Make sure that you're using one of the Kubernetes versions that support deploying the aws-guardduty-agent EKS add-on. For more information, see Kubernetes versions supported by GuardDuty security agent. For information about updating your Kubernetes version, see Updating an Amazon EKS cluster Kubernetes version. To check which add-on versions are available for your cluster's Kubernetes version, see the example commands after this table.

Issue type (prefix): Addon Creation Failed, Addon Updation Failed, or Addon Status Unhealthy

Extra information: EKS Addon issue - AddonIssueCode: AddonIssueMessage

Recommended troubleshooting steps: For information about the recommended steps for a specific add-on issue code, see Addon creation or updation errors later in this topic. For a list of add-on issue codes that you might experience with this issue, see AddonIssue.

Issue type (prefix): VPC Endpoint Creation Failed

Extra information: VPC endpoint creation not supported for shared VPC vpcId

Recommended troubleshooting steps: Runtime Monitoring now supports the use of a shared VPC within an organization. Make sure your accounts meet all the prerequisites. For more information, see Prerequisites for using shared VPC.

Extra information (only when using shared VPC with automated agent configuration): Owner account ID 111122223333 for shared VPC vpcId doesn't have either Runtime Monitoring, automated agent configuration, or both, enabled.

Recommended troubleshooting steps: The shared VPC owner account must enable Runtime Monitoring and automated agent configuration for at least one resource type (Amazon EKS or Amazon ECS (Amazon Fargate)). For more information, see Prerequisites specific to GuardDuty Runtime Monitoring.

Extra information: Enabling private DNS requires both enableDnsSupport and enableDnsHostnames VPC attributes set to true for vpcId (Service: Ec2, Status Code: 400, Request ID: a1b2c3d4-5678-90ab-cdef-EXAMPLE11111).

Recommended troubleshooting steps: Ensure that the VPC attributes enableDnsSupport and enableDnsHostnames are set to true. For more information, see DNS attributes in your VPC. If you're using the Amazon VPC console at https://console.amazonaws.cn/vpc/ to create the Amazon VPC, make sure to select both Enable DNS hostnames and Enable DNS resolution. For more information, see VPC configuration options.

Issue type (prefix): Shared VPC Endpoint Deletion Failed

Extra information: Shared VPC endpoint deletion not allowed for account ID 111122223333, shared VPC vpcId, owner account ID 555555555555.

Recommended troubleshooting steps:
  • Disabling the Runtime Monitoring status of the shared VPC participant account doesn't impact the shared VPC endpoint policy and the security group that exists in the owner account. To delete the shared VPC endpoint and security group, you must disable Runtime Monitoring or automated agent configuration status in the shared VPC owner account.

  • The shared VPC participant account can't delete the shared VPC endpoint and security group hosted in the shared VPC owner account.

Issue type (prefix): Local EKS clusters

Extra information: EKS addons are not supported on local outpost clusters.

Recommended troubleshooting steps: Not actionable. For more information, see Amazon EKS on Amazon Outposts.

Issue type (prefix): EKS Runtime Monitoring enablement permission not granted

Extra information: (may or may not show extra information)

Recommended troubleshooting steps:
  1. If the extra information is available for this issue, fix the root cause and follow the next step.

  2. Toggle EKS Runtime Monitoring to turn it off and then turn it on again. Ensure that the GuardDuty agent also gets deployed, whether automatically through GuardDuty or manually.

Issue type (prefix): EKS Runtime Monitoring enablement resource provisioning in progress

Extra information: (may or may not show extra information)

Recommended troubleshooting steps: Not actionable. After you enable EKS Runtime Monitoring, the coverage status might remain Unhealthy until the resource provisioning step completes. The coverage status gets monitored and updated periodically.

Issue type (prefix): Others (any other issue)

Extra information: Error due to authorization failure

Recommended troubleshooting steps: Toggle EKS Runtime Monitoring to turn it off and then turn it on again. Ensure that the GuardDuty agent also gets deployed, either automatically through GuardDuty or manually.
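The following commands sketch one way to check the add-on compatibility described in the Addon Creation Failed entry above. The cluster name my-cluster and the Region us-east-1 are placeholders; replace 1.29 with the Kubernetes version that the first command returns for your cluster.

aws eks describe-cluster --region us-east-1 --name my-cluster --query cluster.version

aws eks describe-addon-versions --region us-east-1 --addon-name aws-guardduty-agent --kubernetes-version 1.29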

Addon creation or updation errors

For each of the following EKS Addon Issue messages, use the corresponding troubleshooting steps.

EKS Addon Issue - InsufficientNumberOfReplicas: The add-on is unhealthy because it doesn't have the desired number of replicas.

Troubleshooting steps: Using the issue message, you can identify and fix the root cause. You can start by describing the agent pods; for example, use kubectl describe pods to identify the root cause of the pod failure (see the example commands at the end of this topic). After you fix the root cause, retry the step (add-on creation or update).

EKS Addon Issue - AdmissionRequestDenied: admission webhook "validate.kyverno.svc-fail" denied the request: policy DaemonSet/amazon-guardduty/aws-guardduty-agent for resource violation: restrict-image-registries: autogen-validate-registries: ...

Troubleshooting steps:
  1. The Amazon EKS cluster administrator or security administrator must review the security policy that is blocking the add-on update.

  2. You must either disable the controller (webhook) or have the controller accept the requests from Amazon EKS.

EKS Addon Issue - ConfigurationConflict: Conflicts found when trying to apply. Will not continue due to resolve conflicts mode. Conflicts: DaemonSet.apps aws-guardduty-agent - .spec.template.spec.containers[name="aws-guardduty-agent"].image

Troubleshooting steps: When creating or updating the add-on, provide the OVERWRITE resolve conflicts flag. This potentially overwrites any changes that were made directly to the related resources in Kubernetes by using the Kubernetes API. Alternatively, you can first delete the add-on and then reinstall it.

EKS Addon Issue - AccessDenied: priorityclasses.scheduling.k8s.io "aws-guardduty-agent.priorityclass" is forbidden: User "eks:addon-manager" cannot patch resource "priorityclasses" in API group "scheduling.k8s.io" at the cluster scope

Troubleshooting steps: You must add the missing permission to the eks:addon-cluster-admin ClusterRoleBinding manually. Add the following YAML to eks:addon-cluster-admin:

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: eks:addon-cluster-admin
subjects:
  - kind: User
    name: eks:addon-manager
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---

You can now apply this YAML to your Amazon EKS cluster by using the following command:

kubectl apply -f eks-addon-cluster-admin.yaml

EKS Addon Issue - AccessDenied: admission webhook "validation.gatekeeper.sh" denied the request: [all-namespace-must-have-label-owner] All namespaces must have an `owner` label

Troubleshooting steps: You must either disable the controller or have the controller accept the requests from the Amazon EKS cluster. Prior to creating or updating the add-on, you can also create a GuardDuty namespace and add an owner label to it.
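As referenced in the InsufficientNumberOfReplicas troubleshooting steps, the following commands sketch how you might inspect the GuardDuty agent pods. They assume the agent runs in the amazon-guardduty namespace shown in the issue messages above, and aws-guardduty-agent-EXAMPLE is a placeholder pod name taken from the output of the first command.

kubectl get pods -n amazon-guardduty

kubectl describe pod aws-guardduty-agent-EXAMPLE -n amazon-guardduty

kubectl logs aws-guardduty-agent-EXAMPLE -n amazon-guardduty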