Services or capabilities described in Amazon Web Services documentation might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China.

Installing Calico on Amazon EKS

Project Calico is a network policy engine for Kubernetes. With Calico network policy enforcement, you can implement network segmentation and tenant isolation. This is useful in multi-tenant environments where you must isolate tenants from each other or when you want to create separate environments for development, staging, and production. Network policies are similar to Amazon security groups in that you can create network ingress and egress rules. Instead of assigning instances to a security group, you assign network policies to pods using pod selectors and labels. The following procedure shows you how to install Calico on Linux nodes in your Amazon EKS cluster. To install Calico on Windows nodes, see Using Calico on Amazon EKS Windows Containers.
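As a point of reference, a standard Kubernetes NetworkPolicy that Calico can enforce selects pods by label and defines ingress or egress rules. The following is an illustrative sketch only (the names, labels, and port are hypothetical, not taken from this guide):

```yaml
# Hypothetical example: allow ingress to pods labeled app=backend
# only from pods labeled app=frontend, on TCP port 8080.
# All other ingress to the selected pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

This is the security-group analogy in practice: the `podSelector` plays the role of group membership, and the `ingress` rule plays the role of an inbound rule referencing another group.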

  • Calico is not supported when using Fargate with Amazon EKS.

  • Calico adds rules to iptables on the node that may take priority over iptables rules that you've already implemented outside of Calico. To avoid having Calico override those rules, consider re-creating them as Calico policies.

  • If you're using security groups for pods, traffic to pods on branch network interfaces isn't subject to Calico network policy enforcement and is limited to Amazon EC2 security group enforcement only. A community effort is underway to remove this limitation.

To install Calico on your Amazon EKS Linux nodes

  1. Download, modify, and apply the Calico manifests to your cluster.

    1. Download the Calico manifests with the following commands.

      curl -o calico-operator.yaml
      curl -o calico-crs.yaml
    2. Modify the manifests.

      1. View the manifest file or files that you downloaded and note the names of the images. Download the images locally with the following commands.

        docker pull
        docker pull
        docker pull
      2. Tag the images to be pushed to an Amazon Elastic Container Registry repository in China with the following command.

        docker tag image:<tag> <aws_account_id>.dkr.ecr.<cn-north-1>.amazonaws.com.cn/image:<tag>
      3. Push the images to a China Amazon ECR repository with the following command.

        docker push <aws_account_id>.dkr.ecr.<cn-north-1>.amazonaws.com.cn/image:<tag>
      4. Update the calico-operator.yaml file to reference the Amazon ECR image URL in your Region.

      5. Update the calico-crs.yaml file to reference the Amazon ECR image repository in your Region by adding the following to the spec.

        registry: <aws_account_id>.dkr.ecr.<cn-north-1>.amazonaws.com.cn
    3. Apply the Calico manifests. These manifests create DaemonSets in the calico-system namespace.

      kubectl apply -f calico-operator.yaml
      kubectl apply -f calico-crs.yaml
  2. Watch the calico-system DaemonSets and wait for the calico-node DaemonSet to have the DESIRED number of pods in the READY state. When this happens, Calico is working.

    kubectl get daemonset calico-node --namespace calico-system



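The tag-and-push sub-steps above hinge on constructing the China-Region Amazon ECR image URL correctly. A minimal sketch, assuming a hypothetical account ID, region, image name, and tag (the docker commands are shown commented out because they require Docker and ECR credentials):

```shell
# Sketch: build the China-Region ECR URL for a mirrored image.
# ACCOUNT_ID, REGION, IMAGE, and TAG are placeholder assumptions.
ACCOUNT_ID=111122223333
REGION=cn-north-1
IMAGE=calico/node
TAG=v3.25.1

# China-Region ECR registries use the .amazonaws.com.cn domain.
ECR_IMAGE="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com.cn/${IMAGE}:${TAG}"
echo "${ECR_IMAGE}"

# docker tag "${IMAGE}:${TAG}" "${ECR_IMAGE}"   # retag the locally pulled image
# docker push "${ECR_IMAGE}"                    # push to the China ECR repository
```

The same URL (minus the image name and tag) is what goes into the `registry` field of the calico-crs.yaml spec.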
To delete Calico from your Amazon EKS cluster

If you are finished using Calico in your Amazon EKS cluster, you can delete it with the following commands:

kubectl delete -f calico-crs.yaml
kubectl delete -f calico-operator.yaml

Stars policy demo

This section walks through the Stars policy demo provided by the Project Calico documentation. The demo creates a frontend, backend, and client service on your Amazon EKS cluster. The demo also creates a management GUI that shows the available ingress and egress paths between each service.

Before you create any network policies, all services can communicate bidirectionally. After you apply the network policies, you can see that the client can only communicate with the frontend service, and the backend only accepts traffic from the frontend.
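The isolation pattern that the demo illustrates is commonly built from a default-deny policy per namespace plus targeted allow rules. A sketch of a default-deny ingress policy (illustrative only, not one of the demo's actual manifests):

```yaml
# Hypothetical default-deny policy: the empty podSelector matches every
# pod in the namespace, and listing Ingress with no ingress rules means
# all inbound traffic is blocked until a more specific policy allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: stars
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```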

To run the Stars policy demo

  1. Apply the frontend, backend, client, and management UI services:

    kubectl apply -f
    kubectl apply -f
    kubectl apply -f
    kubectl apply -f
    kubectl apply -f
  2. Wait for all of the pods to reach the Running status:

    kubectl get pods --all-namespaces --watch
  3. To connect to the management UI, forward your local port 9001 to the management-ui service running on your cluster:

    kubectl port-forward service/management-ui -n management-ui 9001
  4. Open a browser on your local system and point it to http://localhost:9001/. You should see the management UI. The C node is the client service, the F node is the frontend service, and the B node is the backend service. Each node has full communication access to all other nodes (as indicated by the bold, colored lines).

    [Figure: Open network policy]
  5. Apply the following network policies to isolate the services from each other:

    kubectl apply -n stars -f
    kubectl apply -n client -f
  6. Refresh your browser. You see that the management UI can no longer reach any of the nodes, so they don't show up in the UI.

  7. Apply the following network policies to allow the management UI to access the services:

    kubectl apply -f
    kubectl apply -f
  8. Refresh your browser. You see that the management UI can reach the nodes again, but the nodes cannot communicate with each other.

    [Figure: UI access network policy]
  9. Apply the following network policy to allow traffic from the frontend service to the backend service:

    kubectl apply -f
  10. Apply the following network policy to allow traffic from the client namespace to the frontend service:

    kubectl apply -f
    [Figure: Final network policy]
  11. (Optional) When you are done with the demo, you can delete its resources with the following commands:

    kubectl delete -f
    kubectl delete -f
    kubectl delete -f
    kubectl delete -f
    kubectl delete -f

    Even after deleting the resources, there can still be iptables rules on the nodes that might interfere in unexpected ways with networking in your cluster. The only sure way to remove Calico is to terminate all of the nodes and recycle them. To terminate all nodes, either set the Auto Scaling Group desired count to 0, then back up to the desired number, or just terminate the nodes. If you are unable to recycle the nodes, then see Disabling and removing Calico Policy in the Calico GitHub repository for a last resort procedure.