Installing the Calico network policy engine add-on

Project Calico is a network policy engine for Kubernetes. With Calico network policy enforcement, you can implement network segmentation and tenant isolation. This is useful in multi-tenant environments where you must isolate tenants from each other or when you want to create separate environments for development, staging, and production. Network policies are similar to Amazon security groups in that you can create network ingress and egress rules. Instead of assigning instances to a security group, you assign network policies to pods using pod selectors and labels.
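
For example, the following minimal NetworkPolicy is a sketch of this model: it selects pods labeled app: backend and allows ingress only from pods labeled app: frontend. The policy name, namespace, and labels are hypothetical; substitute your own.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend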

Considerations
  • Calico is not supported when using Fargate with Amazon EKS.

  • Calico adds rules to iptables on the node that may have higher priority than rules that you've already implemented outside of Calico. Consider adding your existing iptables rules to your Calico policies so that rules outside of Calico policy aren't overridden by Calico.

  • If you're using version 1.10 or earlier of the Amazon VPC CNI add-on with security groups for pods, traffic to pods on branch network interfaces isn't subject to Calico network policy enforcement and is limited to Amazon EC2 security group enforcement only. If you're using version 1.11.0 or later of the Amazon VPC CNI add-on, traffic to pods on branch network interfaces is subject to Calico network policy enforcement if you set POD_SECURITY_GROUP_ENFORCING_MODE=standard for the add-on, as shown in the example after this list.

  • The IP family setting for your cluster must be IPv4. You can't use the Calico network policy engine add-on if your cluster was created to use the IPv6 family.
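
  If you need to set that enforcing mode, the following is a minimal sketch; it uses the same kubectl set env pattern as the ANNOTATE_POD_IP step later in this topic. Confirm the variable against the Amazon VPC CNI add-on documentation for your version before applying it.

    kubectl set env daemonset aws-node -n kube-system POD_SECURITY_GROUP_ENFORCING_MODE=standard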

Prerequisites
  • An existing Amazon EKS cluster. To deploy one, see Getting started with Amazon EKS.

  • The kubectl command line tool is installed on your device or in Amazon CloudShell. The version can be the same as, or up to one minor version earlier or later than, the Kubernetes version of your cluster. For example, if your cluster version is 1.23, you can use kubectl version 1.22, 1.23, or 1.24 with it. To install or upgrade kubectl, see Installing or updating kubectl. A quick way to compare versions is shown after this list.
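
  To compare your client and cluster versions, you can run the following. This is a sketch; the --short flag is available in the kubectl releases that this topic covers, though later kubectl releases removed it.

    kubectl version --short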

The following procedure shows you how to install Calico on Linux nodes in your Amazon EKS cluster. To install Calico on Windows nodes, see Using Calico on Amazon EKS Windows Containers.

Install Calico on your Amazon EKS Linux nodes

Important

Amazon EKS doesn't maintain the manifests or charts used in the following procedures. The recommended way to install Calico on Amazon EKS is by using the Calico Operator rather than these charts or manifests. For more information, see Important Announcement: Amazon EKS will no longer maintain and update Calico charts in this repository on GitHub. If you encounter issues during the installation and use of Calico, submit issues to the Calico Operator and Calico projects directly. Before installing any new Calico operator or Calico version on your cluster, always confirm compatibility with Tigera.

  1. You can install Calico using Helm or Kubernetes manifests.

    Helm
    Prerequisite

    Helm version 3.0 or later installed on your computer. To install or upgrade Helm, see Using Helm with Amazon EKS.
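
    You can check your installed version with the standard helm version command:

      helm version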

    To install Calico with Helm
    1. Add the Project Calico Helm repository.

      helm repo add projectcalico https://docs.projectcalico.org/charts
    2. If you previously added the Project Calico repository, update it to get the latest released versions.

      helm repo update
    3. Install version 3.21.4 or later of the Tigera Calico operator and custom resource definitions.

      helm install calico projectcalico/tigera-operator --version v3.21.4
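
    To confirm that the release deployed, you can list your Helm releases; calico is the release name used in the preceding command.

      helm list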
    Manifests

    Apply the Calico manifests to your cluster.

     kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/calico-operator.yaml
     kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/calico-crs.yaml
  2. View the resources in the tigera-operator namespace.

    kubectl get all -n tigera-operator

    The example output is as follows.

    NAME                                   READY   STATUS    RESTARTS   AGE
    pod/tigera-operator-768d489967-6cv58   1/1     Running   0          27m

    NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/tigera-operator   1/1     1            1           27m

    NAME                                         DESIRED   CURRENT   READY   AGE
    replicaset.apps/tigera-operator-768d489967   1         1         1       27m

    The values in the DESIRED and READY columns for the replicaset should match.

  3. View the resources in the calico-system namespace.

    kubectl get all -n calico-system

    The example output is as follows.

    NAME                                           READY   STATUS    RESTARTS   AGE
    pod/calico-kube-controllers-5cd7d477df-2xqpd   1/1     Running   0          40m
    pod/calico-node-bm5fb                          1/1     Running   0          40m
    pod/calico-node-wfww4                          1/1     Running   0          40m
    pod/calico-typha-86c697dbdc-n7qff              1/1     Running   0          40m

    NAME                                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    service/calico-kube-controllers-metrics   ClusterIP   10.100.77.235   <none>        9094/TCP   40m
    service/calico-typha                      ClusterIP   10.100.164.91   <none>        5473/TCP   40m

    NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
    daemonset.apps/calico-node   2         2         2       2            2           kubernetes.io/os=linux   40m

    NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/calico-kube-controllers   1/1     1            1           40m
    deployment.apps/calico-typha              1/1     1            1           40m

    NAME                                                 DESIRED   CURRENT   READY   AGE
    replicaset.apps/calico-kube-controllers-5cd7d477df   1         1         1       40m
    replicaset.apps/calico-typha-86c697dbdc              1         1         1       40m

    The output is slightly different depending on whether you deployed using the Helm chart or manifests. The values in the DESIRED and READY columns for the calico-node daemonset should match. The values in the DESIRED and READY columns for the two replicasets should also match. The number in the DESIRED column for daemonset.apps/calico-node varies based on the number of nodes in your cluster.
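
    To check just the daemonset counts instead of scanning the full listing, you can query it directly; calico-node is the daemonset name shown in the preceding output.

    kubectl get daemonset calico-node -n calico-system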

  4. Confirm that the logs for one of your calico-node, calico-typha, and tigera-operator pods don't contain ERROR. Replace the values in the following commands with the values returned in your output for the previous steps.

    kubectl logs tigera-operator-768d489967-6cv58 -n tigera-operator | grep ERROR
    kubectl logs calico-node-bm5fb -c calico-node -n calico-system | grep ERROR
    kubectl logs calico-typha-86c697dbdc-n7qff -n calico-system | grep ERROR

    If no output is returned from the previous commands, then ERROR doesn't exist in your logs and everything should be running correctly.
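
    As an alternative to checking pods one at a time, you can filter by label. This is a sketch: the k8s-app=calico-node label is the one the operator typically applies to calico-node pods, so confirm it with kubectl get pods -n calico-system --show-labels before relying on it.

    kubectl logs -n calico-system -l k8s-app=calico-node -c calico-node | grep ERROR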

  5. If you're using version 1.9.3 or later of the Amazon VPC CNI plugin for Kubernetes, enable the plugin to add the pod IP address to an annotation in the calico-kube-controllers pod spec. For more information about this setting, see ANNOTATE_POD_IP on GitHub.

    1. See which version of the plugin is installed on your cluster with the following command.

      kubectl describe daemonset aws-node -n kube-system | grep amazon-k8s-cni: | cut -d ":" -f 3

      The example output is as follows.

      v1.10.4-eksbuild.1
    2. Create a configuration file that grants the aws-node Kubernetes clusterrole the permission to patch pods. You apply it to your cluster in the next step.

      cat << EOF > append.yaml
      - apiGroups:
        - ""
        resources:
        - pods
        verbs:
        - patch
      EOF
    3. Apply the updated permissions to your cluster.

      kubectl apply -f <(cat <(kubectl get clusterrole aws-node -o yaml) append.yaml)
    4. Set the environment variable for the plugin.

      kubectl set env daemonset aws-node -n kube-system ANNOTATE_POD_IP=true
    5. Confirm that the annotation was added to the calico-kube-controllers-5cd7d477df-2xqpd pod. Replace 5cd7d477df-2xqpd with the ID for the pod returned in a previous step.

      kubectl describe pod calico-kube-controllers-5cd7d477df-2xqpd -n calico-system | grep vpc.amazonaws.com/pod-ips

      The example output is as follows.

      vpc.amazonaws.com/pod-ips: 192.168.25.9

Stars policy demo

This section walks through the Stars policy demo provided by the Project Calico documentation and isn't necessary for Calico functionality on your cluster. The demo creates a front-end, back-end, and client service on your Amazon EKS cluster. The demo also creates a management graphical user interface that shows the available ingress and egress paths between each service. We recommend that you complete the demo on a cluster that you don't run production workloads on.

Before you create any network policies, all services can communicate bidirectionally. After you apply the network policies, you can see that the client can only communicate with the front-end service, and the back-end only accepts traffic from the front-end.
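
The default-deny policies that you apply in step 5 follow the standard Kubernetes pattern sketched below: the empty podSelector matches every pod in the namespace, and because no ingress rules are listed, all inbound traffic to those pods is denied. The authoritative manifest is at the URL in that step.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector:
    matchLabels: {}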

To run the Stars policy demo
  1. Apply the front-end, back-end, client, and management user interface services:

    kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/00-namespace.yaml
    kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/01-management-ui.yaml
    kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/02-backend.yaml
    kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/03-frontend.yaml
    kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/04-client.yaml
  2. View all pods on the cluster.

    kubectl get pods -A

    The example output is as follows.

    In your output, you should see pods in the namespaces shown in the following output. The names of your pods and the number of pods in the READY column differ from those in the following output. Don't continue until you see pods with similar names, all with Running in the STATUS column.

    NAMESPACE       NAME                  READY   STATUS    RESTARTS   AGE
    ...
    client          client-xlffc          1/1     Running   0          5m19s
    ...
    management-ui   management-ui-qrb2g   1/1     Running   0          5m24s
    stars           backend-sz87q         1/1     Running   0          5m23s
    stars           frontend-cscnf        1/1     Running   0          5m21s
    ...
  3. To connect to the management user interface, forward your local port 9001 to the management-ui service running on your cluster:

    kubectl port-forward service/management-ui -n management-ui 9001
  4. Open a browser on your local system and point it to http://localhost:9001/. You should see the management user interface. The C node is the client service, the F node is the front-end service, and the B node is the back-end service. Each node has full communication access to all other nodes, as indicated by the bold, colored lines.

    
    [Image: Open network policy]
  5. Apply the following network policies to isolate the services from each other:

    kubectl apply -n stars -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/policies/default-deny.yaml kubectl apply -n client -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/policies/default-deny.yaml
  6. Refresh your browser. You see that the management user interface can no longer reach any of the nodes, so they don't show up in the user interface.

  7. Apply the following network policies to allow the management user interface to access the services:

    kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/policies/allow-ui.yaml kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/policies/allow-ui-client.yaml
  8. Refresh your browser. You see that the management user interface can reach the nodes again, but the nodes cannot communicate with each other.

    
    [Image: UI access network policy]
  9. Apply the following network policy to allow traffic from the front-end service to the back-end service:

    kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/policies/backend-policy.yaml
  10. Refresh your browser. You see that the front-end can communicate with the back-end.

    
    [Image: Front-end to back-end policy]
  11. Apply the following network policy to allow traffic from the client to the front-end service.

    kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/policies/frontend-policy.yaml
  12. Refresh your browser. You see that the client can communicate to the front-end service. The front-end service can still communicate to the back-end service.

    
    [Image: Final network policy]
  13. (Optional) When you are done with the demo, you can delete its resources.

    kubectl delete -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/04-client.yaml
    kubectl delete -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/03-frontend.yaml
    kubectl delete -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/02-backend.yaml
    kubectl delete -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/01-management-ui.yaml
    kubectl delete -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/00-namespace.yaml

    Even after you delete these resources, iptables rules created by Calico can remain on the nodes and interfere with cluster networking in unexpected ways. The only sure way to remove Calico is to terminate all of the nodes and recycle them. To terminate all nodes, either set the Auto Scaling group desired count to 0 and then back to the desired number, or terminate the instances directly. If you can't recycle the nodes, see Disabling and removing Calico Policy in the Calico GitHub repository for a last-resort procedure.
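
    For example, with the AWS CLI (my-node-group-asg is a placeholder for your node group's Auto Scaling group name, and 2 is an example node count; wait for the instances to terminate before scaling back up):

    # Scale the node group to 0, then back to the original count.
    aws autoscaling set-desired-capacity --auto-scaling-group-name my-node-group-asg --desired-capacity 0
    aws autoscaling set-desired-capacity --auto-scaling-group-name my-node-group-asg --desired-capacity 2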

Remove Calico

Remove Calico using the same method that you used to install it.

Helm

Remove Calico from your cluster.

helm uninstall calico
Manifests

Remove Calico from your cluster.

kubectl delete -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/calico-crs.yaml
kubectl delete -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/calico-operator.yaml