Installing the Calico network policy engine add-on
Project Calico is a network policy engine for Kubernetes that you can use to implement network segmentation and tenant isolation in your Amazon EKS cluster.
Considerations
- Calico is not supported when using Fargate with Amazon EKS.
- Calico adds rules to iptables on the node that may be higher priority than existing rules that you've already implemented outside of Calico. Consider adding your existing iptables rules to your Calico policies to avoid having rules outside of Calico policy overridden by Calico.
- If you're using version 1.10 or earlier of the Amazon VPC CNI add-on with security groups for Pods, traffic flow to Pods on branch network interfaces isn't subject to Calico network policy enforcement and is limited to Amazon EC2 security group enforcement only. If you're using version 1.11.0 or later of the Amazon VPC CNI add-on, traffic flow to Pods on branch network interfaces is subject to Calico network policy enforcement if you set POD_SECURITY_GROUP_ENFORCING_MODE=standard for the Amazon VPC CNI add-on, as shown in the example after this list.
- The IP family setting for your cluster must be IPv4. You can't use the Calico network policy engine add-on if your cluster was created to use the IPv6 family.
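For example, one way to set that mode on a self-managed Amazon VPC CNI installation is with kubectl set env, the same pattern used for ANNOTATE_POD_IP later in this topic. This is a minimal sketch; if you manage the VPC CNI as an Amazon EKS add-on, you may prefer to set the variable through the add-on's configuration instead so it isn't overwritten.

# Set the enforcing mode on the aws-node DaemonSet (Amazon VPC CNI).
kubectl set env daemonset aws-node -n kube-system POD_SECURITY_GROUP_ENFORCING_MODE=standard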
Prerequisites
- An existing Amazon EKS cluster. To deploy one, see Getting started with Amazon EKS.
- The kubectl command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is 1.26, you can use kubectl version 1.25, 1.26, or 1.27 with it. To install or upgrade kubectl, see Installing or updating kubectl.
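If you're not sure which versions you're running, the following commands are a quick way to check; my-cluster is a placeholder for your cluster name.

# Check the kubectl client version.
kubectl version --client

# Check the cluster's Kubernetes version (replace my-cluster with your cluster name).
aws eks describe-cluster --name my-cluster --query cluster.version --output text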
The following procedure shows you how to install Calico on Linux nodes in your Amazon EKS cluster. To install Calico on Windows nodes, see Using Calico on Amazon EKS Windows Containers.
Install Calico on your Amazon EKS Linux nodes
Important
Amazon EKS doesn't maintain the charts used in the following procedures. The recommended way to install Calico on Amazon EKS is by using the Calico Operator.
Prerequisite
Helm version 3.0 or later installed on your computer. To install or upgrade Helm, see Using Helm with Amazon EKS.
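You can confirm the Helm version installed on your computer with the following command (assuming helm is on your path).

helm version --short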
To install Calico using Helm
- Install Calico version 3.25 using the Tigera instructions. For more information, see Install Calico in the Calico documentation. A sketch of the typical Helm commands follows this step.
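For reference, the following is a minimal sketch of the upstream Helm install based on the Calico documentation. The chart repository, chart name, and version shown are taken from those instructions and may change; Amazon EKS doesn't maintain them, so follow the Tigera instructions for the authoritative steps.

# Add the Project Calico chart repository and install the Tigera operator.
helm repo add projectcalico https://docs.tigera.io/calico/charts
helm repo update
# Installs Calico v3.25 into the tigera-operator namespace.
helm install calico projectcalico/tigera-operator --version v3.25.0 \
  --namespace tigera-operator --create-namespace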
- View the resources in the tigera-operator namespace.
kubectl get all -n tigera-operator
The example output is as follows.
NAME                                    READY   STATUS    RESTARTS   AGE
pod/tigera-operator-768d489967-6cv58    1/1     Running   0          27m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tigera-operator   1/1     1            1           27m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/tigera-operator-768d489967   1         1         1       27m
The values in the DESIRED and READY columns for the replicaset should match.
- View the resources in the calico-system namespace.
kubectl get all -n calico-system
The example output is as follows.
NAME                                         READY   STATUS    RESTARTS   AGE
pod/calico-kube-controllers-55c98678-gh6cc   1/1     Running   0          4m29s
pod/calico-node-khw4w                        1/1     Running   0          4m29s
pod/calico-node-rrz8k                        1/1     Running   0          4m29s
pod/calico-typha-696bcd55cb-49prr            1/1     Running   0          4m29s
pod/csi-node-driver-6v2z5                    2/2     Running   0          4m29s
pod/csi-node-driver-wrw2d                    2/2     Running   0          4m29s

NAME                                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/calico-kube-controllers-metrics   ClusterIP   None           <none>        9094/TCP   4m23s
service/calico-typha                      ClusterIP   10.100.67.39   <none>        5473/TCP   4m30s

NAME                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/calico-node        2         2         2       2            2           kubernetes.io/os=linux   4m29s
daemonset.apps/csi-node-driver    2         2         2       2            2           kubernetes.io/os=linux   4m29s

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/calico-kube-controllers   1/1     1            1           4m29s
deployment.apps/calico-typha              1/1     1            1           4m29s

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/calico-kube-controllers-55c98678   1         1         1       4m29s
replicaset.apps/calico-typha-696bcd55cb            1         1         1       4m29s
The values in the DESIRED and READY columns for the calico-node daemonset should match. The values in the DESIRED and READY columns for the two replicasets should also match. The number in the DESIRED column for daemonset.apps/calico-node varies based on the number of nodes in your cluster.
- Confirm that the logs for one of each of your calico-node, calico-typha, and tigera-operator Pods don't contain ERROR. Replace the values in the following commands with the values returned in your output for the previous steps.
kubectl logs tigera-operator-768d489967-6cv58 -n tigera-operator | grep ERROR
kubectl logs calico-node-khw4w -c calico-node -n calico-system | grep ERROR
kubectl logs calico-typha-696bcd55cb-49prr -n calico-system | grep ERROR
If no output is returned from the previous commands, then ERROR doesn't exist in your logs and everything should be running correctly.
- If you're using version 1.9.3 or later of the Amazon VPC CNI plugin for Kubernetes, then enable the plugin to add the Pod IP address to an annotation in the calico-kube-controllers-55c98678-gh6cc Pod spec. For more information about this setting, see ANNOTATE_POD_IP on GitHub.
- See which version of the plugin is installed on your cluster with the following command.
kubectl describe daemonset aws-node -n kube-system | grep amazon-k8s-cni: | cut -d ":" -f 3
The example output is as follows.
v1.12.2-eksbuild.1
- Create a configuration file that you can apply to your cluster that grants the aws-node Kubernetes clusterrole the permission to patch Pods.
cat << EOF > append.yaml
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - patch
EOF
- Apply the updated permissions to your cluster.
kubectl apply -f <(cat <(kubectl get clusterrole aws-node -o yaml) append.yaml)
- Set the environment variable for the plugin.
kubectl set env daemonset aws-node -n kube-system ANNOTATE_POD_IP=true
- Delete the calico-kube-controllers-55c98678-gh6cc Pod.
kubectl delete pod calico-kube-controllers-55c98678-gh6cc -n calico-system
- View the Pods in the calico-system namespace again to see the ID of the new calico-kube-controllers Pod that Kubernetes created to replace the calico-kube-controllers-55c98678-gh6cc Pod that you deleted in the previous step.
kubectl get pods -n calico-system
- Confirm that the vpc.amazonaws.com/pod-ips annotation is added to the new calico-kube-controllers Pod. Replace 5cd7d477df-2xqpd with the ID for the Pod returned in a previous step.
kubectl describe pod calico-kube-controllers-5cd7d477df-2xqpd -n calico-system | grep vpc.amazonaws.com/pod-ips
The example output is as follows.
vpc.amazonaws.com/pod-ips: 192.168.25.9
Stars policy demo
This section walks through the Stars policy demo from the Project Calico documentation. The demo creates front-end, back-end, and client services in your cluster, along with a management user interface that shows the available communication paths between the services.
Before you create any network policies, all services can communicate bidirectionally. After you apply the network policies, you can see that the client can only communicate with the front-end service, and the back-end only accepts traffic from the front-end.
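The isolation in this demo comes from default-deny policies applied to the stars and client namespaces later in the procedure. As a simplified sketch only (the authoritative manifests are the policy files applied below), a namespace-wide default-deny ingress policy looks roughly like this:

# Simplified sketch: select every Pod in the namespace and allow no ingress.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector:
    matchLabels: {}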
To run the Stars policy demo
- Apply the front-end, back-end, client, and management user interface services:
kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/00-namespace.yaml
kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/01-management-ui.yaml
kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/02-backend.yaml
kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/03-frontend.yaml
kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/04-client.yaml
- View all Pods on the cluster.
kubectl get pods -A
The example output is as follows. In your output, you should see Pods in the namespaces shown in the following output. Your Pod NAMES and the number of Pods in the READY column are different than those in the following output. Don't continue until you see Pods with similar names and they all have Running in the STATUS column.
NAMESPACE       NAME                  READY   STATUS    RESTARTS   AGE
[...]
client          client-xlffc          1/1     Running   0          5m19s
[...]
management-ui   management-ui-qrb2g   1/1     Running   0          5m24s
stars           backend-sz87q         1/1     Running   0          5m23s
stars           frontend-cscnf        1/1     Running   0          5m21s
[...]
- To connect to the management user interface, forward your local port 9001 to the management-ui service running on your cluster:
kubectl port-forward service/management-ui -n management-ui 9001
- Open a browser on your local system and point it to http://localhost:9001/. You should see the management user interface. The C node is the client service, the F node is the front-end service, and the B node is the back-end service. Each node has full communication access to all other nodes, as indicated by the bold, colored lines.
- Apply the following network policies to isolate the services from each other:
kubectl apply -n stars -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/policies/default-deny.yaml
kubectl apply -n client -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/policies/default-deny.yaml
- Refresh your browser. You see that the management user interface can no longer reach any of the nodes, so they don't show up in the user interface.
- Apply the following network policies to allow the management user interface to access the services:
kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/policies/allow-ui.yaml
kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/policies/allow-ui-client.yaml
- Refresh your browser. You see that the management user interface can reach the nodes again, but the nodes cannot communicate with each other.
- Apply the following network policy to allow traffic from the front-end service to the back-end service (a simplified sketch of this policy follows the command):
kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/policies/backend-policy.yaml
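The backend-policy.yaml manifest applied above is authoritative. As an illustration of the pattern only, a policy of this kind selects the back-end Pods and allows ingress solely from Pods labeled as front-end; the label values and port below are assumptions for illustration, not copied from the manifest.

# Illustrative sketch: allow ingress to back-end Pods only from front-end Pods.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: backend-policy
  namespace: stars
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379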
- Refresh your browser. You see that the front-end can communicate with the back-end.
- Apply the following network policy to allow traffic from the client to the front-end service.
kubectl apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/policies/frontend-policy.yaml
- Refresh your browser. You see that the client can communicate to the front-end service. The front-end service can still communicate to the back-end service.
- (Optional) When you are done with the demo, you can delete its resources.
kubectl delete -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/04-client.yaml
kubectl delete -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/03-frontend.yaml
kubectl delete -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/02-backend.yaml
kubectl delete -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/01-management-ui.yaml
kubectl delete -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/stars-policy/manifests/00-namespace.yaml
Even after deleting the resources, there can still be iptables rules on the nodes that might interfere in unexpected ways with networking in your cluster. The only sure way to remove Calico is to terminate all of the nodes and recycle them. To terminate all nodes, either set the Auto Scaling group desired count to 0, then back up to the desired number, or just terminate the nodes.
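For example, assuming your nodes belong to an Auto Scaling group (the group name below is a placeholder), you can scale the group to zero and then back up with the AWS CLI:

# Scale the Auto Scaling group to 0 to terminate all nodes.
aws autoscaling set-desired-capacity --auto-scaling-group-name my-nodegroup-asg --desired-capacity 0

# Scale back up to recreate the nodes without Calico's iptables rules.
aws autoscaling set-desired-capacity --auto-scaling-group-name my-nodegroup-asg --desired-capacity 2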
Remove Calico
Remove Calico from your cluster using Helm.
helm uninstall calico
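If you installed the chart into a dedicated namespace, such as tigera-operator in the sketch earlier in this topic, include that namespace in the uninstall command:

helm uninstall calico -n tigera-operator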