Using Alternative Provisioners

Please note: this version of the EKS and Karpenter workshop is now deprecated since the launch of the Karpenter v1beta APIs, and has been updated to a new home on AWS Workshop Studio here: Karpenter: Amazon EKS Best Practice and Cloud Cost Optimization.

This workshop remains here as a reference for those who have used it before, or for those who want to refer back to it for running Karpenter on the v1alpha5 API.

So far we have seen some advanced use cases of Karpenter. In this section we will see how Karpenter can define multiple Provisioners, each handling a different node configuration.

Each Provisioner custom resource provides a unique set of configurations that define the capacity it can launch, as well as the labels and taints that will be applied to the nodes created by that Provisioner. In large clusters running multiple applications, a new application may need nodes with specific taints or labels. In these scenarios you can configure alternative Provisioners. For this workshop we have already defined a team1 Provisioner. You can list the available Provisioners by running the following command:

kubectl get provisioners
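
For reference, a team1 Provisioner on the v1alpha5 API might look similar to the sketch below. This is only an illustration of the general shape of such a Provisioner: the providerRef name, labels, and requirements shown here are assumptions, and the manifest pre-installed for this workshop may differ.

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: team1
spec:
  # Taint applied to every node this Provisioner launches; only pods
  # tolerating team1 can be scheduled on those nodes
  taints:
    - key: team1
      effect: NoSchedule
  # Assumed for illustration; the workshop manifest may define different values
  labels:
    intent: apps
  requirements:
    - key: kubernetes.io/arch
      operator: In
      values: ["amd64"]
  # Assumed reference to an AWSNodeTemplate named "default"
  providerRef:
    name: default
  ttlSecondsAfterEmpty: 30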

Creating a Deployment that uses the team1 Provisioner

Let’s create a new deployment, this time forcing it to use the team1 Provisioner.

cat <<EOF > inflate-team1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate-team1
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate-team1
  template:
    metadata:
      labels:
        app: inflate-team1
    spec:
      nodeSelector:
        intent: apps
        kubernetes.io/arch: amd64
        karpenter.sh/provisioner-name: team1
      containers:
      - image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
        name: inflate-team1
        resources:
          requests:
            cpu: "1"
            memory: 256M
      tolerations:
      - key: team1
        operator: Exists
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app: inflate-team1
        maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
EOF
kubectl apply -f inflate-team1.yaml
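
At this point the deployment exists with replicas set to 0, so Karpenter has not launched any capacity yet. You can confirm the deployment was created by running:

kubectl get deployment inflate-team1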

There are quite a few new settings in this deployment! Let’s cover a few of them here; the rest we will discuss in the Challenge section.

  • The deployment sets the nodeSelector intent: apps so that this application does not land on the Managed Node Group created with the cluster.

  • The nodeSelector karpenter.sh/provisioner-name: team1 is also set. This nodeSelector tells Karpenter which Provisioner configuration to use when provisioning capacity for this deployment. It allows different Provisioners to, for example, define different taints.

  • The nodeSelector kubernetes.io/arch has been set to amd64 so that the pods land on x86_64 instances.

  • If you recall, the team1 Provisioner has a section that defines a team1 taint. The deployment must also add the matching toleration so its pods are allowed onto the newly created instances carrying the team1: NoSchedule taint. NoSchedule means that only pods tolerating the team1 taint will be scheduled on those nodes. The commands after this list show one way to verify this once new nodes appear.
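
Once you scale the deployment up and Karpenter provisions capacity for it, you can verify the Provisioner label and the taint on the new nodes with standard kubectl commands, for example (replace the node name placeholder with one of the newly created nodes):

kubectl get nodes -L karpenter.sh/provisioner-name,intent
kubectl describe node <new-node-name> | grep Taints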

Challenge

You can use Kube-ops-view or the plain kubectl CLI to visualize the changes and answer the questions below. In the answers we provide the CLI commands that will help you check the responses. Remember: to get the URL of kube-ops-view you can run the following command:

kubectl get svc kube-ops-view | tail -n 1 | awk '{ print "Kube-ops-view URL = http://"$4 }'

Answer the following questions. You can expand each question to get a detailed answer and validate your understanding.

1) How would you scale the inflate-team1 deployment to 4 replicas?

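One way to do this is with kubectl scale:

kubectl scale deployment inflate-team1 --replicas 4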

2) Which Provisioner did Karpenter use? Which nodes were selected?

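One way to check is to list the most recently created nodes together with their Provisioner, instance type, and zone labels:

kubectl get nodes --sort-by=.metadata.creationTimestamp -L karpenter.sh/provisioner-name,node.kubernetes.io/instance-type,topology.kubernetes.io/zone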

3) Why did Karpenter split the creation into two different nodes instead of bin-packing like in the previous scenarios?

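As a hint, you can check which nodes and Availability Zones the inflate-team1 pods landed on and compare that with the deployment's topologySpreadConstraints:

kubectl get pods -l app=inflate-team1 -o wide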

4) How would you scale both deployments back down to 0 replicas?

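A minimal sketch, assuming the deployment used in the previous sections is named inflate (adjust the name if yours differs):

kubectl scale deployment inflate inflate-team1 --replicas 0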

What have we learned in this section:

In this section we have learned:

  • Applications requiring specific labels or Taints can make use of alternative Provisioners that are customized for that set of applications. This is a common setup for large clusters.

  • Pods can select the Provisioner by setting a nodeSelector with the label karpenter.sh/provisioner-name pointing to the desired Provisioner.

  • Karpenter supports topologySpreadConstraints. Topology spread constraints instruct the kube-scheduler how to place each incoming pod in relation to the existing pods across your cluster. In this scenario we used them to balance pods across Availability Zones.

  • For the AL2, Ubuntu, and Bottlerocket AMIs, Karpenter does the heavy lifting of managing the underlying Launch Templates and keeping the AMIs up to date. Karpenter also allows us to configure extra bootstrapping parameters without having to manage Launch Templates ourselves, which significantly simplifies the life-cycle management and patching of EC2 instances while removing the heavy lifting required to apply additional bootstrapping parameters.