EKS "Review"
NOTE: these are just early impressions of a very promising product that I am very happy to see available, and I am sure it will get better over time compared to what it is today.
Today I spent some time playing with EKS (Elastic Container Service for Kubernetes), the newly announced Kubernetes service from AWS, and I decided to write some notes down, not only for my team but for whoever is interested in knowing more details about EKS.
I started by playing with the official tutorial that can be found here. Right from the beginning I was pretty surprised by the lack of automation in the whole process: the UI or CLI only help you get a very basic control plane up and running, nothing more, while leaving a lot of steps to be executed manually.
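To give an idea of how little that covers, this is roughly the single call behind the "create cluster" step, sketched with the plain AWS CLI; the cluster name, role ARN, subnet IDs and security group ID are placeholders for resources you have to create yourself beforehand:

# create just the control plane; everything referenced here must already exist
aws eks create-cluster \
  --name my-eks-cluster \
  --role-arn arn:aws:iam::123456789012:role/eks-service-role \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc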
While I was waiting for the cluster to be ready, I didn't get much feedback on what was going on. In general, I'd love to see AWS take a shot at a more user-friendly experience: for example, putting a link to the documentation or tutorial directly in the UI, so that users can jump to interesting material while the cluster is still being created. In the end, EKS was not only built for people who already know Kubernetes and how to operate it, but also for the other AWS users out there who are interested in containers and container orchestration and don't want to operate a complex system like Kubernetes.
The getting started guide is unfortunately not super friendly from that point of view, but there are many other tutorials on the web that can get you up and running with the basics of Kubernetes.
Something that surprised me as well is that there is no button to download the kubeconfig. This is the configuration file needed by kubectl (which I pronounce koob-cee-tee-el because I can't unlearn this way of pronouncing it) to connect to the cluster; it contains the name of the cluster and other connection information.
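To give an idea, here is a minimal sketch of the kubeconfig you end up writing by hand, following the shape described in the getting started guide; the cluster name, endpoint URL and CA data are placeholders you copy from the UI:

# write a minimal kubeconfig that authenticates through Heptio's authenticator
cat > ./eks-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: my-eks-cluster
  cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
contexts:
- name: aws
  context:
    cluster: my-eks-cluster
    user: aws
current-context: aws
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args: ["token", "-i", "my-eks-cluster"]
EOF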
This file would be pretty easy to generate from the information that is available in the UI, and it would be trivial to make it available for download so that we could get started easily. In a similar way, getting started is not exactly easy given that we have to download kubectl and Heptio's authenticator ourselves. As a comparison, GKE on Google Cloud Platform has a way to easily get both kubectl and the relevant configuration via the official gcloud tool, which is indeed very handy.
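With EKS, by contrast, you end up checking the two tools by hand; a quick sanity check looks something like this (the cluster name is a placeholder):

# verify the client-side tooling the guide asks you to install manually
kubectl version --client
heptio-authenticator-aws token -i my-eks-cluster   # prints the token kubectl passes to the API server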
In my case, the cluster creation took quite a while, around 15 minutes. That's much more than I was used to on AWS with tools like kops, which takes around 5 minutes to get a cluster with very similar characteristics up and running. It's true, though, that this is not an operation we are going to do very often, so it doesn't really make a huge difference.
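Since there is so little feedback in the UI, what I ended up doing was polling the state from the CLI, roughly like this (again, the cluster name is a placeholder):

# wait until the control plane reports ACTIVE
while [ "$(aws eks describe-cluster --name my-eks-cluster \
    --query cluster.status --output text)" != "ACTIVE" ]; do
  sleep 30
done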
After the cluster creation, what we really get is a highly available control plane that we can't see or modify: it is hosted somewhere in the AWS infrastructure, but we have no access to it. To finish the cluster setup, we need to create worker nodes and attach them to the cluster so it can actually run some applications. For that, AWS provides a CloudFormation template that spins up the nodes easily. The output of the CloudFormation template is an AWS IAM Role that needs to be put in a ConfigMap. Why do we need that?
Well, the nodes need to somehow join the cluster, and EKS uses the IAM identity together with Heptio's AWS authenticator to allow them to do that. That's very AWS-native and, in my opinion, quite nice compared to other setups that use shared tokens or plain certificates for the same purpose.
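For reference, the ConfigMap has the shape below, following the getting started guide; the role ARN is the placeholder you replace with the CloudFormation output:

# map the node IAM role so that kubelets are allowed to join the cluster
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/eks-node-instance-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF
kubectl get nodes --watch   # the workers should show up and go Ready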
Following all those steps, we finally have a working cluster. Was this painful? A bit, especially given that there are plenty of tools out there that have made the experience of creating Kubernetes clusters really easy.
In that regard, I tried eksctl from the folks at Weaveworks as well, and the whole experience was so much better!
You can easily create a cluster with:
eksctl create cluster
While the whole process is still not super fast, you get a cluster, including the worker nodes, with that one simple command. In my case, the command failed to set up a kubeconfig because it tried to use my local kubectl, which has a weird version (it's a version I compiled by hand, but that's a story for another time). The rest worked pretty smoothly.
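For completeness, eksctl also takes flags to control the result a bit more; a hypothetical invocation could look like this (name, region, instance type and node count are just example values, and writing the kubeconfig to a dedicated file keeps it separate from your local configuration):

eksctl create cluster \
  --name demo-cluster \
  --region us-west-2 \
  --node-type m5.large \
  --nodes 2 \
  --kubeconfig ./eks-kubeconfig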
The setup we get once an EKS cluster is running is really simple. Here is the list of pods that were running in my cluster (all in the kube-system namespace):
NAME                       READY   STATUS    RESTARTS   AGE
aws-node-6vrp2             1/1     Running   1          6h
aws-node-k2dc2             1/1     Running   1          6h
kube-dns-7cc87d595-cj9js   3/3     Running   0          6h
kube-proxy-8nccp           1/1     Running   0          6h
kube-proxy-lshj8           1/1     Running   0          6h
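For the record, that listing is simply the output of:

kubectl get pods --namespace kube-system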
This is very minimal and has very little overhead, but it is also not a really complete setup: it lacks support for Ingress and has nothing in place to set up IAM roles for pods. This is a bit surprising, as AWS has talked multiple times about AWS IAM in Kubernetes, and I assumed they would start with Kube2IAM and then improve the setup with something different.
The decision on AWS IAM was probably to let users roll their own solution from the ones available as open-source projects, while leaving the door open to develop a better solution together with the community. AWS is already working on it, and some details of a possible future implementation are available in this Google document that was discussed in the last sig-aws meeting.
Apart from some rough edges, I'm happy to finally be able to use EKS, and I'm looking forward to seeing it available in other regions. From my experience using Kubernetes since version 1.0, operating the control plane is not trivial, especially at scale and in an HA setup, which is why such an offering was so requested by AWS customers. Hopefully we will keep moving towards a more managed solution in which managing the cluster is just a detail we don't have to care about.