Kpt, packages, YAMLs
A few days ago, Google announced Kpt, a tool “for Kubernetes packaging that uses a standard format to bundle, publish, customize, update, and apply configuration manifests”. I felt the urge to write a few words about the problem space, with no goal of being exhaustive… so here I am.
Kubernetes packaging
The whole Kubernetes ecosystem seems to be obsessed with the “packaging” problem. First Helm came out, providing Homebrew-like functionality. CNAB is a spec for packaging distributed applications on Kubernetes. And there are probably more. What matters is that there have been multiple attempts at defining how to package an application or multiple applications. While it is important to have a single way to deploy an application, and while reuse across different repositories is definitely useful, an application is often a fluid concept. It grows. People want to reuse parts of the configuration in other apps, but change a million things at the same time.
Well, I think that often a “package” is a cage. The analogy with Homebrew is especially wrong, in my opinion: installing an application on a desktop is one story; running something on a production system is another. I have no SLA on how vim runs on my machine, and Homebrew normally offers no customization flags.
Helm, CNAB and the others, on the other hand, work in a totally different space.
I have to confess that I was heavily biased against Helm and the others for exactly this reason: they make it look like a `helm install` is enough to have core components running on your production system. The reality is much more complicated, and depends mostly on where you are deploying, what your availability requirements are, and what you are using charts/packages for.
The issue I have with Helm charts is that they hide the generated Kubernetes YAMLs, but not completely. `helm install` talks directly to the cluster, but if you have problems with the cluster you will have to `kubectl` your way through it. This is to say that Helm doesn’t build a real abstraction, and as such it exposes the entire complexity of Kubernetes while providing the false sense that “it’s easy to get X up and running”. No, it’s not, and for good reason.
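Since a chart ultimately expands to plain Kubernetes YAML, one way to keep that output visible is to render it locally before anything touches the cluster. A minimal sketch, assuming Helm 3 with a chart checked out at ./mychart (the release name and path are illustrative):

```sh
# Render the chart to plain YAML first, so that whatever reaches the
# cluster can be inspected and debugged with kubectl alone.
helm template my-release ./mychart > rendered.yaml
kubectl apply -f rendered.yaml
```

Once the manifests are plain files, the contract is back to the Kubernetes API and nothing else.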
Of course, there are more shortcomings to using something like Helm: we complicate the ecosystem. Every time a tutorial starts with `helm install something`, it requires every user to install, learn and understand Helm. I see this as a problem, because instead of simplifying the procedure to get something up and running, we introduce additional tools, which are complexity per se. If we believe we need those tools because Kubernetes doesn’t do certain things, we should probably try to understand why Kubernetes doesn’t support those features, and whether there is anything we can do to contribute them to the core project.
Or to build something completely different on top of it. After all, Kubernetes is a platform for building platforms, isn’t it?
Manifests and Kustomize
I’m obsessed with making things as simple as they can be. Or maybe I’m just obsessed with exactly the opposite: complexity. Any additional tool introduces something that users need to learn and consider when they are operating a system. That is cognitive load on the path to understanding what will happen when they do X. Complexity, by definition.
In that regard, I have often praised Kustomize. It allows us to start with non-templated resources that are valid and can be used independently, customize them, and render them back as modified resources. While the tool has a lot of features and is by no means the definition of simplicity, it has clear inputs and outputs: Kubernetes resources go in, Kubernetes resources come out. No weird templated things, nothing new. Moreover, it keeps the contract that users have with Kubernetes intact: the Kubernetes API (and its resources), nothing more.
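To make the “resources in, resources out” point concrete, here is a minimal sketch of what a Kustomize setup can look like; all file names below are hypothetical, and the base files are valid manifests you could apply on their own:

```sh
# A hypothetical minimal layout: plain, valid resources plus a
# kustomization that patches them. Input and output are both ordinary
# Kubernetes YAML; there is no templating step in between.
cat > kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml        # a standalone, valid Deployment
  - service.yaml           # a standalone, valid Service
patchesStrategicMerge:
  - replica-patch.yaml     # e.g. bump replicas for production
EOF

kustomize build .          # prints the patched, plain resources to stdout
```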
Back to Kpt
It’s unclear whether we are going to benefit from having yet another tool. The space is crowded, very crowded: there are at least 122 other tools for application management on Kubernetes, and that does not even count all of the closed-source tools that companies have developed internally.
As Brian Grant says, at this point it couldn’t hurt, and I agree. It could teach us things and inspire others. But I still believe that these kinds of tools are deeply tied to the internal structure of the organization adopting them, and to how far that organization has gotten in developing home-grown workflows to operate applications on Kubernetes.
I can’t help but see this as a missed opportunity to improve `kubectl apply` and `kubectl wait`. Those two basic commands that everyone uses lack basic functionality, and we keep rebuilding it somewhere else, many, many times.
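The pattern that keeps being rebuilt is, roughly, “apply and then block until the workload is actually ready”, since `kubectl apply` returns as soon as the API server accepts the objects. A sketch of what everyone ends up scripting (names, paths and timeouts are placeholders):

```sh
# Push the manifests, then wait for the rollout to actually converge.
kubectl apply -f manifests/
kubectl rollout status deployment/myapp --timeout=120s

# Or wait on an explicit condition:
kubectl wait --for=condition=Available deployment/myapp --timeout=120s
```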
What’s helpful then?
I’m happy to see that with `kpt` we are talking about building blocks and workflows. There is still no one tool that fits all, and there probably never will be. Even committing to one tool or another makes very little sense. What’s important, IMO, is that:
- tools should be composable: you don’t want to bet everything on a single tool, as tools come and go.
- the steps that make up your workflow matter: you want clear building blocks, for example “render manifests”, “rollout”, “rollback” and so on. How those are implemented is really left to your creativity (see the sketch after this list).
- identifying what you mean by an application and its dependencies, and how they map to your system, is very important: the “application” is a concept that does not exist in Kubernetes.
- raw resources are still the way to go: there is no one true abstraction out there, and as such hiding the complexity of Kubernetes’ resources can complicate things rather than simplify them.
- building sugar/helpers/whatever for you and/or for your organization is a good idea, even if it’s the third time you write the same code.
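To show what composable building blocks can look like in practice, here is a sketch in which rendering, rollout and rollback are independent stages, each replaceable on its own (all paths and names are hypothetical):

```sh
# "Render" and "rollout" as separate, swappable stages: any renderer
# that emits plain resources can feed any applier.
kustomize build overlays/prod > rendered.yaml   # render step
kubectl apply -f rendered.yaml                  # rollout step

# Rollback is its own building block, and stays plain kubectl too.
kubectl rollout undo deployment/myapp
```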
That’s it. I have many other things to write, but it’s quarantine time and my tech energy is low :-)