Kubernetes sidecars

We often hear about sidecars in the context of Kubernetes pods. A Kubernetes pod can contain multiple containers, which are guaranteed to run on the same machine and share the same network namespace, so they can talk to each other over localhost.
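As a concrete illustration, here is a minimal sketch of a two-container pod (the names and images are placeholders, not from any real deployment): because the containers share the pod's network namespace, the second one can reach the first over localhost.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar            # hypothetical name, for illustration only
spec:
  containers:
    - name: app                     # the "main" application container
      image: example.com/my-app:1.0 # placeholder image
      ports:
        - containerPort: 8080
    - name: log-shipper                  # an extra ("sidecar") container
      image: example.com/log-shipper:1.0 # placeholder image
      # Both containers share the pod's network namespace, so this one
      # could reach the app at http://localhost:8080.
```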

A popular pattern is the “sidecar pattern”. The main container is the application we actually intend to run, and additional containers run alongside it as part of the same pod. Those other containers are called “sidecars” because they provide additional functionality that complements the main application. Some examples I’ve seen in the wild are:

- a service mesh proxy (for example, Envoy injected by Istio) that handles the application’s network traffic;
- a log collector that ships the application’s logs to a central system;
- a monitoring or security agent that observes the application from inside the pod.

Those use cases seem compelling: sidecars let you add functionality to the application without modifying the application itself, and they scale naturally with it, which sounds perfect for the agent use case.

But there are a number of downsides that must be considered.

There is no such thing as a sidecar

Sidecar containers were already mentioned in a blog post on the Kubernetes blog back in 2015. They have been a widespread pattern for years, but Kubernetes itself knows nothing about them: for Kubernetes, all containers in a pod are equal, and there is no concept of a main container and a sidecar.

The Kubernetes community is working on a feature to give sidecar containers first-class support. It is still being discussed and will not be available earlier than Kubernetes 1.20, scheduled for a late 2020 release.

The proposal as it stands today will make it possible to:

- mark specific containers as sidecars, so that they are started (and become ready) before the other containers and are terminated after them;
- let a Job complete even when its sidecar containers are still running.

Those are great additions that will solve real problems with Jobs and with startup and shutdown race conditions. When those features ship, Kubernetes will essentially start to have real sidecars, but that won’t solve all the problems with sidecars.
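For illustration only, the draft proposal discusses marking a container as a sidecar through its lifecycle, roughly along these lines. This is a sketch of the API shape being discussed, not something you can use today, and it may well change before anything ships:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: job-with-sidecar          # hypothetical example
spec:
  containers:
    - name: worker
      image: example.com/worker:1.0 # placeholder image
    - name: proxy
      image: example.com/proxy:1.0  # placeholder image
      lifecycle:
        type: Sidecar # proposed marker (per the draft proposal): start before,
                      # and terminate after, the other containers
```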

The problems with sidecars

If you care about reliability (and I bet you do), you should consider how having more containers in a single pod affects the reliability of the whole pod. All containers in a pod share the same failure domain: given the way pod readiness is computed, if a single container in a pod is not ready, then the whole pod is not ready.

A container is considered ready when it is running and its readiness probe, if it defines one, succeeds. If, for any reason, a sidecar container can’t run, then the whole pod is not ready. The same applies to readiness probes: if the readiness probe of any container in the pod fails, then the whole pod is not ready.
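To make this concrete, here is a sketch (with placeholder names, images, and probe paths) of a pod where both the application and a sidecar define readiness probes. If the sidecar’s probe fails, the pod’s Ready condition turns false and the pod is removed from its Services’ endpoints, even though the application itself is perfectly healthy.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-fragile-sidecar   # hypothetical name
spec:
  containers:
    - name: app
      image: example.com/my-app:1.0        # placeholder image
      readinessProbe:
        httpGet:
          path: /healthz
          port: 8080
    - name: metrics-agent                  # the sidecar
      image: example.com/metrics-agent:1.0 # placeholder image
      readinessProbe:
        httpGet:
          path: /ready
          port: 9100
      # If this probe fails, the whole pod is reported as not ready and is
      # taken out of Service endpoints, even if "app" is healthy.
```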

What does this mean for us? It means that sidecars that are “just adding functionality” can directly impact the readiness (and ultimately the availability, from a user’s perspective) of the application.

Also, when containers are injected dynamically, sidecars lead to the same container being duplicated in every pod, which increases resource overhead and makes it much harder to optimize consumption centrally (without a redeploy).
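As a rough sketch of that cost (the names and numbers below are made up for illustration): every injected sidecar carries its own resource requests, so with 100 replicas you reserve 100 copies of them, and changing them means patching and rolling every workload.

```yaml
# Fragment of an injected sidecar definition (illustrative values only).
# With 100 pod replicas, these requests are reserved 100 times over,
# and tuning them requires redeploying every affected workload.
- name: injected-proxy           # hypothetical injected sidecar
  image: example.com/proxy:1.0   # placeholder image
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
```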

The points above are not the whole story. My friend Sandor has a one-tweet opinion about sidecars that you should read.

With those considerations in mind, sidecars are, in my opinion, less attractive than they look at first glance.

So… are sidecars considered harmful?

If you read “considered harmful” in an article title, it’s probably clickbait. And no, sidecars are not harmful or evil. You should weigh the tradeoffs of what a multi-container pod setup offers and decide whether it is worth it, or whether you can do without it. And remember that nothing is free: more containers, more problems.