Evolution of kube
clux · February 28, 2021 · [software] #rust #kubernetes

After a quarter year of extensive improvements to kube, it's time to take a bird's-eye view of what we've got, and showcase some of the recent improvements. After all, it's been about 40 kube releases, one major version of tokio, one extremely prolific new contributor, and one kubecon talk since my (very outdated) last blog post.
Crates
As of 0.51.0, with modules and crates now delineated better, there are now multiple crates in the repository:
- kube: Api types, and a Client library with a Config
- kube-derive: proc-macro to derive CustomResource necessities from a struct
- kube-runtime: stream based controller runtime
Today, we will focus on kube.
kube::Api
Let's start with the basic feature you'd expect from a client library, the Api.
Its goals:
- allow interaction with any kubernetes resource
- stay compatible with any struct from k8s-openapi
- defer IO handling to the injected Client
- map Client output through serde
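For a feel of what that amounts to, here is a minimal sketch of listing pods through the generic Api (the namespace and printout are illustrative):

```rust
use k8s_openapi::api::core::v1::Pod;
use kube::{api::{Api, ListParams}, Client};

async fn list_pods(client: Client) -> Result<(), kube::Error> {
    // the same generic Api works for any k8s-openapi resource type
    let pods: Api<Pod> = Api::namespaced(client, "default");
    for p in pods.list(&ListParams::default()).await?.items {
        println!("found pod: {:?}", p.metadata.name);
    }
    Ok(())
}
```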
We make this generic by making two assumptions about kubernetes:
- all objects have metadata from apimachinery/meta/types
- client-go api generation is generics in disguise (generated api)
We won't cover this now, but you can watch the talk from KubeCon2020: The Hidden Generics in Kubernetes' API (or read the slides).
Our Api has been remarkably stable over the past year, despite the internals being restructured heavily.
One improvement is to the ergonomics of patching, which now has a typed Patch enum for selecting the patch type.
Despite supporting every patch type, we always advocate for server-side apply: a lot of the awkward issues with local patching are generally swept under the rug by this clearly superior patch mode, present in newer versions of kubernetes.
Here's roughly how patching with server-side apply looks today (a minimal sketch; the pod name, field manager, and patch body are illustrative):
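```rust
use k8s_openapi::api::core::v1::Pod;
use kube::api::{Api, Patch, PatchParams};
use serde_json::json;

async fn apply_example(pods: Api<Pod>) -> Result<(), kube::Error> {
    // a partial object; with server-side apply the apiserver merges it for us
    let patch = json!({
        "apiVersion": "v1",
        "kind": "Pod",
        "spec": { "activeDeadlineSeconds": 5 }
    });
    // "mycontroller" is an illustrative field manager name
    let params = PatchParams::apply("mycontroller");
    pods.patch("blog-pod", &params, &Patch::Apply(&patch)).await?;
    Ok(())
}
```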
The biggest Api improvement recently was the inclusion of the most complicated subresources: Api::exec and Api::attach. These actually use a different protocol: websockets.
Despite the complexities, these details have ended up being generally invisible to the user; you can hit Api::exec like any other method, and you'll get the expected streams you can pipe from and pipe to:
> kube master: optional websocket support for exec/attach with @tokio_rs 1.0. Pipe streams to/from k8s pods. pic.twitter.com/WhSFlPmm60
> — eirik ᐸ'aᐳ (@sszynrae) January 4, 2021
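Concretely, an exec call looks roughly like this minimal sketch (assumes kube is built with its optional websocket feature; the pod name and command are illustrative):

```rust
use k8s_openapi::api::core::v1::Pod;
use kube::api::{Api, AttachParams};
use tokio::io::AsyncReadExt;

async fn exec_example(pods: Api<Pod>) -> Result<(), Box<dyn std::error::Error>> {
    // exec is called like any other Api method, but tunnelled over a websocket
    let mut attached = pods
        .exec("blog-pod", vec!["sh", "-c", "echo hello"], &AttachParams::default())
        .await?;
    // the returned process exposes async streams we can pipe from (and to)
    if let Some(mut stdout) = attached.stdout() {
        let mut out = String::new();
        stdout.read_to_string(&mut out).await?;
        print!("{}", out);
    }
    Ok(())
}
```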
kube::Config
Not to be confused with the file in your home directory, our Config is actually just the relevant parameters we extract from the kubeconfig file (or cluster evars when in-cluster) to help us create a Client.
You generally won't need to instantiate any of these though, nor do you need a Config (as shown above), because Client::try_default will infer the correct one.
Recent updates to stay compatible with the different config variants that kubectl supports mean we now support stacked kubeconfigs and multi-document kubeconfigs.
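In practice, most programs therefore start with something like this minimal sketch (the apiserver version printout is purely illustrative):

```rust
use kube::Client;

#[tokio::main]
async fn main() -> Result<(), kube::Error> {
    // infers the Config from the kubeconfig file (possibly stacked) or in-cluster evars
    let client = Client::try_default().await?;
    let info = client.apiserver_version().await?;
    println!("talking to kubernetes {}.{}", info.major, info.minor);
    Ok(())
}
```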
kube::Client
One of the most updated parts of kube this year, the Client has undergone significant surgery. It is now entirely concerned with the protocol, and handles the serialization plumbing between the Api and the apiserver.
Many improvements are only really appreciable internally:
- watch buffering is now using a tokio codec to give us a much more readable streaming event parser, while still returning a TryStream<Item = Result<WatchEvent<T>>> for Api::watch (a small consumption sketch follows this list)
- Client::connect added to give a way to hyper::upgrade an existing http connection (wtf is upgrading?) to grant a streaming websocket connection from the existing http library. This is the secret sauce behind Api::exec.
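As a small illustration of the first point, consuming the stream returned by Api::watch looks roughly like this (the starting resourceVersion of "0" and the printouts are illustrative):

```rust
use futures::TryStreamExt;
use k8s_openapi::api::core::v1::Pod;
use kube::api::{Api, ListParams, WatchEvent};

async fn watch_example(pods: Api<Pod>) -> Result<(), kube::Error> {
    // watch returns a TryStream of typed WatchEvents
    let mut stream = Box::pin(pods.watch(&ListParams::default(), "0").await?);
    while let Some(event) = stream.try_next().await? {
        match event {
            WatchEvent::Added(p) => println!("added {:?}", p.metadata.name),
            WatchEvent::Modified(p) => println!("modified {:?}", p.metadata.name),
            WatchEvent::Deleted(p) => println!("deleted {:?}", p.metadata.name),
            _ => {} // bookmarks / errors elided in this sketch
        }
    }
    Ok(())
}
```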
Websockets use tokio-tungstenite, a dependency so light-weight it only pulls in tungstenite without its default-features. Crucially, this lets us avoid having yet another way to specify tls stacks and a corresponding explosion of features (weak-dep-features plz).
Of course, supporting multiple protocols, tls stacks, and certs from kubeconfigs means that there's considerable tls handling in kube. Fortunately, we have mostly managed to confine it to one cursed file.
...And if websocket support was not enough:
Every api call now goes through Client::send's new Service, rather than reqwest::send, and we no longer depend on reqwest.
kube::Service
The new Service - injected into the Client, and constructed from an arbitrary Config - is what actually processes a request and turns it into a response.
The Service creates a series of ordered layers to be executed for each request:
- Authentication (extracting tokens, possibly talking to auth providers)
- Url + header mapping from Config to http request
- Dealing with optional compression
- Send it to a HyperClient configured with a Connector
The connector for hyper deals with TLS stack selection, timeouts, and proxying.
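To give a feel for the shape of that layering, here's a toy tower sketch; the single layer and the stub inner service are made up for illustration and are not kube's actual internals:

```rust
use std::convert::Infallible;
use tower::{service_fn, ServiceBuilder, ServiceExt};

#[tokio::main]
async fn main() -> Result<(), Infallible> {
    // innermost "client", standing in for the hyper client + its connector
    let inner = service_fn(|req: String| async move {
        Ok::<_, Infallible>(format!("response to `{}`", req))
    });

    // layers wrap the inner service in order, the way kube stacks auth,
    // url/header mapping and compression around hyper
    let svc = ServiceBuilder::new()
        .map_request(|path: String| format!("GET https://cluster.example{} (with auth header)", path))
        .service(inner);

    let res = svc.oneshot("/api/v1/pods".to_string()).await?;
    println!("{}", res);
    Ok(())
}
```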
Why this abstraction? Well, primarily, less entangling of business logic with IO (Sans-IO goals), and tower provides a robust way to move in that direction.
There's also code-reuse of common service layers (effectively middleware), as well as the ability to mock services out of the box, something that will help create a better mocking setup down the line.
For now, however, the end result is a more light-weight http client: hyper over reqwest, and without changing the core api boundaries (inserting Service between Client and Config will not affect the vast majority of users, who use Client::try_default as the main starting point).
Credits
If you look at the contributors graph, you'll see we have a new maintainer.
How did their contributions spike so quickly? Well, check out the prs for the tower + hyper rearchitecture, and websocket support. Imagine landing those hugely ambitious beasts so quickly, and still managing to merge 20+ more prs since January 🤯
...So, having mostly project-managed the ship for the past two months, it's important to give due credit while we appreciate how far we've come. Huge thanks to kazk.
In fact, from the client capabilities document, we are almost at .
End
TL;DR: A lot has happened. We have dissected parts of kube.
As a kind of conclusion, I would just like to note how much easier these complex problems are to tackle in rust these days, thanks to tokio.rs. The ecosystem is fantastic now.
Future Addendum
Some key kube related issues that I personally hope will be resolved in 2021:
- ergonomics / utils
- testing / mocking
- tracing story
- dyn api improvements
- less Options in generated types
Without being able to give any guarantees. Volunteer work, you know.
Speaking of which: help is instrumental for moving things forward, and always appreciated. Even if you are just fixing docs / examples or asking questions.
Check our discussions, the tokio discord, or the issues if interested. Take care ✨🤗