Or microk8s. I'm curious what it is about k8s that is sucking up all these resources. Surely the control plane is mostly idle when you aren't doing things with it?
There are 3 components to "the control plane", and realistically only one of them is what you meant by idle. The Node-local kubelet (which reports in the state of affairs and asks if there is any work) is a constantly active thing, as one would expect from such a polling setup. etcd, or its replacement, is constantly(?) firing off watch notifications or reconciliation notifications based on the inputs from those kubelet updates. Only the actual kube-apiserver is conceptually idle: I'm not aware of any compute it does, itself, other than in response to requests made of it.
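If you want to see that second kind of churn with your own eyes, here's a rough sketch; it assumes etcdctl's v3 API, kubeadm-style cert paths, and the default /registry prefix, so adjust for your own setup:

  # watch the Node lease keys, which churn even on an "idle" cluster
  ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    watch --prefix /registry/leases/kube-node-lease/

By default each Node renews its lease roughly every 10 seconds, so that stream never goes quiet even when nothing is "happening".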
Put another way: in my experience running clusters, $(ps auwx) or its $(top) friend always shows etcd or sqlite generating all of the "WHAT are you doing?!" load, and those also represent the actual risk in running kubernetes, since the apiserver is mostly stateless[1]
1: but holy cow watch out for mTLS because cert expiry will ruin your day across all of the components
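If it's a kubeadm-built cluster (an assumption on my part), two quick ways to see that day coming:

  # kubeadm keeps an inventory of the certs it manages
  kubeadm certs check-expiration
  # or interrogate any one cert directly
  openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt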
I've noticed that etcd seems to do an awful lot of disk writes, even on an "idle" cluster. Nothing is changing. What is it actually doing with all those writes?
Almost certainly it's the propagation of the kubelet check-ins rippling through etcd's accounting system[1]. Every time these discussions come up I'm left wondering whether Valkey would behave the same, or Consul (back when it was sanely licensed). But after 31 releases I am now convinced the pluggable-KV ship has sailed and they're just not interested. I, similarly, am not yet curious enough to pull a k0s and fork it just to find out.
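If you want to confirm it on your own cluster, a sketch that assumes you can reach etcd's metrics endpoint with its client certs (metric names shift a bit between etcd versions, so treat these as examples rather than gospel):

  # put and WAL-fsync counters; watch them climb while the cluster is "doing nothing"
  curl -s \
    --cacert /etc/kubernetes/pki/etcd/ca.crt \
    --cert /etc/kubernetes/pki/etcd/server.crt \
    --key /etc/kubernetes/pki/etcd/server.key \
    https://127.0.0.1:2379/metrics \
    | grep -E 'etcd_mvcc_put_total|etcd_disk_wal_fsync_duration_seconds_count'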
1: relatedly, if you haven't ever tried to run a cluster bigger than about 450 Nodes: that's actually the whole reason kube-apiserver's --etcd-servers-overrides exists, because the torrent of Node status updates will knock over the primary etcd, so one has to offload /events into its own etcd
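For the curious, the override format is group/resource#servers; a minimal sketch with made-up endpoints, and with everything else on the kube-apiserver command line omitted:

  # core-group Events go to their own etcd; everything else stays on the primary
  kube-apiserver \
    --etcd-servers=https://etcd-main-0:2379 \
    --etcd-servers-overrides=/events#https://etcd-events-0:2379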