There is a large community of folks running kubernetes at home, aptly known as k8s-at-home (https://k8s-at-home.com/). They're a great resource for anyone wanting to get started with k8s at home (and especially on Pis). They have helm charts for deploying a lot of common apps, a Discord channel, tutorials, etc.
Thanks for this! I recently set up a k8s home cluster for running an older game's dedicated servers. This will be useful!
(For the inevitable "why?": One of the quirks of this old game is you can run multiple server instances networked together to work around the lack of multithreading in the code; I'm developing k8s tooling to leverage that to scale performance. https://community.bistudio.com/wiki/Arma_3:_Headless_Client)
Excuse my ignorance, but in what way is this bare-metal? I've always taken that term to mean running without an operating system. My only assumption is because it's not run in the cloud, but I figure that would be a given since "Raspberry Pi" is in the name.
Many, if not most, Kubernetes systems run in the cloud, in virtual machines or managed containers. Here, Kubernetes runs on the Pi itself, with no hypervisor or time sharing to chop away at performance.
That's not to say that running k8s on bare metal isn't something that's done. It's more difficult, because you need to do a lot of configuration and verification of best practices yourself, but it can easily become the cheaper option if you have some scaling requirements but do not require the infinite scaling possibilities of the cloud. The entire industry seems to be moving back and forth between cloud and bare metal every few years, and I'm not really sure what part of the cycle we're in right now (I think more and more companies are in the process of going back to bare metal? The move in the other direction could've already started, I'm not sure.)
Technically you could set up a distributed hypervisor cluster consisting solely of networked Raspberry Pis, but I doubt you'd have many interested customers. So yes, "bare metal" is probably the norm for these computers. It's not for Kubernetes, though, and that's what makes this different from your typical deployment.
This is "bare-metal" in the sense of "not virtualized", meaning the host operating system is not running in a virtual machine. You can see this distinction in cloud environments too, for instance most AWS EC2 machine types are virtualized, but AWS also offers "bare metal" instance types that provide direct access to a physical machine.
I understand "bare-metal" to mean without an operating system. More recently, the definition has confusingly expanded to sometimes include an operating system, but without a hypervisor.
This is a tutorial on installing Ubuntu, then k3s, then other software. What exactly is "bare-metal" about this?? :)
In this context, this means running K8s nodes directly on the hardware.
As opposed to running the nodes as virtual machines. Normally VMs are used in the context of cloud providers but it's not uncommon (with beefier hardware) to run k8s as VMs in datacenters. Deployments on top of OpenStack, Azure Stack or Anthos are common. As is ESXi. It's another abstraction layer, but that gives you easier handling of things like storage and, in some cases, networking.
> More recently, the definition has confusingly expanded to sometimes include an operating system, but without a hypervisor.
Bare-metal software (in the original sense of the term) interfaces directly with the hardware without abstracting it via an operating system - much like an operating system would need to do in order to provide that abstraction (albeit you might not need to worry about paging, kernel rings, etc. with bare-metal code).
You see this in plenty of domains: firmware, embedded systems, UEFI, bootloaders, etc.
This used to be the norm too. Old 8-bit personal computers like Commodores didn't run an OS; instead they'd run BASIC straight from firmware (though you could get CP/M, GEM and others for a lot of the later generations of 8-bit micros).
Like an embedded system? Where at boot it just jumps to some offset in ROM where your program lives and starts executing. If you want I/O, you'd better bring your own library and/or be willing to set registers yourself.
No, not all embedded systems run an OS. Linux is not an OS; it's a kernel. Ubuntu is an OS. Kubernetes is application software that runs on several operating systems, primarily ones based on the Linux kernel.
Simple. It only runs one of what you'd think of as a "program".
I've written for bare metal on several platforms. Operating systems exist for a specific use case: when you want to run potentially multiple programs on the same piece of hardware, over time (not necessarily concurrently), and you don't want your software to have to think about how to interface with the hardware. That's what operating systems do. There are plenty of cases when you don't need, or want, an OS.
Ubuntu is installed on bare metal, isn't it? :-) But on a more serious note, "bare metal" in the context of K8S means merely that K8S is not pre-installed by a cloud provider for you, and there are no external services (such as storage or load balancing) available out of the box.
This is awesome! 100% I will do a similar project in the future, so I've saved this article for reference. Well written.
Quick question: if, for example, you decided to add another RPi to the cluster, how easy do you think it would be? Just attach it and connect it to the network?
One disadvantage of k3s is that it does not have an HA control plane out of the box (specifically, users are expected to bring their own HA database solution[1]). Without that, losing that single-point-of-failure control plane node is going to give you a very bad day.
I use kubespray[2] to manage my raspberry pi based k8s homelab, and replacing any nodes, including HA control plane nodes, is as easy as swapping the board and executing an ansible playbook. The downsides of this are that it requires users to have more knowledge about operating k8s, and that a single ansible playbook run takes 30-40 minutes...
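For reference, the flow looks roughly like this from a kubespray checkout (the inventory path and node name below are just placeholders; check the playbook names against your kubespray version):

    # full deploy / converge of the cluster - this is the 30-40 minute run
    ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml

    # after swapping a board in: add it to the inventory, then bring it into the cluster
    ansible-playbook -i inventory/mycluster/hosts.yaml --become scale.yml

    # drop the dead node's old registration if it's still hanging around
    ansible-playbook -i inventory/mycluster/hosts.yaml --become remove-node.yml -e node=worker-3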
Thanks for the info. I haven't been following k3s development after I decided to switch to kubespray. Glad that this concern has been addressed. Nice work!
I like k3s a lot, and this is not an endorsement of MicroK8s over k3s. I found it quite easy to burn an SD card with the latest Ubuntu Server image for RPi and install microk8s. Yes, it has the snapd stuff that seemingly nobody likes. However, this quick experiment of mine has been running for nearly two years and I haven't felt compelled to change it. I've been through 1.18 to 1.21 of k8s upgrades. Also, while at first the plugins annoyed me, I let it go and found it easy to add metallb and other necessities through the provided plugins.
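For anyone curious, the whole setup is roughly this (the channel and address range are just examples; adjust for your LAN):

    # install MicroK8s from the snap and let your user talk to it
    sudo snap install microk8s --classic --channel=1.21/stable
    sudo usermod -aG microk8s $USER

    # enable the built-in addons, including MetalLB with a range carved out of your LAN
    microk8s enable dns storage
    microk8s enable metallb:192.168.1.200-192.168.1.220

    # sanity check
    microk8s kubectl get nodes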
Wow, this is almost exactly like my setup :D One thing I noticed was much better performance after I switched from booting/running off an SD card to a decent flash drive. Nice write-up!
I've been running something similar on my three Raspberry Pi 4 with microk8s and flux [1]. Flux is great for a homelab environment because I can fearlessly destroy my cluster and install my services on a fresh one with just a few commands.
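The rebuild is roughly this (owner/repo/path are placeholders for your own GitOps repo):

    # point flux at the git repo holding the cluster manifests
    flux bootstrap github \
      --owner=my-github-user \
      --repository=homelab \
      --path=clusters/home \
      --personal

    # flux then reconciles everything under that path onto the fresh cluster
    flux get kustomizations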
Next on my list is setting up a service mesh like Istio and trying inter-cluster networking between my cloud cluster and my home Raspberry Pi cluster. Perhaps I can save some money on my cloud cluster by offloading non-essential services to the Pi cluster.
I'm also curious about getting a couple more external SSDs and setting up some Ceph storage. Has anyone tried this? How is the performance?
One of my pain points is the interaction of the load balancer (metallb) with the router. It seems to want to assign my cluster an IP from a range, but may choose different ones at different times. Then I have to go update the port-forwarding rules on my router. What solutions do you all use for exposing Kubernetes services to the internet?
> One of my pain points is the interaction of the load balancer (metallb) with the router
That part is incredibly annoying. Wondering about that as well. The ideal solution would involve something like a cloud-controller-style driver that could talk to the router directly, as is done with cloud provider APIs.
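One workaround until something like that exists: give MetalLB a pool with a single address and request exactly that address on the Service, so the port-forward target on the router never changes. A rough sketch using the older ConfigMap-style MetalLB config (addresses and names are just examples):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: ingress-pool
          protocol: layer2
          addresses:
          - 192.168.1.240/32   # single address, so MetalLB can't pick anything else
    EOF

    # pin an existing LoadBalancer Service (name is a placeholder) to that address
    kubectl patch svc my-ingress -p '{"spec": {"loadBalancerIP": "192.168.1.240"}}'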
ugh it's great that k3s exists, but frustrating that kube can't hit this target on its own
It seems like small clusters are not economical with vanilla kube. (I say this having frequently tried and failed to do this, but not having done the napkin math on system pod budgets to prove it to myself generally.) And this gets worse once you try to install any kind of plugins or monitoring tools.
I really wonder if there's a hole in the market for 'manage 5-10 containers with ingress on one or two smallish nodes', or if there's a hard core of users of alternatives like Swarm mode. This guy https://mrkaran.dev/posts/home-server-nomad/ evolved his home lab from kube to Nomad to terraformed pure Docker over 3 years.
> it seems like small clusters are not economical with vanilla kube
Why, though? The memory footprint is a couple hundred MB. You ideally need 3 nodes, but you _can_ run on one. I have deployed a single-node MicroK8s without issues.
Usually, the containers themselves (your workloads) are the hogs. Deploying multiple pod replicas in a single machine has innate inefficiencies.
The majority of people using K8s aren't hobbyists. They're enterprises running hundreds if not thousands of nodes. For most of them the offering of k3s is irrelevant; they can spare the extra few hundred megs of RAM.
As the former parent of a 100+ node cluster, I'm mostly with you -- but we also had dev environments that were 1-10 pods, where we would have liked low overhead and didn't need HA.
Also valuable for creating local copies of cloud infra so you can develop and test.
I did the same in the last few days, although I run everything on a single-node k3s cluster.
There are some caveats when trying this that one should be aware of. For example, there are some bugs with the iptables binary that result in megabytes of duplicate iptables rules after a few days, slowing everything down to a crawl (k3s issue 3117).
Also, many images on Docker Hub etc. still don't supply arm64 builds, which means you will have to build your own containers for them if you want to use them.
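If you have an x86 box with a recent Docker around, cross-building is usually just this (image name is a placeholder; you may also need QEMU/binfmt set up, depending on the host):

    # one-time: create a builder that can target other architectures
    docker buildx create --use

    # build for arm64 (and amd64, so the same tag works everywhere) and push
    docker buildx build \
      --platform linux/arm64,linux/amd64 \
      -t my-registry/my-app:latest \
      --push .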
Other than that, Raspberry Pi 4s make excellent Kubernetes nodes (at least if you get the 8GB version, since even though k3s is reportedly a lightweight Kubernetes alternative, it still needs a couple hundred megabytes of RAM just for the k3s-server binary).
I like the trays. Back when I rebuilt mine from Pi 1Bs to 2Bs, I printed a vertical holder where the Pi itself was the slottable unit, but if I were to rebuild mine today I'd print some trays and use an existing box.
One thing I did that might make sense for that setup was giving my master a Wi-Fi dongle and having it perform NAT/DHCP for the other Pis - this makes a lot of stuff easier, including auto-configuration.
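The NAT half of that is only a few lines on the master; roughly this, assuming wlan0 is the uplink and eth0 faces the cluster switch (interface names will vary, and DHCP can be handled by something like dnsmasq on the wired side):

    # let the master forward packets between the two interfaces
    sudo sysctl -w net.ipv4.ip_forward=1

    # masquerade cluster traffic going out over the Wi-Fi dongle
    sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
    sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
    sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT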
Awesome project! I hope in the near future small to mid-size dev teams can use an architecture like this to run ephemeral dev environments locally and save tons of money by moving off the cloud. At the end of the day, EKS and GKE require too much ops experience when they scale. Raspberry Pi based local solutions have the potential to democratize solutions architecture within organisations.
Slightly off-topic, but I’m curious if anyone’s had use for/good luck with deploying a large number of Pis and having them all network boot off the same image? I’ve looked into using this method for controlling projectors for theater shows, but it could also be a neat way to handle a cluster while you’re developing and testing a new image.
If I have my story straight, it was network booted, and I believe it's the source of another piece of software they developed, called Kraken, that helped manage the finickiness of a Raspberry Pi cluster.
https://github.com/kraken-hpc/kraken
Kraken is a state engine. In the case of the Pi cluster Kraken provided network images, installed and configured nodes, rebooted nodes, and manipulated their state as needed to keep things running.
Another fun piece of research on Pis out of LANL is that at their altitude, 7000ft, they estimate that on average a Raspberry Pi will crash once every 2 years due to bit flips caused by cosmic rays.
Could you explain a little bit more how cosmic rays move through space and how they cause values stored in hardware to change?
There was an interesting glitch in SM64 where a player was teleported to a new location because of a specific bit being flipped by a cosmic ray. I'm curious to know how the ray flips specific bits (why certain ones and not others), and how the process works electrically.
EDIT: Ah, interesting - it seems these particles are things like protons coming from supernovae and black holes. They're moving at basically the speed of light, and are much smaller than the electron well where a bit's charge is stored in hardware, so that's why they're able to flip exactly one bit when they collide with that one transistor.
These build logs can sometimes make playing with RPis look rather complicated. It doesn't have to be. Here's a guide I wrote for getting productive and running some code on your newly minted cluster (an often forgotten step) - https://alexellisuk.medium.com/walk-through-install-kubernet...