I noticed module support is enabled in your kernel, and I was hoping to activate at least WireGuard and GRE for some experimentation. I thought I could achieve this by rebuilding the kernel modules myself and shipping them in the image, but that raises the question of where the kernel binary and config come from. I could not find them on GitHub. Is the kernel currently private, or perhaps an off-the-shelf build from somewhere? There is a /proc/config.gz which could be used to rebuild it, provided the kernel is an unpatched vanilla tree.
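For what it's worth, here's roughly the check I ran against /proc/config.gz before asking. The symbol names are the standard upstream ones (CONFIG_WIREGUARD is mainline since 5.6); seeing `m` for these would mean the modules just need to be built and shipped:

```python
import gzip

# Read the running kernel's build config, exposed because
# CONFIG_IKCONFIG_PROC is enabled (hence /proc/config.gz existing).
with gzip.open("/proc/config.gz", "rt") as f:
    config = dict(
        line.strip().split("=", 1)
        for line in f
        if line.startswith("CONFIG_")  # skips "# CONFIG_... is not set" lines
    )

# CONFIG_MODULES   - loadable module support
# CONFIG_WIREGUARD - WireGuard
# CONFIG_NET_IPGRE - GRE tunnels over IPv4
for sym in ("CONFIG_MODULES", "CONFIG_WIREGUARD", "CONFIG_NET_IPGRE"):
    print(sym, "=", config.get(sym, "not set"))
```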
I’d like this for two wildly experimental reasons:
- Running an IPv4 WireGuard connection from inside the containers to an existing Kubernetes cluster which cannot easily (or at all) be made to speak IPv6. This would let the containers reach k8s services without the expense of TLS handshakes over the public Internet, and without exposing those services publicly at all (roughly sketched after this list).
- Running a GRE tunnel over the WireGuard link, so the containers can join multicast groups published on the remote end (WireGuard itself basically doesn't support multicast yet). In the experimental app, this removes a lot of complexity around selectively routing live streams to the appropriate containers, and avoids duplicating a bunch of infrastructure that already exists on the remote end (also sketched below).
This is mostly for fun. I realize there is room for breakage if your kernels are silently upgraded, but it also seems like a fairly awesome use case that is only possible because of your hosting model.