This certainly looks like a pleasingly straightforward way to spin up k8s.
I do notice that this deploys onto their cloud offering, which we've (https://lithus.eu) found to be a little shaky in a few places. We deploy clients onto their bare metal line-up which we find to be pretty rock solid. The worst that typically happens is the scheduled restart of an upstream router, which we mitigate via multi-AZ deployments.
That being said, there is a base cluster size under which a custom bare-metal deployment isn't really viable in terms of economics/effort. So I'll definitely keep an eye on this.
Probably the easiest out there is https://github.com/vitobotta/hetzner-k3s. There are many options, depending on how low-level you want to go. The Hetzner Terraform project is probably the most complex and complete, but it takes time to configure it all. The main idea was to provide simplification, not just for Kubernetes provisioning on Hetzner, but also for the most common apps and tools that extend Kubernetes capabilities, like ingress controllers, Prometheus, Elasticsearch, databases, and so on.
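For anyone curious what that looks like in practice, hetzner-k3s drives the whole cluster from a single YAML file plus one command. A minimal sketch follows; the exact field names come from the project's README and have changed between releases, and the token, locations, and instance types here are placeholders, so check the current docs before using it:

```shell
# Hypothetical minimal hetzner-k3s config -- verify the schema against the
# README of the release you install; all values below are placeholders.
cat > cluster_config.yaml <<'EOF'
hetzner_token: <your-hcloud-api-token>
cluster_name: demo
kubeconfig_path: ./kubeconfig
k3s_version: v1.30.2+k3s1
masters_pool:
  instance_type: cpx21
  instance_count: 3
  location: fsn1
worker_node_pools:
  - name: workers
    instance_type: cpx31
    instance_count: 2
    location: fsn1
EOF

# One command provisions the servers, installs k3s, and writes the kubeconfig
hetzner-k3s create --config cluster_config.yaml
```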
I agree, this is probably the most complete solution out there. My intention with this project is to provide various layers of abstraction, not only for Kubernetes provisioning but also for the most common apps and tools that usually extend Kubernetes capabilities, while still allowing some low-level configuration options.
I can't seem to figure out where this company is located or whether it is a scam. The website has no imprint and no contact address. There is one email address in the privacy statement, but it is "redacted by Cloudflare". The privacy statement also mentions "Edka Digital S.L.", but gives no indication of which country it is registered in.
For me it does not pass the smell test. No physical address, no idea who is running it, no idea whether the company is actually registered. The pricing FAQ at least mentions VAT, and I assume it is EU VAT, but it could be anything.
Hello there! As I mentioned in the post, I built this as a side project by myself, and I'm running it as a freelancer registered in Spain; you can check my VAT number, ESY1848661G. I was planning to collect some feedback and honestly didn't expect such interest in the project. I will make the necessary adjustments to the privacy policy and terms of service. When I started this, I had in mind to convert it into a company, but I'm still running it as a freelancer. Thanks for your feedback! I will correct my mistake.
Hey, thanks for your immediate reply. Congrats on starting your own business. If you're Spanish-based, maybe something like the "aviso legal" at [1] or the "legal notice" (imprint) at Hetzner [2] is needed, so people can validate that you/your company actually exist.
I'm not familiar with the Spanish S.L. (Sociedad Limitada), but it seems to be a private, share-based legal entity with a minimum of EUR 3,000 in share capital and at least one director. It seems the share capital does not need to be paid in full [3], which is a risk for potential customers if things go wrong.
If you're based in an EU country, I'd suggest clearly communicating all this legal information, because it makes it easier for potential customers to build trust in your services.
I have yet to see a guide to automating k8s on Hetzner's beefy bare metal instances. True, you want cattle, but being able to include some bare metal instances with amazing CPUs and memory would be great, and I do just that: my clusters include both cloud and bare metal instances. In the past I used Hetzner's virtual switch (vSwitch) to create a shared L2 network between cloud and bare metal nodes. Now I just use tailscale.
But the TF and other tools use the API to add and kill nodes; if you could pass those tools a class of nodes that they know they can't create, but can wipe and rebuild, that would be ideal.
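A rough sketch of the tailscale approach described above, assuming k3s: each node joins the tailnet, then registers with the cluster using its tailscale address, so pod traffic between cloud and bare metal flows over the tailnet. `--node-ip` and `--flannel-iface` are standard k3s agent flags; the auth key, server address, and token are placeholders:

```shell
# On each node (cloud or bare metal): join the tailnet first
tailscale up --authkey "$TS_AUTHKEY"

# Register the node with k3s using its tailscale address so that the
# cloud/bare-metal split is invisible to the cluster network.
# The server address and token below are placeholders.
k3s agent \
  --server "https://<control-plane-tailscale-ip>:6443" \
  --token "$K3S_TOKEN" \
  --node-ip "$(tailscale ip -4)" \
  --flannel-iface tailscale0
```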
We considered reaching out in May, but held back because we want to run on bare metal.
Any chance to get this provisioned on bare metal at Hetzner?
We have K8S running on bare metal there. It's a slog to get it all working, but for our use case, having a dedicated 10G LAN between nodes (and a bare metal Cassandra cluster in the same rack) makes a big difference in performance.
Also, from a cost perspective: we run AX41-NVMe dedicated servers that cost us about EUR 64 per server, with a 10G LAN, all in the same rack. To get the same horsepower using cloud instances, I guess we'd need a CCX43, which costs almost double.
We're setting up a data-heavy client at the moment who has a similar need. We're working with Hetzner's custom solutions team to provision a multi-AZ setup, with 25G networking and 100G AZ interconnects. Link in bio if you want to chat, email is adam@...
Are you asking if it can provision bare metal servers with Hetzner in a similar way to what it does with cloud servers, or if it can manage clusters on your existing Hetzner bare metal servers? (In the case of the second, a tool like Rancher might be better.)
That might be a bit challenging unless they sort out an integration directly with Hetzner, as I don't think their API supports anything related to bare metal provisioning, just cloud servers and 'storage boxes'.
A bit off topic, but you might want to rethink the name. It is very close to EDEKA, the largest German supermarket chain. They have a very large IT division (https://it.edeka) and judging from the name of your project I was expecting it to be one of their projects.
Well, I've had this name since 2011, and in 2018 a new disease was labeled EDKA (that is the first result you get when you google "edka"). I only became aware of the German supermarket a few years later. I could consider it at some point, but it's very hard to find something available these days...
1) What are the limitations of the scaling you do? Can I do this programmatically? I.e. send some requests to get additional pods of a specific type online?
2) What have you done in terms of security hardening? You mention hardened pods/clusters, but specifically: did you do a pentest? Just follow best practices? Periodic scans? Stress tests?
Thanks for your questions!
1) The platform provides a control plane to help you deploy the cluster on your own Hetzner account, so you are in control of resources and pay direct usage costs to Hetzner.
2) Because you have full access to the Kubernetes cluster and it runs on your own Hetzner account, the security of the cluster is a shared responsibility, and you can fine-tune the configuration according to your requirements. The platform's security is entirely our responsibility. We try to follow best practices, and internal penetration tests were conducted, but we're still in beta and trying to see if there's interest in such a product before launching the stable version.
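On the programmatic-scaling point: since the cluster hands you a standard kubeconfig, the usual Kubernetes mechanisms apply. A sketch with standard kubectl commands; the deployment name and thresholds here are placeholders, not anything specific to this platform:

```shell
# Imperatively scale a specific workload (name is a placeholder)
kubectl scale deployment/my-api --replicas=5

# Or let the cluster handle it: a HorizontalPodAutoscaler keyed on CPU
kubectl autoscale deployment/my-api --min=2 --max=10 --cpu-percent=80

# The same operations are available over the Kubernetes API for automation,
# e.g. by PATCHing the deployment's /scale subresource.
```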
Sorry about that, I wasn't expecting such interest. There are still undocumented parts, but I'm happy to answer any questions. It uses https://github.com/hetznercloud/csi-driver to attach persistent volumes to PostgreSQL pods.
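For reference, that CSI driver installs a StorageClass (named `hcloud-volumes` by default) that any claim can request; volumes are then created and attached via the Hetzner Cloud API. A sketch of a PVC against it, where the claim name and size are placeholders:

```shell
# A PersistentVolumeClaim against the Hetzner CSI driver's default
# StorageClass; claim name and size are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: hcloud-volumes
  resources:
    requests:
      storage: 10Gi
EOF
```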
If you are looking for Postgres on Hetzner, you may want to check out Ubicloud.
We host in various bare metal providers, including Hetzner. (I am the lead engineer building Ubicloud PostgreSQL, so if you have questions I can answer them)
What are the connectivity options between Hetzner dedicated servers? I see they let you pay to have servers in a single rack with a dedicated switch, but doesn't that introduce a single point of failure in the rack's power or switch?
This is incredibly timely. I've been an AWS customer for 10+ years and have been having a tough time with them lately. Looking at potentially moving off and considering options.
My theory is that with Terraform and container-based infra, it should be pretty easy with Claude Code to migrate anywhere.
This is exactly what we [1] do! We migrate clients out of AWS and into Hetzner bare-metal k8s clusters, and then we also become the client's DevOps team (typically for a lot less than Amazon charges)
I will say that there is a fair bit of lifting required to spin up a k8s cluster on bare metal, particularly for things such as monitoring and distributed block storage (we use OpenEBS). I would ballpark it as a small number of months.
It is likely easier on their cloud offering, but we've found that to be a little less reliable than we would hope.
I tried to deploy a small cluster in the US VA region, but the cluster status kept flipping between Failed and Creating, with no clear way to troubleshoot it: 7ad975fb-3c8e-47a9-b03d-9e6bec81f0db
Thanks for the feedback! I didn't mean to cause any confusion there. AWS KMS is used by the platform to encrypt/decrypt sensitive data before/after storing it in Vault, and it is part of the tech stack used to develop the platform.
I wonder how long before Hetzner adds something like managed Kubernetes to their native product line. They already have S3-compatible object storage, load balancers, and more.
No idea about the timing but I imagine it's coming.
Would make a lot of sense, especially if you can combine it with the hardware servers. You could get a lot of grunt in your cluster for a lot less than for example AWS.
Thanks for the feedback! The platform is mostly self-service, but upgrading the Kubernetes version is very easy: just change the version in the cluster configuration. For OS updates, you can replace the nodes and they will automatically pick up the latest OS image from Hetzner. I also run isolated instances for some small companies as a fully managed service, so that option is available as well.
What is the threat model you want to mitigate with encryption at rest? Is it that a physical disk is not properly wiped after use? Then you could just use LUKS and store the key anywhere else, e.g. on another machine or an external volume…
Set up dropbear, and have another encrypted instance run a cron job every minute that checks for the dropbear port on all instances, SSHes in, and passes the key to boot.
This is what I do for FastComments anyway, for OVH and Hetzner.
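A sketch of that cron script, assuming dropbear-initramfs on the target nodes (which ships the `cryptroot-unlock` helper for feeding the LUKS passphrase to the waiting initramfs). The hostnames, port, and key paths below are placeholders:

```shell
#!/bin/sh
# Hypothetical unlock sweep -- hosts, port, and key paths are placeholders.
# Run from cron every minute on the (already unlocked) key-holder instance.
HOSTS="node1.internal node2.internal"
DROPBEAR_PORT=22

for host in $HOSTS; do
  # Only act when the host is sitting in the initramfs with dropbear listening
  if nc -z -w 2 "$host" "$DROPBEAR_PORT"; then
    # cryptroot-unlock reads the LUKS passphrase on stdin and resumes the boot
    ssh -p "$DROPBEAR_PORT" -i /root/.ssh/unlock_key \
        -o StrictHostKeyChecking=accept-new \
        "root@$host" cryptroot-unlock < /root/luks-passphrase
  fi
done
```

One design note: keeping the passphrase only on the key-holder instance (itself encrypted) means a stolen or improperly wiped disk from any worker node is useless on its own.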
Congrats on shipping! I see that you have WordPress as a pro app. As someone who pays for WP hosting, what I'd like to see there is the ability to "fork" a WP instance, media, DB, everything, with a new hostname, that I can try things, updates, etc.
k3s does support running separate control plane and worker node pools. It's not just for toy-project clusters, or single node clusters. k3s can also power rather big clusters.
I would never have guessed that the overlap between people wanting to run a prod workload on a K8s cluster and folks who need a GUI to set up and manage a K8s cluster would be this big, but it looks like I might be wrong.
> I would have never guessed that there's an overlap between the circle of people wanting to run a prod workload on a K8s cluster and folks that need a GUI to set up and manage a K8s cluster would be that big but looks like I might be wrong.
This was designed for Hetzner, which I still believe has the best offer on the market when comparing price, performance, and stability. On top of that, the platform offers some ready-to-deploy add-ons that simplify configuration after the initial cluster provisioning.
Off topic: k8s aside, what are people using to receive webhooks from GitHub/Gitea/GitLab and do builds/deploys? Is the generally accepted way to put deploy credentials into CI secrets and do it that way?
I can't find this Spanish (?) company in the company register, and none of the legally required information is on the website. Not very trustworthy for a SaaS that stores your data and access keys. I'm confident this is only a startup "day one" issue, but in times of increased scams and extortion, can I be sure? Nope.
Hello there! Fair enough. As I mentioned in the original post, I built this as a side project, by myself, and I run it as a freelancer registered in Spain; it is not hard to find my public profile. You can check my Spanish VAT number, ESY1848661G. This is still in beta, and I'm currently collecting feedback to see if there is any interest in the market before scaling it into a company. Thank you!
[1] https://www.talos.dev/v1.10/talos-guides/install/cloud-platf...
It's not the smoothest thing I've ever used, but it's all self hosted and everything can be fixed with some Terraform or SSH.
Great to see some managed Kubernetes on Hetzner!
I'm using it right now
kube-hetzner seems to be a bit stuck; they have a big backlog for the next major release, but it might never happen.
[1] https://www.hola.com/aviso-legal/ [2] https://www.hetzner.com/legal/legal-notice/ [3] https://www.lawants.com/en/sl-spain/#:~:text=minimum%20share...
I haven't really thought it through yet, whether that even makes sense.
Happy to chat more: adam@...
[1] https://lithus.eu
From there it was much easier just using it for whatever I wanted, including K3S
> I would have never guessed that there's an overlap between the circle of people wanting to run a prod workload on a K8s cluster and folks that need a GUI to set up and manage a K8s cluster would be that big but looks like I might be wrong.
Count how many GKE and EKS users are out there.
Are there plans to support GitLab and the GitLab registry (or any registry)?
Triple. One or two nodes gives you a failure tolerance of zero.
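The arithmetic behind that: an etcd (Raft) control plane with n members keeps quorum only while a majority survives, so it tolerates floor((n-1)/2) failures. A one-liner to see why three is the minimum useful size:

```shell
# Raft quorum: n members tolerate (n-1)/2 failures (integer division)
for n in 1 2 3 5; do
  echo "$n node(s) -> tolerates $(( (n - 1) / 2 )) failure(s)"
done
# 1 and 2 nodes both tolerate 0 failures; 3 tolerates 1; 5 tolerates 2
```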