Hey, we're the spinning-factory team, the folks behind Kloak.
Kloak runs as a Kubernetes controller. It swaps the secrets in your workloads for harmless placeholders we call kloaked secrets, then uses eBPF to substitute the real secrets back in at the last moment — right when your app makes a request to an allowed host.
Today, Kloak works with any app using OpenSSL 3.0–3.5 (statically or dynamically linked) or go-tls (Go 1.25 and 1.26). Support for more TLS libraries (GnuTLS, BoringSSL, and others) and additional Go versions is on the roadmap.
Kloak is open source under the AGPL, contributions are welcome! We are also happy to hear any feedback and answer any question for the HN community.
For security products, trust is important. Writing your website copy by hand will help you build trust; if the design and content don't look human-written, it will lower adoption.
Thank you for the feedback! We are currently short-handed, so we relied on AI a lot for writing our docs. We reviewed them as much as we could, but there is definitely room for improvement, and we will try to get better at this.
In the meantime, if you find any discrepancy in the docs, or anything else we can correct, please open an issue and we will get to it ASAP.
Secrets are detected before encryption, in the user buffer, but rewrites happen post-encryption, in the kernel buffer about to go out on the wire.
Packet boundaries are not an issue, because detection happens at the SSL write, where we have the full secret in the buffer along with its position. So at rewrite time we know when the secret crosses two packets and can rewrite it in two separate operations. We also update the TLS session hash at the end so we don't corrupt the TLS frame.
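To make that concrete, here's a rough sketch of the splitting step in plain Go (not Kloak's actual eBPF code; the names and the segment model are illustrative): given the secret's stream offset recorded at the SSL write, each outgoing segment gets its own rewrite operation covering the slice of the secret it carries.

```go
package main

import "fmt"

// rewriteOp describes one in-place rewrite within a single outgoing segment.
type rewriteOp struct {
	SegIndex int // which segment (packet payload) to patch
	SegOff   int // offset within that segment
	Len      int // number of bytes to overwrite
}

// splitRewrite computes the per-segment rewrite operations for a secret that
// starts at streamOff and is secretLen bytes long, given the sizes of the
// outgoing segments in stream order. A secret that straddles a segment
// boundary yields one op per segment it touches.
func splitRewrite(streamOff, secretLen int, segSizes []int) []rewriteOp {
	var ops []rewriteOp
	segStart := 0
	for i, size := range segSizes {
		segEnd := segStart + size
		// overlap of [streamOff, streamOff+secretLen) with [segStart, segEnd)
		lo := max(streamOff, segStart)
		hi := min(streamOff+secretLen, segEnd)
		if lo < hi {
			ops = append(ops, rewriteOp{SegIndex: i, SegOff: lo - segStart, Len: hi - lo})
		}
		segStart = segEnd
	}
	return ops
}

func main() {
	// A 16-byte secret at stream offset 1440, with 1448-byte segments:
	// it crosses the first boundary and is patched in two operations.
	fmt.Println(splitRewrite(1440, 16, []int{1448, 1448})) // → [{0 1440 8} {1 0 8}]
}
```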
This is fantastic! I need this. However, for my self-hosted home projects that are containerized but don't use Kubernetes, is there a way for me to use a version of Kloak that does the same eBPF magic on docker-compose or LXC/QEMU (Incus) stacks?
It's perfectly fine for you to say non-Kubernetes isn't either your focus or on your 90 day roadmap :)
- My specific use case is to not need Conjur Secretless Broker (https://github.com/cyberark/secretless-broker). My understanding of eBPF is entirely superficial, but from a 30k-ft view it looks like this can not only replace it but would be a far more efficient solution (Conjur is a user-space proxy, while Kloak works at lower levels of abstraction)?
Yes, please open an issue on https://github.com/spinningfactory/kloak/issues and we can discuss this. I'm not familiar with Secretless Broker, but we can definitely see if that use case fits Kloak and get into more specifics on how you can help.
Thank you! We appreciate your enthusiasm! :-)
From a technology perspective, nothing prevents Kloak from doing rewrites on any workload scheduler, or even without a scheduler (native Linux). The main challenge is finding a flow to signal to Kloak what to rewrite, and a way to inject kloaked secrets into the workload.
TBH, supporting other technologies is not something we had thought about, but we can definitely consider it if there is an ask for it from the community.
The way we thought about it is through the lens of two personas:
- a persona that controls the control-plane side: which secrets to distribute to which users, and which hosts they are allowed to send those secrets to (probably a platform or secops team)
- a persona that represents the user who needs to reach host X with secret Y (probably the dev team)
Based on this, the secret-rewrite signal needs to be out of band and not part of the request itself, or the whole model falls apart.
We already intend to support rewrites for specific headers, but those headers are defined by the first persona, out of band, too.
BTW, we support rewrites of the Postgres protocol for DB passwords.
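As a sketch of what such an out-of-band rule might look like, here's a hypothetical policy shape in Go (this is not Kloak's actual API; all field names are made up for illustration):

```go
package main

import "fmt"

// RewritePolicy is a hypothetical shape for an out-of-band rewrite rule,
// as the first persona (platform/secops) might define it. The dev team's
// requests never carry this information themselves.
type RewritePolicy struct {
	SecretRef    string   // which real secret the kloaked placeholder maps to
	AllowedHosts []string // hosts the real secret may be sent to
	Headers      []string // HTTP headers eligible for rewriting
}

// allows reports whether the policy permits a rewrite for a destination host.
func (p RewritePolicy) allows(host string) bool {
	for _, h := range p.AllowedHosts {
		if h == host {
			return true
		}
	}
	return false
}

func main() {
	p := RewritePolicy{
		SecretRef:    "payments-api-key",
		AllowedHosts: []string{"api.stripe.com"},
		Headers:      []string{"Authorization"},
	}
	// A request to an allowed host gets the real secret; any other host
	// only ever sees the placeholder.
	fmt.Println(p.allows("api.stripe.com"), p.allows("evil.example.com")) // → true false
}
```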
Awesome project! We need more eBPF projects, and congrats on launching.
Assuming I hijack a production pod, can I not just make an http call to myself with the `kloak:...` secret and get back the real secret? Is there a way to validate destination?
I've heard one way to achieve this is with API proxies/gateways. You can store secrets in a vault if you wish, but with a proxy your app makes requests as usual without using secrets; its requests are then intercepted by the proxy, which adds authentication information transparently.
The added benefit is that you can also manage things like API rate limits, and implement all sorts of cool monitoring and API-specific threat detection centrally. I don't know of a way to do this outside of cloud provider services, though.
Architecturally speaking, you have an environment that is at the same level of trust with respect to the data it processes; anything in there is unsecured, but all interactions outside of the system pass through a gateway proxy that manages everything I mentioned earlier, including secret management.
- send traffic to the proxy (either non-transparently, or using routes or even eBPF to redirect traffic to the proxy transparently)
- trust the proxy's certs, or use plain HTTP/TCP to the proxy
With Kloak, the app doesn't need any modification, and you avoid a single point of failure (the egress proxy). Each app has an independent eBPF program attached to it that can survive the control plane going down, and doesn't need to trust any special certs or change the endpoint it sends traffic to.
Cool, but the single point of failure (it could be HAProxy) is the point: it's a choke point. I get that both architectures have pros and cons; with the proxy approach you remove secrets from the application environment entirely. Plain HTTP shouldn't be an issue, and neither should internal certs, whose only point is to let applications that refuse to work with plain HTTP function. Personally, I would prefer the best of both worlds, where the proxies are per-node.
But not everyone wants to, or can afford to, run a proxy for credential management. I started looking into this mostly to regulate API usage, especially burning through tokens when calling LLM APIs; the credential benefit only occurred to me afterwards. Great work with it. I have no idea how the eBPF magic makes it work; I'll have to find out.
Was just talking about this the other day, although more in line with a custom controller to replace _all_ secrets / env variables used at runtime automatically (LD_PRELOAD getenv?). I recognize this serves a different use case: I was trying to only decrypt KMS-encrypted secrets in-memory / in-flight, so that an attacker would have a harder time reading secrets in-cluster or in a pod shell.
Such a sick idea, and incredibly useful. Would be nice if it integrated directly with secrets managers RE: ESO
I think it is funny that it's a sewer, because a sewer is also an underground way around things, which is a good description of the out-of-band solution here. So the name checks out.
This is pretty cool, nice project. Can you expand on what threat model this combats?
Also, does the replace op happen only for specific fields in HTTP, or for every matching string in the request? I can imagine the latter if you want to support non-standard authentication methods, though there's always the edge case where the secret-string placeholder is not used as a secret and should not be replaced.
The main threat model is an application leaking secrets:
- an internet-facing app that could potentially be hacked, with a bad actor exfiltrating secrets
- an AI agent that can exfiltrate secrets, e.g. through prompt injection or context poisoning
- the general case where a secret is injected by mistake into logs, for instance
How does this compare with TPUs? Can you not have secrets in the TPU which cannot be accessed directly by apps, solving this threat vector? I get that you want compatibility with popular libraries, but I wonder if the actual solution is to use hardware support to enforce the secret boundaries.
I'm not super familiar with TPUs and trusted execution environments, but my understanding is that they serve a different threat model.
TEEs aim to protect a given workload from the host, to prevent another workload on the same host from stealing secrets.
Kloak's aim is to protect the secret from the workload itself, not from the host.
It should work in cloud environments. We have tested it on EKS and DigitalOcean so far, and it works. The Kloak controller is deployed as a privileged DaemonSet that has access to the underlying host and can perform eBPF attachment operations on all the pods on that host.
This is not something we support currently. We will need to do some research on ways to support it.
The main hurdle is that we can't rewrite secrets in any of the user buffers, as this would defeat our threat model, and signing is usually done in user space.
Thank you!
Not really; the controller is not doing data-plane work per se, it only pushes eBPF programs into the kernel for the relevant apps/cgroups, so it could be considered control plane. The full data plane runs in eBPF.
It probably doesn't. Google Secrets Manager is a cloud API. This runs in Kubernetes and sits between a Kubernetes Secret object and the application calling for it.
Generally speaking, if you're running Kubernetes in GCP (likely via GKE), and you control how your applications retrieve their secrets, you're likely better off with a combination of Workload Identity Federation, tight IAM on Secrets Manager, and a smart secrets-retrieval strategy: likely lazy-loading secrets and attempting a reload on a permission-denied error so it can deal with secrets rotation.
For applications where that's not an option, the state of the art has been ensuring etcd is actually encrypted (as opposed to the default Base64 encoding) and relying on Kubernetes Secrets, usually either mounted in the filesystem or passed as environment variables.
Both these approaches have a weakness: the secrets are immediately available to all processes in the container.
OP seems to solve that by never exposing the secrets to the application, by sitting between the application and the service and replacing the secret on the wire, outside of the application's reach.
https://discuss.linuxcontainers.org/t/how-to-best-ask-questi...
- What's the best way to discuss this specific topic with you? An issue at https://github.com/spinningfactory/kloak/issues, or something else?
Would it be realistic or reasonable to detect a header like `X-kloak-ENABLED`, or specific endpoints, in the case of HTTP?
Similarly for wire protocols like PostgreSQL or gRPC?
Or would a usermode proxy be easier but not preferred due to overhead?
https://en.wikipedia.org/wiki/Cloaca_Maxima
The main thing I wonder is how well supported it is in cloud environments. AKS/EKS/etc.?
2. Code that does the injection in eBPF and needs to live alongside your app.
From my understanding of the README and Helm chart, these are both in the DaemonSet.