Intel SGX/remote attestation for verifying that servers are running the code they say they are running is very interesting; I believe Signal talked about doing something similar for contact discovery. But at a base level it requires a lot of trust. How do I verify that the attestation I receive back is the one from the machine I am contacting? Can I know for sure that this isn't a compromised SGX configuration, since the system has been broken in the past? Furthermore, can I really trust SGX attestations if I can't actually verify SGX itself? Even if the code running under SGX is verifiable, as an ordinary user it's basically impossible to tell whether there are bugs that would make it possible to compromise it.
Personally I like the direction Mullvad went instead. I get that it means we really can't verify Mullvad's claims, but even in the event they're lying, at least we got some cool Coreboot ports out of it.
If you're really paranoid, neither this service nor Mullvad offers that much assurance. I like the idea of verifiability, but I believe the type of people who want it are looking to satisfy deeper paranoia than can be answered with just trusting Intel... Still, more VPN options that try to take privacy claims seriously is nothing to complain about.
Intel will not attest insecure configurations. Our client automatically verifies the attestation it receives, making sure the certificate isn't expired and carries a proper signature chaining up to Intel's CA.
A lot of people have been attempting to attack SGX, and while there have been some successful attacks, these have been addressed and resolved by Intel. Intel will not attest any insecure configuration, and neither will other TEE vendors (AMD SEV, ARM TrustZone, etc.).
I really am interested in how this works. How can the client software verify that the SGX attestation actually is from the same machine as the VPN connection? I guess there's probably an answer here, but I don't know enough about SGX.
The way this works is by generating a private key inside the enclave and having the CPU attest its public key.
This allows generating a self-signed TLS certificate that includes the attestation (under OID 1.3.6.1.4.1.311.105.1). A connecting client then verifies the TLS certificate not via the standard chain of trust, but by reading the attestation, verifying that the attestation itself is valid (properly signed, matching measured values, etc.), and verifying that the containing TLS certificate is indeed signed with the attested key.
Intel includes a number of details inside the attestation, the most important being Intel's own signature over the attestation and a chain of trust up to their CA.
Intel audits the configuration at system launch and verifies it is running something they know to be safe. That involves the CPU, CPU microcode, BIOS version, and a few other things (SGX may not work if you don't have the right RAM, for example).
The final signature comes in the form of an X.509 certificate signed with ECDSA.
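To make that flow concrete, here is a minimal client-side sketch in Python using the cryptography package. This is not our actual client code: parse_quote and its fields are hypothetical stand-ins for a real decoder of Intel's binary quote format; only the OID is the one mentioned above.

    # Sketch: verify a self-signed TLS cert embedding an SGX attestation.
    from cryptography import x509
    from cryptography.x509.oid import ObjectIdentifier
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    ATTESTATION_OID = ObjectIdentifier("1.3.6.1.4.1.311.105.1")

    def parse_quote(raw: bytes):
        """Hypothetical decoder for Intel's binary quote format.
        A real implementation would parse MRENCLAVE, the attested
        public key, and Intel's signature out of the raw bytes."""
        raise NotImplementedError

    def check_certificate(pem: bytes, expected_mrenclave: bytes) -> bool:
        cert = x509.load_pem_x509_certificate(pem)
        ext = cert.extensions.get_extension_for_oid(ATTESTATION_OID)
        quote = parse_quote(ext.value.value)  # raw extension bytes

        # 1. The measurement must match the value from our own build.
        if quote.mrenclave != expected_mrenclave:
            return False

        # 2. The attested key must be this certificate's public key.
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        if quote.attested_public_key != spki:
            return False

        # 3. The cert must verify under its own (attested) key; the
        #    usual CA chain of trust is deliberately ignored here.
        cert.public_key().verify(  # raises InvalidSignature on failure
            cert.signature,
            cert.tbs_certificate_bytes,
            ec.ECDSA(cert.signature_hash_algorithm),
        )
        return True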
What's more important to me is that SGX still has a lot of security researchers attempting (and currently failing) to break it further.
These VPNs for privacy are so bad. You hand your credit card (verified identity), default gateway, and payload to foreign soil and feel safe. On top of that, your packets' cleartext metadata identifies you with cryptographic accuracy.
On today's internet you just cannot have an exit IP that isn't tied to your identity, your payment information, or your physical location. And don't even mention Tor, please.
You're welcome to use cryptocurrencies (we have a page for that), and our system only links your identity at connection time to ensure you have a valid subscription. Your traffic isn't tied to your identity, and you can look at the code to verify that.
Cryptocurrencies? Aka the least private form of transactions, where not only do the sender and receiver know about them, but the whole blockchain immutably stores them for everyone else to view?
> Cryptocurrencies? Aka the least private form of transactions, where not only do the sender and receiver know about them, but the whole blockchain immutably stores them for everyone else to view?
There are cryptocurrencies like ZCash, Monero, Zano, Freedom Dollar, etc. that are sufficiently private.
Semi-serious: redeemable codes you can buy at a national retail chain, ostensibly using cash. It has the unfortunate side effect of training people to fall for scams, however. Bonus points if you can somehow make the codes worthless on the black market, I guess.
Someone had a comment here that just disappeared, mentioning it's by Mark Karpelès (yes, the same guy from MtGox) and Andrew Lee. Why did that remark get deleted?
They claim to allow anonymous sign-up and payments, but require an email, an account, a ZIP code, and a name for crypto payments; fake info could be used, I guess. I tried ordering via crypto, but it constantly gives me this error: "Unable to load order information. Try again".
Honestly, I feel more comfortable using Mullvad. This team has some folks with questionable backgrounds and I wouldn't trust Intel. Also, VPN providers are usually in non-US countries due to things like the Lavabit and Yahoo incidents and the Snowden revelations.
> Honestly, I feel more comfortable using Mullvad. This team has some folks with questionable backgrounds and I wouldn't trust Intel.
Relying on "trust" in a security/privacy architecture isn't the right way to do things, and that is what this solves. It removes the need to trust a person or group of people (most VPN companies have many employees) and moves that trust to code.
> Also, VPN providers are usually in non-US countries due to things like the Lavabit and Yahoo incidents and the Snowden revelations.
The system is designed so that any server-side change will be immediately noticed by clients/users. As a result, these issues are sufficiently mitigated, and people can instead take advantage of the strong consumer and personal protection laws in the US.
One of the many reasons I love Mullvad (been using it for 4 years now) is their simple pricing—$5/month whether you subscribe monthly, yearly, or even 10 years out.
I wanted to give your product a try, but the gap between the 1-month and 2-year plans is so big that a single month feels like a rip-off, while I’m not ready to commit to 2 years either.
On payments: for a privacy-focused product, Monero isn’t just a luxury, it’s a must (at least for me). A VPN that doesn’t accept Monero forces users into surveillance finance, since card and bank payments are legally preserved forever by processors. That means even if the VPN “keeps no logs,” the payment trail still ties your real identity to the service.
How does this attestation work? How can I be sure that this isn't just returning the fingerprint I expect without actually running in an enclave at all? Does Intel sign those messages?
Similar to TLS, the attestation includes a signature and an X.509 certificate with a chain of trust up to Intel's CA. The whole attestation is certified by Intel to be valid, and details such as the enclave fingerprint (MRENCLAVE) are generated by the CPU as part of the attestation.
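Mechanically, "chain of trust up to Intel's CA" boils down to the usual X.509 walk. A rough sketch, assuming an ECDSA-signed chain and a locally pinned copy of Intel's root certificate (again illustrative, not our actual client code):

    # Sketch: walk an attestation certificate chain leaf -> root,
    # checking validity windows and each signature against the
    # issuer's public key. Needs cryptography >= 42 for the
    # *_utc accessors.
    from datetime import datetime, timezone
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import ec

    def verify_chain(chain_pems: list[bytes], pinned_intel_root: bytes) -> None:
        certs = [x509.load_pem_x509_certificate(p) for p in chain_pems]
        certs.append(x509.load_pem_x509_certificate(pinned_intel_root))
        now = datetime.now(timezone.utc)
        for cert in certs:
            assert cert.not_valid_before_utc <= now <= cert.not_valid_after_utc
        for child, issuer in zip(certs, certs[1:]):
            issuer.public_key().verify(  # raises on a bad signature
                child.signature,
                child.tbs_certificate_bytes,
                ec.ECDSA(child.signature_hash_algorithm),
            )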
This whole process is already widely used in the financial and automotive sectors to ensure servers are indeed running what they claim to be running, and it is well documented.
The US government might be able to pressure Intel into doing something with SGX, but there are way too many eyes on this for it to go unnoticed in my opinion, especially considering SGX has been around for so long and messed with by so many security researchers.
The US government also likely learned a lesson from early attempts at backdoors (RSA, etc.): these kinds of things do not stay hidden and do not reflect well on anyone involved.
We've thought about this long and hard and are planning to mitigate it as much as possible. Meanwhile, we still offer something that is a huge step forward compared to what is standard in the industry.
> there are way too many eyes on this for it to go unnoticed
Whose eyes? AFAIU, to hack the SGX trust chain you just need to get a single public key registered in their PKI system. They probably already have soft keys registered in the system, just for development and testing. Much depends on how they've set up the process, but it might take just one or two employees in the loop to do this licitly. But in practice there are far easier ways to subvert the system, especially if you're the one writing the code to be attested. It's quite similar to the situation with TLS certificates. Does anybody doubt governments have managed to licitly get a fake certificate signed for a target domain? But has any employee ever come out and clearly stated this? (Maybe they have, but I don't remember ever reading that story.) At the same time, that process is but one of multiple viable options available to a government to licitly or illicitly hack a target, and in many cases probably not even worth the paperwork involved.
That said, I think there's definitely value in using SGX, but much depends on the system built around it.
Seems fairly similar; basically ARM's take on a TEE. We started with SGX because it is battle-tested and has a lot of people still trying to find issues, meaning any issue is likely solved quickly, but we are planning to evaluate and support other solutions as well. In both cases, information is restricted and cannot leave the enclave unless the code running in there allows it to.
And once a CPU is hit with a voltage-glitching attack, the compromise is so complete that the secret seeds burned into the hardware are leaked.
Once they are leaked, there is no going back for that secret seed - i.e. that physical CPU. And this attack is entirely offline, so Intel doesn't know which CPUs have had their seeds leaked.
In other words, every time there is a vulnerability like this, no affected CPU can ever be trusted again for attestation purposes. That is rather impractical, so I'd say that even if you trust Intel (unlikely if a government that can coerce Intel is part of your threat model), SGX provides a rather weak guarantee against well-resourced adversaries (such as the US government).
As far as I know, SGX has no 0-day exploits live today. sgx.fail was the largest collection of attacks, and they have all been resolved.
What this tells me, however, is that there are a lot of people still trying to attack SGX today, and that Intel has improved their response a lot.
The main issue with SGX was that its initially intended use, client-side DRM, was flawed: you can't expect normal people to update their BIOS (download the update, put it on a storage device, reboot, enter the BIOS, flash, etc.) every time an update is pushed, and adoption wasn't good enough for that anyway. It is, however, seeing a lot of server-side use in finance, the auto industry, and elsewhere.
We are also planning to support other TEEs in the future. SGX is the most well-known and battle-tested today, with a lot of support from software like openenclave, making it a good initial target.
If you do know of any 0-day exploit currently live against SGX, please give me more details, and if it's something not yet published, please contact us directly at security@vp.net.
What does the verifiable program do, though? With a VPN, what I'm concerned about is my traffic not being sniffed and analyzed. This code seems to have something to do with keys, but it's not clear how that helps...?
This is the server-side part of things. It receives encrypted traffic from your device (and other customers' devices) and routes it to the Internet.
This guarantees that your traffic isn't being linked to you and is mixed with other users' traffic in a way that makes it difficult for someone to attribute it to you, as long as you also protect yourself on the application side (clear cookies, no tracking browser extensions, etc.).
> This guarantees that your traffic isn't being linked to you and is mixed with other users' traffic in a way that makes it difficult for someone to attribute it to you
What would prevent you (or someone who has gained access to your infrastructure) from routing each connection to a unique instance of the server software and tracking what traffic goes in/out of each instance?
Hard disagree... not only is SGX deprecated, and IIRC removed from recent consumer processors due to security issues, it still can't prove that your requests are actually being served by the code they say they're running. The machine/keys you get back from their server could be from anywhere and might be completely unrelated.
> A pivot by Intel in 2021 resulted in the deprecation of SGX from the 11th and 12th generation Intel Core processors, but development continues on Intel Xeon for cloud and enterprise use.
Looks neat, but how can you tell that the fingerprint the server returns was actually generated by the enclave server, and isn't just a hardcoded response matching the expected signature of the published code (or that you're not talking to a compromised box that's simply proxying the signature from a legit, uncompromised SGX container)?
The enclave fingerprint is generated as part of the attestation.
The way this works is that on launch the enclave generates an ECDSA key (which only exists inside the enclave and is never stored or transmitted outside). It then passes the key to SGX for attestation. SGX generates a payload (the attestation) which contains the enclave's measured hash (MRENCLAVE) and other details about the CPU (microcode, BIOS, etc.). The whole thing carries a signature and a certificate issued by Intel to the CPU (the CPU and Intel have an exchange at system startup, and from time to time afterwards, in which Intel verifies the CPU's security, ensures everything is up to date, and gives the CPU a certificate).
Upon connection, we extract the attestation from the TLS certificate and verify it (does MRENCLAVE match, is it signed by Intel, is the certificate expired, etc.), and of course we also verify that the TLS certificate itself matches the attested public key.
Unless TLS itself is broken, or someone manages to exfiltrate the private key from the enclave (which should be impossible unless SGX is broken, in which case Intel is supposed to stop certifying the CPU), the connection is guaranteed to be with a host running the software in question.
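Put together, the connection-time check could look something like the sketch below. The hostname and measurement are placeholders, and check_certificate is the illustrative routine from the earlier sketch in this thread:

    # Sketch: fetch the server's self-signed certificate and run the
    # attestation checks on it. With no CA file given,
    # ssl.get_server_certificate() skips normal CA validation, which
    # is fine here: trust comes from the attestation, not from a CA.
    import ssl

    # MRENCLAVE from a local reproducible build (placeholder value).
    EXPECTED_MRENCLAVE = bytes.fromhex("00" * 32)

    pem = ssl.get_server_certificate(("vpn.example.net", 443)).encode()
    if check_certificate(pem, EXPECTED_MRENCLAVE):
        print("OK: server is running the published enclave code")
    else:
        print("MISMATCH: do not trust this endpoint")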
> the connection is guaranteed to be with a host running the software in question
A host... not necessarily the one actually serving your request at the moment, and it doesn't prove that it's the only machine touching that data. And AFAIK this only proves that the data in the enclave matches a key; it has nothing to do with "connections".
Let me clarify: it guarantees your connection is being served by the enclave itself. The TLS encryption keys are kept inside the enclave, so whatever data is exchanged with the host can only be read from within the secure encrypted enclave.
The attestation guarantees it's the enclave serving the request, and the encryption/decryption and NAT happen inside the enclave, so it's definitely private.
SGX's original use case, DVD DRM, has been abandoned because it turns out people don't keep their BIOS up to date and didn't all buy Intel's latest CPUs, making SGX as a client-side feature not very useful. It turns out to be a lot more useful server-side, and while it has had its share of issues (see sgx.fail), Intel addressed them (also see sgx.fail). It can prove your requests are actually being served by the code, due to the way the TLS attestation works.
Not an oversight: one of SGX's features is the MRENCLAVE measurement, a hash of the code running inside the enclave that can be compared with the value obtained at build time.
Even assuming SGX is to be trusted and the government would somehow not be able to peek inside (implausible), this "tech" will not solve legal problems... see Tornado Cash for an example. "No logs" is either impossible, illegal, or a honeypot.
> Build the code we published, get the fingerprint it produces, ask a VP.NET server for the fingerprint it reports, and compare the two. If they match, the server is running the exact code you inspected. No trust required.
Okay, maybe I'm being thick, but... when I get a response from your server, how do I know it's actually running inside the enclave, and not an ordinary process sending a hardcoded expected fingerprint?
Intel SGX comes with an attestation process aimed at exactly that. The attestation contains a number of details, such as the hardware configuration (CPU microcode version, BIOS, etc.) and the hash of the enclave code. At system startup, the CPU gets a certificate from Intel confirming the configuration is known to be safe, which the CPU then uses to certify that the enclave is indeed running code with a given fingerprint.
When the connection is established, we verify the whole certificate chain up to Intel, and we verify that the TLS connection itself is part of the attestation (the public key is attested).
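At the user level, the whole thing reduces to comparing two measurements. A sketch (the file path is a placeholder; how you obtain the build-time measurement depends on your SGX toolchain, and quote is the parsed attestation from the earlier sketch):

    # Sketch: compare the MRENCLAVE from a local reproducible build
    # with the one extracted from the server's attestation.
    import hmac

    with open("build/mrenclave.hex") as f:  # placeholder path
        local_mrenclave = bytes.fromhex(f.read().strip())

    # Constant-time comparison out of habit; both values are public.
    if hmac.compare_digest(local_mrenclave, quote.mrenclave):
        print("fingerprints match: no trust required")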
Yes, we accept crypto payments. They're a bit difficult to find since you need to look at the bottom of the page, but we have plans to improve that in the coming days.
As far as I'm aware, no. Any network protocol can be spoofed, with varying degrees of difficulty.
I would love to be wrong.
Again, I would love to know if I'm wrong.
The fact that no publicly disclosed threat actor has been identified says nothing.
Because of the cryptographic verifications, the communication cannot be spoofed.
And worse, it is harder for the American government to eavesdrop on US soil than it is outside America.
Of course, if a national spying apparatus is after you, regardless of the nation, pretty good chance jurisdiction doesn’t matter.
They could run one secure enclave running the legit version of the code, and one insecure machine running insecure software.
Then they put a load balancer in front of both.
When people ask for the attestation, the LB sends the traffic to the secure enclave, so you get the attestation back and all seems good.
When people send VPN traffic, the load balancer sends them to the insecure hardware with the insecure software.
So SGX proves nothing...
Old copy? Might need an update.
lol
SGX has been broken time and again.
SGX has 0-day exploits live in the wild as we speak.
So... a valiant attempt in terms of your product... but an utterly unsuitable foundation.
https://en.wikipedia.org/wiki/Software_Guard_Extensions
It's signed by Intel and thus guaranteed to come from the enclave!