This is kind of amazing. I'm suspicious that the site operator has absolutely no idea what they're doing.
> DoD Cyber Exchange site is undergoing a TSSL Certification renewal
I'm imagining someone searching around for a consulting or testing company that will help them get a personal TSSL Certification, whatever that is (a quick search suggests that it does not exist, as one would expect). And perhaps they have no idea what TLS is or how any modern WebPKI works, which is extra amazing, since cyber.mil is apparently a government PKI provider (see the top bar).
Of course, the DoD realized that their whole web certificate system was incompatible with ordinary browsers and they wrote a memo (which you have to click past the certificate error to read):
https://dl.dod.cyber.mil/wp-content/uploads/pki-pke/pdf/uncl...
saying that, through February 2024, unclassified DoD sites are permitted to use ordinary commercial CAs.
If the DoD were remotely competent at this sort of thing, they would (a) have CAA records (because their written policy does nothing whatsoever to tell the CA/B-compliant CAs of the world not to issue .mil certificates), (b) run their own intermediate CA that had a signature from a root CA (or was even a root CA itself), and (c) use automatically-renewed short-lived certificates for the actual websites.
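A CAA record is just an ordinary DNS resource record; a hypothetical zone-file sketch (the domain and issuer string here are illustrative, not actual DoD policy):

```
; Restrict certificate issuance for the zone to a single CA,
; and ask CAs to report any violating requests.
example.mil.  3600  IN  CAA  0 issue "identrust.com"
example.mil.  3600  IN  CAA  0 iodef "mailto:security@example.mil"
```

CA/B-compliant CAs must check these records before issuing, so a record like this would turn "written policy" into something the WebPKI actually enforces.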
cyber.mil currently uses IdenTrust, which claims to be DoD approved. They also, ahem, claim to support ACME:
> In support of the broader CA community, IdenTrust—through HID and the acquisition of ZeroSSL—actively contributes to the development and maintenance of major open-source ACME clients, including Caddy Server and ACME.sh. These efforts help promote accessibility, interoperability, and automation in certificate management.
Err... does that mean that they actually support ACME on their DoD-approved certificates or does that mean that they bought some companies that participate in the ACME ecosystem? (ACME is not amazing except in contrast to what came before and as an exercise in getting something reasonable deployed in a very stodgy ecosystem, but ACME plus a well-designed DNS-01 implementation plus CAA can be very secure.)
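For reference, DNS-01 issuance with acme.sh looks roughly like this (domain, DNS provider, and credential are hypothetical; the flags are acme.sh's documented interface):

```shell
# Issue via DNS-01: acme.sh uses the DNS provider's API (here Cloudflare)
# to publish the _acme-challenge TXT record, so no inbound HTTP is needed.
export CF_Token="..."   # hypothetical API credential
acme.sh --issue --dns dns_cf -d public.example.mil
# Renewals then run automatically from acme.sh's own cron entry.
```

Combined with a CAA record pinning issuance to that CA, this is about as secure as commercial WebPKI gets.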
The offending certificate is:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            40:01:95:b4:87:b3:a3:a9:12:e0:d7:21:f8:b3:91:61
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=IdenTrust, OU=TrustID Server, CN=TrustID Server CA O1
        Validity
            Not Before: Mar 20 17:09:07 2025 GMT
            Not After : Mar 20 17:08:07 2026 GMT
        Subject: C=US, ST=Maryland, L=Fort Meade, O=DEFENSE INFORMATION SYSTEMS AGENCY, CN=public.cyber.mil
At least the site uses TLS 1.3.
There are a few reasons DoD PKI is a shitshow which make it somewhat more understandable (although only somewhat).
First, the issues you describe affect only unclassified public-facing web services, not internal DoD internet services used for actual military operations. DoD has its own CA, the public keys for which are not installed on any OS by default, but anyone can find and install the certs from DISA easily enough. Meaning, the affected sites and services are almost entirely ones not used by members of the military for operational purposes. That approach works for internal DoD sites and services where you can expect people to jump through a couple extra hoops for security, but is not acceptable for the general public who aren't going to figure out how to install custom certs on their machine to deal with untrusted cert errors in their browser. That means most DoD web infra is built around their custom PKI, which makes it inappropriate for hosting public sites. Thus anyone operating a public DoD site is in a weird position where they deviate from DoD general standards but also aren't able to follow commercial standard best practices without getting approval for an exception like the one you linked to. Bureaucratically, that can be a real nightmare to navigate, even for experienced DoD website operators, because you are way off the happy path for DoD web security standards.
Second, many DoD sites need to support mTLS for CAC (DoD-issued smartcards) authentication. That requires the site to use the aforementioned non-standard DoD CA certs to validate the client cert from the CAC, which in turn requires that the server's TLS cert be issued by a CA in the same trust chain, which means the entire site will not work for anyone who hasn't jumped through the hoops to install the DoD CA certs. Meaning, any public-facing site has to be entirely segregated from the standard DoD PKI system. For now, that means using commercial certs, which in turn requires a vendor that meets DoD supply chain security requirements.
Third, most of these sites and services run on highly customized, isolated DoD networks that are physically isolated from the internet. There's NIPR (unclassified FOUO), SIPR (classified secret), and JWICS (classified top secret). NIPR can connect to the regular internet, but does so through a limited number of isolated nodes, and SIPR/JWICS are entirely isolated from the public internet. DoD cloud services are often not able to use standard commercial products as a result of the compatibility problems this isolation causes. That puts a heavy burden on the engineers working these problems, because they can't just use whatever standard commercial solutions exist.
Fourth, the DoD has only shifted away from traditional old-school on-prem Windows Server hosting for websites to cloud hosting over the past few years. That has required tons of upskilling and retraining for DoD SREs, which has not been happening consistently across the entire enterprise. It has also made it much harder to keep up with the standards in the private sector as support for on-prem has faded, while the assumptions about cloud environments built into many private-sector solutions don't hold true for DoD.
Fifth, even with the move to cloud services, the working conditions can be so extraordinarily burdensome and the DoD-specific restrictions so unusual, obscure, poorly documented, and difficult to debug that it dramatically slows down all software development. e.g., engineers may have to log into a jump box via a VDI to then use Jenkins to run a Groovy script to use Terraform to deploy containers to a highly customized version of AWS.
Ultimately, the sites this affects are ones which are lower priority for DoD because they are not operationally relevant, and setting up PKI that can easily service both their internal mTLS requirements and compatibility with commercial standards for public-facing sites and services is not totally straightforward. That said, it is an inexcusable shitshow. Having run CAC-authenticated websites, I can tell you it's insane how much dev time is wasted trying to deal with obscure CAC-related problems, which are extremely difficult to deal with for a variety of technical and bureaucratic reasons.
Good write up. Not to mention the other big HR constraints on DoD engineers: they almost always have to be a “US person.”
Anyone who gets a CAC working on a personal computer deals with this all too much. The root certs DoD uses are not part of the public trusted sources that commonly come installed in browsers.
lol I very nearly included a rant about that but decided it was too far off topic. Not being able to smoke weed may be more of an obstacle these days though.
> That requires the site to use the aforementioned non-standard DoD CA certs to validate the client cert from the CAC, which in turn requires that the server's TLS cert be issued by a CA in the same trust chain, which means the entire site will not work for anyone who hasn't jumped through the hoops to install the DoD CA certs. Meaning, any public-facing site has to be entirely segregated from the standard DoD PKI system. For now, that means using commercial certs, which in turn requires a vendor that meets DoD supply chain security requirements.
Is this actually all the way technically correct? As far as I know, there is no requirement that the trust chains for server certificates and client certificates are in any way related. It seems to me that it would be perfectly possible for the DoD to use its own entirely private client certificate infrastructure but to still have the server certificate use something resembling an ordinary root certificate.
This is not to say that this would actually be all that worthwhile.
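For what it's worth, this intuition matches how TLS stacks are actually configured: the server's own certificate chain and the trust anchors used to verify client certificates are independent settings. A minimal sketch with Python's stdlib `ssl` (the filenames are hypothetical placeholders):

```python
import ssl

def make_mtls_server_context(server_cert: str, server_key: str,
                             client_ca_bundle: str) -> ssl.SSLContext:
    """Build a server context where the server's identity comes from one
    chain (e.g. a commercial CA) while client certs are validated against
    a completely separate private root bundle (e.g. the DoD roots)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(server_cert, server_key)   # server identity chain
    ctx.verify_mode = ssl.CERT_REQUIRED            # demand a client cert
    ctx.load_verify_locations(client_ca_bundle)    # trust anchors for clients
    return ctx
```

Whether DoD policy, CAC middleware, or specific server products permit mixing the two chains is a separate question, but the protocol itself doesn't forbid it.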
> engineers may have to log into a jump box via a VDI to then use Jenkins to run a Groovy script to use Terraform to deploy containers to a highly customized version of AWS.
This hits too close to home. I'm sending you my therapist's bill for this month.
I think you underestimate the number of people who accidentally let their HTTPS certs expire. Instead of blaming the people running these systems for letting them expire, it would be more productive to improve the system to make this less likely to happen.
ACME [1] has been a thing for more than 10 years and has been a stable specification for 7 years. There were similar vendor-specific implementations that preceded it. The DoD has employed none of these solutions for their flagship infosec public web presence. If they were going to automate this then they surely would have done so by now. The reasons why are opaque but people who have experience working in this space might be able to make an educated guess.
Look, when I forget to renew the cert on my Jellyfin server, like 4 people suffer.
When the DoD forgets to renew the cert for their cybersecurity download website AND can't figure out what a TLS cert even is (calling it a "TSSL Certification"), this is an indicator that our military has absolutely zero understanding of the most basic cybersecurity concepts.
If you can't tell the difference between a hobbyist forgetting to renew their Let's Encrypt cert, vs. a trillion-dollar military not even knowing what a certificate is, maybe you should work for our military, because they can't tell the difference either.
> Users on civilian network can continue downloads through the Advance tab in the error message.
They are literally telling users to click through the browser errors about the bad cert. They don't mention that there is a very specific error they should be looking for (expired cert). This gives any MITMer the opportunity right now to replace downloaded executables with malware-laden ones using nothing more than a self-signed cert and a proxy. You can bet your boots China, NK, Iran, Russia are all having a good laugh. Biggest military in the world and they can't get a web server working.
Turns out "bank-grade security" is not something to strive towards. In the case of TLS certificates, most banks still believe they need EV certs, even though browsers stopped making any visual distinction for EV certificates around 2018-2019.
Apart from the fact that one dev, as a test, exploited a loophole to make a single sort-of-convincing EV cert (which could easily be fixed by a policy change), EV certs are still vastly harder to exploit or clone than almost any other certificate. The eventual solution will be an EV cert that isn't named an EV cert, so that the CA/B can protect their reputations after claiming they're a bad idea.
The fact that browsers stopped recognizing this is political, not based on any reality or sense. Everyone appeals to authority about the best way to do TLS, and the problem is that the authority is stupid.
As well as having properly documented (and tested) procedures and an appropriate level of staffing/staff availability (not overburdened by juggling too many tasks and projects) - AND... keeping staff over several period/activity cycles, so they have actual experience performing the ongoing maintenance activities required. Oh - and heck, even a master calendar of "events" which need to be acted on, with - ya'know - reminders and things...
Yeah - I have almost never seen any corporate or government environment actually take a "forward-thinking" approach to any of the above...
Anyone who thinks this is that trivial has never worked in enterprise IT.
Automated certificate renewal is maybe supported by 10% of services I operate where I work. And we're pretty modern. An organization with more legacy platforms is likely at "nothing supports automated renewal".
We are a decade or two out from 47 day expiry being a sane concept.
Can confirm. Have encountered many on-prem and lift-and-shift solutions with no automated means of updating certs. The worst offenders are usually 1) executables on Windows Server (version 2012, of course), 2) old, obscure, or very outdated database servers, and 3) custom hardware firewalls. They are the worst.
To make things easy they usually all use different cert formats as well, requiring you to have an arsenal of conversion scripts ready.
In this case, “custom” means firewalls made by pretty much any of the major vendors.
Cisco, Juniper, Fortinet and Palo Alto have a lot to answer for with their laziness. Cisco and Fortinet added support only recently. Palo and Juniper haven’t bothered at all.
Even plain IIS still doesn't support ACME on Windows Server 2025 without you grabbing some random scripts off the Internet written by people you don't know.
But yeah a lot of Windows server software uses inbuilt web servers with no ability to tweak or tamper beyond what the application exposes in its own settings panel.
Certificate expiration notifications are a checkbox in uptime-kuma, which is itself incredibly easy to install and configure. We're not talking a week, we're talking a matter of minutes to go from zero to receiving notifications 21 days in advance of certificate or domain expiration.
Expiries are a defence-in-depth measure that exist primarily for crypto hygiene, for example to protect against compromised keys. If the private key material is well protected, the risk is very low.
However, an org (particularly a .mil) not renewing its TLS certs screams of extreme incompetence (which is exactly what expiries are meant to protect you from.)
It's more or less the same as using expired in-person documents.
Intrinsically no, but at the same time it's generally easier to source an expired identity document. If it's long expired, the verification standards might be different. People don't really put as much effort into destroying or revoking expired documents. The info might no longer be accurate.
The only addition in the computer context is it trains users to blindly click through warnings. If this becomes a pattern they will eventually not realize it if the error becomes something more serious.
- TLS certificates do leak, not just due to worst-case bugs like Heartbleed
- revocation does not work well in practice
- affected operators aren't always aware if a certificate leaked
so by having expiration (and in recent years increasingly short validity durations) you reduce the consequences of a leak, potentially to nothing if the attacker only gets their hands on the cert after it expired
this also has the unintended consequence that a long-expired certificate leaking isn't seen as a security issue, nor will you revoke it (it's already invalid).
But if you visit a site with an expired certificate, you only know it had been valid in the past. You don't know if it was leaked after it became invalid or similar. I.e. you can't reasonably differentiate anymore between "forgot to renew" and a MITM attack. At which point it's worth pointing out that MITM attacks aren't just about reading secrets you send, but can also inject malicious JS. And browser sandbox vulnerabilities might be rare but do exist.
A more extreme case of this dynamic is OIDC/OAuth access tokens, which are far more prone to leak than certs, but in turn are only valid for a short time (max ~5 min) and because of that normally don't have a revocation system. (Things are different for the refresh token you use to get the access token, but the refresh token is only ever sent to the auth server, which makes handling it much easier.)
Revocation works great in theory, and in theory & practice particularly in DoD.
The problem is a ton of certificate authorities consciously chose not to produce validation data previously, created insecure CAs, chose not to cache validation data, had knee jerk reactions to potential exposures, and many industries chose not to invest in technical capability to make revocation data available, performant, resilient, failing-over, failing gracefully, etc.
MITM is now the default for half the enterprise security solutions, operating with cert-to-website "suspected good" whitelists, which makes new domains on HN nigh unreadable.
No, but it reflects poorly on the maintainer. Plus, any browser complaint contributes to error fatigue. Users shouldn't just ignore these, and we shouldn't encourage them to ignore them just because we fail at securing our websites.
There is a reason why certs have expiration dates. It's to control the damage after the site owner or someone else in the cert chain messes up. That doesn't mean expired certs are inherently not secure, only that you should still care about it and do your best to avoid using expired certs.
Certificate revocations are not required to be reported after the expiration date, so you can no longer reliably check if a certificate has been revoked (e.g., because its underlying key was exfiltrated or because it was misissued).
I think the argument would go that if people are clicking through certificate errors and you're in a position to MITM their traffic, you can just serve them a different certificate and they'll click through the error without noticing or understanding the specifics.
That could happen either way regardless of expiry. The only reason for an expiration date is to force site owners to cycle their certs at regular intervals to defeat the long time it takes to brute force a successful forgery.
Fair point, but I think the situation is a bit more complicated when a user "needs the site for work", or something urgent. You might have smart cautious users that feel like they have no choice but to proceed and click through the warnings since the site is most likely still legitimate
It's true that the expiration doesn't mean the encryption no longer works, but if the user is under a MITM attack and is presented by their browser with a warning that the certificate is invalid, then the encryption will still work but the encrypted communication will be happening with the wrong party.
I don't trust the average user to inspect the certificate and understand the reason for the browser's rejection.
Okay, but that’s not what was being asked. OP, someone who presumably understands the difference between a totally invalid cert and an expired one, was asking specifically whether clicking through the latter is dangerous.
"Visitors to the site are vulnerable to Man in the Middle (MitM) attacks, IF they click past the warning". I think it's true when there is a man in the middle.
It's entirely the second point, and not certificate expiration in and of itself, that lends itself to being MITMed. Firefox tells me what the problem is: expired, wrong name, etc. So it's not just saying "oh no, something is wrong." I can tell what is wrong before I choose to proceed.
If you're ignoring certificate warnings, then you'll ignore mismatching domain warnings.
Moreover, if your org's browser settings allow you to override the warnings, that's also pretty bad for anything other than a small subset of your team.
But, in terms of security, a wrong cert looks the same to most people as a wrong domain. So once people get used to the cert being wrong and click through anyway, it's easy to do the switcheroo with a half-arsed man in the middle.
But, it signifies that people are not monitoring the outside of the network. We have cert checks which alert if they are less than 5 days from expiry, as they should have been renewed and restarted automatically. If you're not doing that, what else are you halfarsing?
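A check like that is a few lines of stdlib Python (a sketch only; the 5-day threshold mirrors the comment above, and production monitoring would add retries and alert routing):

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> float:
    """Parse an openssl/getpeercert()-style notAfter string,
    e.g. 'Mar 20 17:08:07 2026 GMT', and return days remaining."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

def should_alert(host: str, port: int = 443, warn_days: float = 5.0) -> bool:
    """True if the served certificate expires within warn_days."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"]) < warn_days
```

Wire that into cron or your alerting of choice; the already-expired case fails the TLS handshake outright (`ssl.SSLCertVerificationError`), which is itself an alert.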
Not inherently, but it can introduce risk. Such as a bad actor using an old expired certificate it was able to acquire to play man-in-the-middle. But if that is happening you have bigger problems.
Everyone, including every professional network engineer, does this regularly. I've never seen a TLS error message that actually reflected a security issue, as opposed to a configuration problem.
I'm not actually sure how browsers validate expired certificates. I couldn't find out whether an invalid cert authority plus an invalid date would look different from the error for just an invalid date. An educated guess says so, but that's just a guess.
The biggest concern is you'd hope the DoD (/DoW) is on top of their stuff, especially the DISA. This is a sign they are not. This is something that should never happen.
But then, there's this message:
> DoD Cyber Exchange site is undergoing a TSSL Certification renewal resulting in download issues for some users. Users on civilian network can continue downloads through the Advance tab in the error message.
Uh oh!!! This is concerning because (1) "Ignore SSL errors" is something you should never be telling users to do and (2) this is extra concerning because whoever wrote this does not seem to have a grasp on the English language:
- "TSSL Certification renewal" should be TLS/SSL Certificate renewal. (Caveat: Defense is full of arcane internal acronyms and TSSL could just be one of them.)
- "Users on civilian network" should be "Users on civilian networks", or "Users on a civilian network".
- "Advance tab" should be "Advance button".
So, we have three glaring red flags. Expired certs, telling users to ignore cert warnings, and various spelling and grammar mistakes.
People are citing the short 47-day certificate renewal window, but that's not the problem here. It's not a case of administration transition either. This cert was issued 2025-Mar-20 and was valid for 1 year. But IdenTrust DoD certs can't be renewed after they expire, so that might be why this is so broken.
In the most generous interpretation, the once-responsible party was cut with the huge DOGE cuts back in May 2025, and this failure of web administration is just one visible sign of the internal disarray you'd expect with losing 10% of your workforce.
Well it means that you can MITM a user and they won't know the difference (an expired cert is an expired cert, whether it's self-signed or not, the user clicks through anyway). It also means nobody is doing the regular maintenance to rotate keys and do upgrades/patches/etc.
Inherently, not really. An expired, self-signed or even incorrect (as in, the wrong domain is listed) certificate can be used to secure a connection just as well as a perfectly valid certificate.
Rather, the purpose of all of these systems (in theory) is to verify that the certificate belongs to the correct entity, and not some third party that happens to impersonate the original. It's not just security, but also verification: how do I know that the server that responds to example.com controls the domain name example.com (and that someone else isn't just responding to it because they hijacked my DNS.)
The expiration date mainly exists to protect against 2 kinds of attacks: the first is that, if it didn't exist, if you somehow obtained a valid certificate for example.com, it'd just be valid forever. All I'd need to do is get a certificate for example.com at some point, sell the domain to another party and then I'd be able to impersonate the party that owns example.com forever. An expiration date limits the scope of that attack to however long the issued certificate was valid for (since I wouldn't be able to re-verify the certificate.)
The second is to reduce the value of a leaked certificate. If you assume that any certificate issued will leak at some point, regardless of how it's secured (because you don't know how it's stored), then the best thing you can do is make it so that the certificate has a limited lifespan. It's not a problem if a certificate from say, a month ago, leaks if the lifespan of the certificate was only 3 days.
Those are the on paper reasons to distrust expired certificates, but in practice the discussion is a bit more nuanced in ways you can't cleanly express in technical terms. In the case of a .mil domain (where the ways it can resolve are inherently limited because the entire TLD is owned by a single entity - the US military), it's mostly just really lazy and unprofessional. The US military has a budget of "yes"; they should be able to keep enough tech support around to renew their certificates both on time and to ensure that all their devices can handle cert rotations.
Similarly, within a network you fully control, the issues with a broken certificate setup mostly just come down to really annoying warnings rather than any actual insecurity; it's hard to argue that the device is being impersonated when it's literally sitting right across from you and you see the lights on it blink when you connect to it.
Most of the issues with bad certificate handling come into play only when you're dealing with an insecure network, where there's a ton of different parties that could plausibly resolve your request... like most of the internet. (The exception being specialty domains like .gov/.mil and other such TLDs that are owned by singular entities and as a result have secondary, non-certificate ways in which you can check if the right entity owns them, such as checking which entity the responding IP belongs to, since the US government literally owns IP ranges.)
Anything that relies on certificate validation might not be working because of it.
I didn't see it mentioned, so I'll add the Broken Windows Theory. Simply put, this is at the very least, indicative of serious failures in other areas.
Telling users on a cybersecurity website to click past certificate warnings is training them to do the exact thing every security-awareness program says never to do. DISA runs the security standards that every defense contractor has to comply with...
Inexcusable but should clarify that cyber.mil and public.cyber.mil are actually different things. Most people downloading from the site are not using public.cyber.mil, so maybe they care less? This is still one of those highly-visible things that is going to bring down the heat quickly, so it's just dumb to let it happen.
For some reason the warning icon is huge on my phone.
Someone please verify that the exclamation point inside of the warning icon has always been gold and that this website's design hasn't fallen victim to Trump's dragon-like gold hoarding obsession.
Not applicable in this case. This was a certificate issued March 20th 2025 and which expired March 20th 2026. Also concerning are the instructions written in broken English instructing visitors to ignore all SSL warnings.
Have you heard of automation? Cron? Certbot? You can schedule cert renewal and it happens automatically. It could be refreshed every 1 day, I don't care. The fact that it's so painful for you means you need to learn a bit more.
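For example, Debian's certbot package ships a renewal job along these lines (simplified sketch; exact paths and schedule vary by distro):

```
# /etc/cron.d/certbot: attempt renewal twice a day. certbot only replaces
# certs that are inside their renewal window, so this is cheap to run often.
0 */12 * * * root certbot -q renew
```

Once this is in place, renewal is no longer an event anyone has to remember.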
Which is yet another chore. And it doesn’t add any security. A certificate expired yesterday proves I am who I am just as much as it did yesterday. As long as the validity length is shorter than how long it would take somebody to work out the private key from the public key, it is fine.
No, they're not useless at all. The point of shortening certificate periods is that companies complain when they have to put customers on revocation lists, because their customers need ~2 years to update a certificate. If CRLs were useless, nobody would complain about being put on them. If you follow the revocation tickets in ca-compliance bugzilla, this is the norm—not the exception. Nobody wants to revoke certificates because it will break all of their customers. Shortening the validity period means that CAs and users are more prepared for revocation events.
... what are the revocation tickets about then? How is it even a question whether to put a cert on the CRL? Either the customer wants it revoked, or the key has been compromised (in which case the customer should also want it revoked ASAP, no?)
From my experience the biggest complaints/howlings are when the signing key is compromised; e.g., your cert is valid and fine, but the authority screwed up and so they had to revoke all certs signed with their key because that leaked.
Right. It's the same debate about how long authorization cookies or tokens should last. At one point in time--only one--authentication was performed in a provable enough manner that the certificate was issued. After that--it could be seconds, hours, days, years, or never--that assumption could become invalid.
Or that someone asked to renew it, one of their four bosses didn't sign off on the appropriate form, the only person who can take that form to whoever does the certs is on vacation, the person issuing certs needs all four of their bosses to sign it off, and one of those bosses has been DOGE-d and not yet replaced.
An expired Let's Encrypt cert on a Raspberry Pi at home smells of not paying attention... with governments, there are many, many points of failure.
The whole point of these shorter certificate durations is to force companies to put in automation that doesn't require 14 layers of paperwork. Some companies will be stubborn, and will thus be locked in an eternal cycle of renew->get paperwork started for renew. Most will adapt.
I am curious how long the approval process in some large corp or the military would be for either of those options...
Hand over our private keys to a third party or run this binary written by some volunteers in some basements who will not sign a support contract with us...
I've worked with large "enterprises" that refuse to use the easy-to-automate certificate services, including AWS Certificate Manager. They would rather continue to procure certificates through a third party, email around keys, etc. They somehow believe these archaic practices are more secure.
Isn't that why certificates expire, and the expiry window is getting shorter and shorter? To keep up with the length of time it takes someone to crack a private key?
No, it has nothing to do with the time to crack encryption. It's to protect against two things: organizations that still have manual processes in place (making them increasingly infeasible in order to require automatic renewal) and excessively large revocation lists (because you don't need to serve data on the revocation of a now-expired certificate).
It's also a "how much exposure do people have if the private key is compromised?"
Yes, it's to make it so that a dedicated effort to break the key sees it rotated before someone can impersonate it... it's also a question of how big the historical data window is that an attacker has i̶f̶ when someone cracks the key.
No. The sister comment gave the correct answer. It is because nobody checks revocation lists. I promise you there’s nobody out there who can factor a private key out of your certificate in 10, 40, 1000, or even 10,000 days.
I thought I remembered someone breaking one recently, but (unless I've found a different recent arxiv page) seems like it was done using keys that share a common prime factor. Oops!
On the one side all the users will need to prove their ID to access websites, and on the website side the site will have to ask permission to continue operating at ever increasing frequency.
The offending certificate is:
At least the site uses TLS 1.3.

First, the issues you describe affect only unclassified public-facing web services, not internal DoD services used for actual military operations. DoD has its own CA, the public keys for which are not installed on any OS by default, but anyone can find and install the certs from DISA easily enough. Meaning, the affected sites and services are almost entirely ones not used by members of the military for operational purposes. That approach works for internal DoD sites and services where you can expect people to jump through a couple of extra hoops for security, but is not acceptable for the general public, who aren't going to figure out how to install custom certs on their machines to deal with untrusted-cert errors in their browsers. That means most DoD web infra is built around their custom PKI, which makes it inappropriate for hosting public sites. Thus anyone operating a public DoD site is in a weird position: they deviate from general DoD standards but also aren't able to follow commercial best practices without getting approval for an exception like the one you linked to. Bureaucratically, that can be a real nightmare to navigate, even for experienced DoD website operators, because you are way off the happy path for DoD web-security standards.
Second, many DoD sites need to support mTLS for CAC (DoD-issued smartcards) authentication. That requires the site to use the aforementioned non-standard DoD CA certs to validate the client cert from the CAC, which in turn requires that the server's TLS cert be issued by a CA in the same trust chain, which means the entire site will not work for anyone who hasn't jumped through the hoops to install the DoD CA certs. Meaning, any public-facing site has to be entirely segregated from the standard DoD PKI system. For now, that means using commercial certs, which in turn requires a vendor that meets DoD supply chain security requirements.
Third, most of these sites and services run on highly customized, isolated DoD networks that are physically isolated from the internet. There's NIPR (unclassified FOUO), SIPR (classified secret), and JWICS (classified top secret). NIPR can connect to the regular internet, but does so through a limited number of isolated nodes, and SIPR/JWICS are entirely isolated from the public internet. DoD cloud services are often not able to use standard commercial products as a result of the compatibility problems this isolation causes. That puts a heavy burden on the engineers working these problems, because they can't just use whatever standard commercial solutions exist.
Fourth, the DoD has only shifted away from traditional old-school on-prem Windows Server hosting for websites to cloud hosting over the past few years. That has required tons of upskilling and retraining for DoD SREs, which has not been happening consistently across the entire enterprise. It has also made it much harder to keep up with the standards in the private sector as support for on-prem has faded, while the assumptions about cloud environments built into many private-sector solutions don't hold true for DoD.
Fifth, even with the move to cloud services, the working conditions can be so extraordinarily burdensome and the DoD-specific restrictions so unusual, obscure, poorly documented, and difficult to debug that it dramatically slows down all software development. e.g., engineers may have to log into a jump box via a VDI to then use Jenkins to run a Groovy script to use Terraform to deploy containers to a highly customized version of AWS.
Ultimately, the sites this affects are ones which are lower priority for DoD because they are not operationally relevant, and setting up PKI that can easily service both their internal mTLS requirements and compatibility with commercial standards for public-facing sites and services is not totally straightforward. That said, it is an inexcusable shitshow. Having run CAC-authenticated websites, I can tell you it's insane how much dev time is wasted trying to deal with obscure CAC-related problems, which are extremely difficult to deal with for a variety of technical and bureaucratic reasons.
Anyone who gets a CAC working on a personal computer deals with this all too much. The root certs DoD uses are not part of the public trusted sources that commonly come installed in browsers.
Is this actually all the way technically correct? As far as I know, there is no requirement that the trust chains for server certificates and client certificates are in any way related. It seems to me that it would be perfectly possible for the DoD to use its own entirely private client certificate infrastructure but to still have the server certificate use something resembling an ordinary root certificate.
This is not to say that this would actually be all that worthwhile.
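For what it's worth, the separation described above can be seen directly in Python's stdlib `ssl` API: the server's own certificate chain and the trust store used to verify *client* certificates are independent knobs. A minimal sketch (the file paths in the comments are hypothetical placeholders, not real DoD artifacts):

```python
import ssl

# Server-side TLS context. The server's own certificate chain is one knob...
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# server_ctx.load_cert_chain("commercial-ca-issued.pem", "server.key")  # hypothetical paths

# ...while the trust store used to verify client certificates (the CAC case)
# is a completely separate knob, which could point at a private CA bundle:
server_ctx.verify_mode = ssl.CERT_REQUIRED
# server_ctx.load_verify_locations("private-root-bundle.pem")  # hypothetical path

assert server_ctx.verify_mode == ssl.CERT_REQUIRED
```

Nothing in the API ties the two together: `load_cert_chain` could point at a commercial-CA-issued cert while `load_verify_locations` points at a private bundle, which is exactly the split the comment above suggests is possible.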
This hits too close to home. I'm sending you my therapist's bill for this month.
That's probably one of the things they were forced to contract out.
[1] https://en.wikipedia.org/wiki/Automatic_Certificate_Manageme...
When the DoD forgets to renew the cert for their cybersecurity download website AND can't figure out what a TLS cert even is (calling it a "TSSL Certification"), this is an indicator that our military has absolutely zero understanding of the most basic cybersecurity concepts.
If you can't tell the difference between a hobbyist forgetting to renew their Let's Encrypt cert, vs. a trillion-dollar military not even knowing what a certificate is, maybe you should work for our military, because they can't tell the difference either.
They are literally telling users to click through the browser errors about the bad cert. They don't mention that there is a very specific error they should be looking for (expired cert). This gives any MITMer the opportunity right now to replace downloaded executables with malware-laden ones using nothing more than a self-signed cert and a proxy. You can bet your boots China, NK, Iran, Russia are all having a good laugh. Biggest military in the world and they can't get a web server working.
It blows me away that a bank can't afford to do for themselves what Certbot and Let's Encrypt do for me, for free.
Like, pay a guy for a whole week to automate this and it will save you the 12 hours of losses every time your cert expires.
The fact the browsers stopped recognizing this is political, not based on any reality of sense. Everyone appeals to authority what the best way to do TLS is, and the problem is the authority is stupid.
My suspicion is that corporations in general don't handle tasks well that need to follow an exact timeline and can't be postponed by a week or two.
Yeah - I have almost never seen any corporate or government environment actually take a "forward-thinking" approach to any of the above...
Automated certificate renewal is maybe supported by 10% of services I operate where I work. And we're pretty modern. An organization with more legacy platforms is likely at "nothing supports automated renewal".
We are a decade or two out from 47 day expiry being a sane concept.
To make things easy they usually all use different cert formats as well, requiring you to have an arsenal of conversion scripts ready.
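For the PEM/DER half of that arsenal at least (PKCS#12 and friends need more tooling), no external scripts are needed; Python's stdlib can do the re-wrapping. A minimal sketch (note these helpers just re-encode bytes, they don't validate the certificate):

```python
import ssl

def der_to_pem(der_bytes: bytes) -> str:
    # Wraps raw DER bytes in base64 with BEGIN/END CERTIFICATE markers.
    return ssl.DER_cert_to_PEM_cert(der_bytes)

def pem_to_der(pem_text: str) -> bytes:
    # Inverse: strips the markers and base64-decodes back to DER.
    return ssl.PEM_cert_to_DER_cert(pem_text)

# Round-trip on arbitrary bytes works because neither function parses ASN.1:
blob = bytes(range(16))
assert pem_to_der(der_to_pem(blob)) == blob
```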
In this case, “custom” means firewalls made by pretty much any of the major vendors.
Cisco, Juniper, Fortinet and Palo Alto have a lot to answer for with their laziness. Cisco and Fortinet added support only recently. Palo and Juniper haven’t bothered at all.
But yeah a lot of Windows server software uses inbuilt web servers with no ability to tweak or tamper beyond what the application exposes in its own settings panel.
I work in a multinational nightmare corp, we still have a mission critical Win95 machine.
However, an org (particularly a .mil) not renewing its TLS certs screams of extreme incompetence (which is exactly what expiries are meant to protect you from).
Not unheard of with the military
Intrinsically no, but at the same time it's generally easier to source an expired identity document. If it's long expired, the verification standards might be different. People don't really put as much effort into destroying or revoking expired documents. The info might no longer be accurate.
The only addition in the computer context is that it trains users to blindly click through warnings. If this becomes a pattern, they eventually won't notice when the error is something more serious.
- TLS certificates do leak, not just due to worst-case bugs like Heartbleed
- revocation does not work well in practice
- affected operators aren't always aware if a certificate leaked
so by having expiration (and in recent years increasingly short validity durations) you reduce the consequences of a leak, potentially to nothing if the attacker only gets their hands on the cert after it expired.
this also has the unintended consequence that a long-expired certificate leaking isn't seen as a security issue, nor will you revoke it (it's already invalid).
But if you visit a site with an expired certificate you have the problem that you only know it had been valid in the past. You don't know if it was leaked after it became invalid or similar. I.e., you can't reasonably differentiate anymore between "forgot to renew" and a MITM attack. At which point it's worth pointing out that MITM attacks aren't just about reading secrets you send, but can also inject malicious JS. And browser sandbox vulnerabilities might be rare but do exist.
A more extreme case of this dynamic is OIDC/OAuth access tokens. These are far more prone to leak than certs, but in turn are only valid for a short time (often max 5 min) and due to that normally don't have a revocation system. (Things are different for the refresh token you use to get the access token, but the refresh token is also only ever sent to the auth server, which makes handling it way easier.)
The problem is a ton of certificate authorities consciously chose not to produce validation data previously, created insecure CAs, chose not to cache validation data, had knee jerk reactions to potential exposures, and many industries chose not to invest in technical capability to make revocation data available, performant, resilient, failing-over, failing gracefully, etc.
MITM is now the default for half the enterprise security solutions operating with cert-to-website "suspected good" whitelists, which makes new domains on HN nigh unreadable
Please reflect on the site guidelines. https://news.ycombinator.com/newsguidelines.html
I don't trust the average user to inspect the certificate and understand the reason for the browser's rejection.
Telling people not to worry about expired cert warnings makes them vulnerable to a variety of attacks.
Moreover, if your org's browser settings allow you to override the warnings, that's also pretty bad for anything other than a small subset of your team.
but, in terms of security, a wrong cert looks the same to most people as a wrong domain. So once people get used to the cert being wrong and click through anyway, it's easy to switcheroo and do a half-arsed man in the middle.
But it signifies that people are not monitoring the outside of the network. We have cert checks which alert if a cert is less than 5 days from expiry, since it should have been renewed and the service restarted automatically before then. If you're not doing that, what else are you half-arsing?
The whole thing is silly.
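A check along those lines can be sketched with Python's stdlib `ssl` module (the 5-day threshold mirrors the alerting described above; the threshold and any hostname passed in are illustrative):

```python
import ssl
import socket
from datetime import datetime, timezone

ALERT_DAYS = 5  # same threshold as the alerting described above

def days_until_expiry(not_after: str) -> float:
    # `not_after` uses the format ssl.getpeercert() returns,
    # e.g. "Mar 20 12:00:00 2026 GMT".
    expiry = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after),
                                    tz=timezone.utc)
    return (expiry - datetime.now(timezone.utc)).total_seconds() / 86400

def should_alert(host: str, port: int = 443) -> bool:
    # True if the cert is within ALERT_DAYS of expiry. Note: the handshake
    # itself fails for an already-expired cert, so a real monitor would also
    # catch ssl.SSLCertVerificationError and alert on that.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return days_until_expiry(tls.getpeercert()["notAfter"]) < ALERT_DAYS

# should_alert("cyber.mil")  # network call; run from a cron job / monitor
```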
The biggest concern is you'd hope the DoD (/DoW) is on top of their stuff, especially the DISA. This is a sign they are not. This is something that should never happen.
But then, there's this message:
> DoD Cyber Exchange site is undergoing a TSSL Certification renewal resulting in download issues for some users. Users on civilian network can continue downloads through the Advance tab in the error message.
Uh oh!!! This is concerning because (1) "ignore SSL errors" is something you should never be telling users to do and (2) whoever wrote this does not seem to have a grasp of the English language:
- "TSSL Certification renewal" should be TLS/SSL Certificate renewal. (Caveat: Defense is full of arcane internal acronyms and TSSL could just be one of them.)
- "Users on civilian network" should be "Users on civilian networks", or "Users on a civilian network".
- "Advance tab" should be "Advance button".
So, we have three glaring red flags: expired certs, telling users to ignore cert warnings, and various spelling and grammar mistakes.
People are citing the short 40-day certificate renewal window, but that's not the problem here. It's not a case of administration transition either. This cert was issued 2025-Mar-20 and was valid for 1 year. But IdenTrust DoD certs can't be renewed after they expire, so that might be why this is so broken.
In the most generous interpretation, the once-responsible party was cut with the huge DOGE cuts back in May 2025, and this failure of web administration is just one visible sign of the internal disarray you'd expect with losing 10% of your workforce.
Rather, the purpose of all of these systems (in theory) is to verify that the certificate belongs to the correct entity, and not some third party that happens to impersonate the original. It's not just security, but also verification: how do I know that the server that responds to example.com controls the domain name example.com (and that someone else isn't just responding to it because they hijacked my DNS.)
The expiration date mainly exists to protect against 2 kinds of attacks: the first is that, if it didn't exist, if you somehow obtained a valid certificate for example.com, it'd just be valid forever. All I'd need to do is get a certificate for example.com at some point, sell the domain to another party and then I'd be able to impersonate the party that owns example.com forever. An expiration date limits the scope of that attack to however long the issued certificate was valid for (since I wouldn't be able to re-verify the certificate.)
The second is to reduce the value of a leaked certificate. If you assume that any certificate issued will leak at some point, regardless of how it's secured (because you don't know how it's stored), then the best thing you can do is make it so that the certificate has a limited lifespan. It's not a problem if a certificate from say, a month ago, leaks if the lifespan of the certificate was only 3 days.
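To put a rough number on that: assuming the leak happens at a uniformly random point in the cert's lifetime (a simplifying assumption), the expected remaining validity — the usable impersonation window — is half the lifetime:

```python
# Expected impersonation window for a cert leaked at a uniformly random
# time during its lifetime: lifetime / 2 (simplifying assumption).
for lifetime_days in (365, 90, 3):
    window = lifetime_days / 2
    print(f"{lifetime_days:>3}-day cert -> ~{window:g} days of remaining validity")
```

So moving from year-long to 3-day certs shrinks the expected window from roughly half a year to about a day and a half, before revocation is even considered.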
Those are the on paper reasons to distrust expired certificates, but in practice the discussion is a bit more nuanced in ways you can't cleanly express in technical terms. In the case of a .mil domain (where the ways it can resolve are inherently limited because the entire TLD is owned by a single entity - the US military), it's mostly just really lazy and unprofessional. The US military has a budget of "yes"; they should be able to keep enough tech support around to renew their certificates both on time and to ensure that all their devices can handle cert rotations.
Similarly, within a network you fully control, the issues with a broken certificate setup mostly just come down to really annoying warnings rather than any actual insecurity; it's hard to argue that the device is being impersonated when it's literally sitting right across from you and you see the lights on it blink when you connect to it.
Most of the issues with bad certificate handling come into play only when you're dealing with an insecure network, where there's a ton of different parties that could plausibly resolve your request... like most of the internet. (The exception being specialty domains like .gov/.mil and other such TLDs that are owned by singular entities and as a result have secondary, non-certificate ways in which you can check if the right entity owns them, such as checking which entity the responding IP belongs to, since the US government literally owns IP ranges.)
I didn't see it mentioned, so I'll add the Broken Windows Theory. Simply put, this is at the very least, indicative of serious failures in other areas.
"TSSL" renewal does not cause downtime... if it's actually done, of course.
Good stuff.
…or were you referring to the piss-poor English used? ^_^
Someone please verify that the exclamation point inside of the warning icon has always been gold and that this website's design hasn't fallen victim to Trump's dragon-like gold hoarding obsession.
Is there more..?
Checked on Chrome too, I see nothing.
iOS Chrome
I captured the full page, you can view it here: https://wormhole.app/MbljK6#qfysvKJOQh1whLcMz9JXxw
https://wormhole.app/9Xv0p0#Hsq0fhLpWsr8ndJDktt2YQ
You see the little "Red hat Enterprise" at the bottom? That's the whole scrollable area. The rest is fixed and stays at the top.
They still messed up the CSS, though, because the downloads table extends beyond the mobile viewport at the bottom and to the right.
Mistakes happen, some automation failed and the certs did not renew on time, whatever. Does not inspire confidence but we all know it happens.
But then to just instruct users to click through the warning is very poor judgement on top of poor execution.
The certificate they failed to renew was issued 2025-Mar-20th, and expired 2026-Mar-20th. That is a 365 day cert.
The maximum length for a new cert is now 200 days, with the 47 day window coming in three years: https://www.digicert.com/blog/tls-certificate-lifetimes-will...
can you elaborate on this a bit? thank you!
E.g., collateral damage.
And a short expiration time absolutely increases security by reducing attack surface.
An expired Let's Encrypt cert on a Raspberry Pi at home smells of not paying attention... with governments, there are many, many points of failure.
use cloudflare, never think about it.
or
use certbot, never think about it.
Hand over our private keys to a third party or run this binary written by some volunteers in some basements who will not sign a support contract with us...
The whole point was to force automation, and if corps want to be stubborn that's no skin off my back; the shorter durations are coming regardless.
Fwiw: https://arxiv.org/abs/2512.22720
That is the future we have walked into.