Normally I wouldn't link to meta discussion but this was such a weird borderline case that I spent over an hour trying to figure it out. Maybe that makes it interesting.
Imo it's a brand new thing, so it deserves this technical analysis. The follow-up also deserves its own post because of both its importance and its political/security nature (a government app attacked).
It would be as if Google announced Gmail, there was a technical analysis, and then it was hacked the same day. I would hope there would be a post for that.
Short version: it's not possible to have separate discussions in the way you describe. They would just get totally blended.
I like your Gmail analogy but I don't think it applies here. The "technical analysis" article is driven by the same political/security concerns as the "hacked" update.
FWIW, I never clicked into this when I originally saw it because I'm not that interested in a "technical analysis", but gained interest when the other title said that the app was hacked. To me, that's worth discussing, but here that lede is a bit buried. And I now only know about it because a friend sent me the link.
I do feel there's a pattern of me reading some interesting tech news, thinking "wait, why didn't I see this discussed on HN?", then searching for it and finding a buried/flagged HN discussion because it's somewhat tied to politics (what isn't?).
Still trying to grasp the idea of archiving messages from an E2E encrypted communication system into storage that entirely breaks the purpose of using something like Signal.
It’s like cashing in on the trust in the Signal protocol and app while breaking its security model so that someone else can search through all the messages.
OK, say you're a bank. The SEC says you need to keep archives of every discussion your traders have with anyone at any time (I'm simplifying things, but you get the point). You keep getting massive fines because traders were whatsapping about deals.
So now you've got several options - you can use MS Teams, which of course offers archival, compliance monitoring etc. But that means trusting MSFT, and making sure your traders only use Teams and nothing else. You can use a dedicated application for the financial industry, like Symphony or ICE Chat or Bloomberg, but they're clunkier than B2C apps.
And then the Smarsh (owners of Telemessage) salesman calls you, and says "your users can keep using the apps they love - WhatsApp, Signal - but we make it compliant". And everyone loves it (as long as no-one in your Security or Legal teams are looking too hard at the implications of distributing a cracked version of WhatsApp through your MDM...)
That is more than overly optimistic given how slow the pace of any technical innovation in finance is. The recent and not so recent issues with Citi are a good example of that.
It definitely doesn't resolve the trust issue! I would trust MSFT a million times more than these cowboys. What it does give you is peace with your traders (who can be real divas..) - they can keep using "WhatsApp" and "Signal" and you can monitor everything
ok, this absolutely reminds me of using indian whatsapp mods years ago. stickers, more features, local and portable backups... wouldn't try that as a member of the government though
Huh? If the goal is compliance, you wouldn't use something that's worse for compliance - which is why the Legal and Security wouldn't like it. If it helped with compliance, they'd love it! So the reason can't be compliance.
You can never control what I do on my device with the messages I receive: I can take screenshots, or, if the app prevents that, take a picture of the screen.
The goal of Signal is trusted end-to-end encrypted communication. Device/message security on either end is not in scope for Signal's threat model.
The trust level required with Signal is, "do I trust the people in this chat not to share the specific communications I am sending to them with some other party whom I do not want to have a copy".
There are many many situations where this level of trust applies that "trust" in the general sense does not apply. It is a useful property.
And if you don't have that level of trust, don't put it in writing.
TM SGNL changes the trust required to, "do I also trust this 3rd party not to share the contents of any of my communications, possibly inadvertently due to poor security practices".
This is a categorical and demonstrably material difference in security model. I do not understand why so many are claiming it is not.
>TM SGNL changes the trust required to, "do I also trust this 3rd party not to share the contents of any of my communications, possibly inadvertently due to poor security practices".
That's the same level of trust, really. Signal provides a guarantee that the message bearer (i.e. Signal) can't see the contents, but end users may do whatever.
You can't really assume that the counterparty's device isn't rooted by their company, or that they aren't themselves required by law to provide written transcripts to an archive at the end of each day. In fact, when your counterparty happens to be a US government official, it's publicly known and mandated by law that they do so.
The people who assume they are talking with a government official and expect records not to be kept are probably doing something (borderline) illegal, like discussing treason or bribes.
No, this is not a "nothing to hide argument", because those people aren't sending dickpics in their private capacity.
> This is a categorical and demonstrably material difference in security model. I do not understand why so many are claiming it is not.
Because all it takes is one user to decide they trust the third party.
Right now you actually have to do more than trust everyone, you have to trust everyone they trust with their chat history. Which already can include this sort of third party.
Any client-side limitations are not part of the security model because you don't control other people's devices. Even with an unmodified app, they're trivially bypassed using a rooted/jailbroken device.
Not part of Signal's security model, but trusting people in that chat very much can and should be part of the user's security model. If you don't trust them, why are they in the chat in the first place?
It's not a person in the chat, it's an account. The account is usually controlled by the person associated with it, but you can't assume that it's always controlled by that person.
Well, the ex-wife in question can be trusted to receive it a-okay and screenshot the messages to send to her lawyer and the cops too, depending on the contents. So can US government officials. Now we just know how exactly they do it.
There are compliance reasons where you want the communications encrypted in flight, but need them retained at rest for compliance reasons. Federal record keeping laws would otherwise prohibit the use of a service like Signal. I'm honestly impressed that the people involved actually took the extra effort for compliance when nothing else they did was above board...
Makes sense. But it's still debatable whether the compliance requirements are acting against the security model, or whether there are bigger concerns here than just secure communication.
One of the most popular “e2ee” communication systems, iMessage, does exactly this each night when the iMessage user’s phone backs up its endpoint keys or its iMessage history to Apple in a non-e2ee fashion.
This allows Apple (and the US intelligence community, including FBI/DHS) to surveil approximately 100% of all non-China iMessages in close to realtime (in the usual case where it’s set to backup cross-device iMessage sync keys).
(China, cleverly, requires Apple to not only store all the Chinese iCloud data in China, but also requires that it happen on machines owned and operated by a joint venture with a Chinese-government-controlled entity, keeping them from having to negotiate continued access to the data the way the FBI did.)
Yet Apple can still legitimately claim that iMessage is e2ee, even though the plaintext is being backed up in a way that is readable to them. It’s a backdoor by another name.
Everyone wins: Apple gets to say E2EE, the state gets to surveil the texts of everyone in the whole country without a warrant thanks to FISA.
I suppose if both you and the recipient have cloud backups disabled, then Apple can no longer view your messages.
But outside of that scenario, is there any advantage to iMessage using e2ee instead of just regular TLS?
Edit: Apparently it's up to you whether you want your iCloud backups to use e2ee. There's an account setting: https://support.apple.com/en-us/102651. Standard protection is a sensible default for regular users who aren't tech-savvy, since with e2ee they're at risk of losing all their iCloud data if they lose their key.
Are there any stats as to the percentage of iPhone users that enable Advanced Data Protection? Defaults matter a lot, and I wouldn't be surprised if that number is (well) below 10%.
If you are the only person out of all the people you correspond with who has ADP enabled, then everyone you correspond with is uploading the plaintext of your messages to Apple.
You have to remember that there are something like a billion+ iOS users out there. 100 million people have not written down their 27 character alphanumeric account recovery key.
Correct, but nobody turns it on because it’s opt in, and even if you turn it on, 100% of your iMessages will still be escrowed in a form readable to Apple due to the fact that the other ends of your iMessage conversations won’t have ADP enabled because it’s off by default.
Again, Apple gets to say “we have e2ee, any user who wants it can turn it on” and the FBI gets to read 100% of the texts in the country unimpeded.
If Apple really wanted to promote privacy, they’d have deployed the so-called “trust circle” system they designed and implemented which allowed a quorum of trusted contacts to use their own keys to allow you to recover your account e2ee keys without Apple being able to access it, rolled that out, and then slowly migrated their entire user base over to e2ee backups.
They have not, and they will not, because that will compromise the surveillance backdoor, and get them regulated upon, or worse. The current administration has already shown that they are willing to impose insanely steep tariffs on the iPhone.
You can’t fight city hall, you don’t need a weatherman to know which way the wind blows, etc. The US intelligence community has a heart attack gun. Tim Apple does not.
Separately it is an interesting aside that Apple’s 1A rights are being violated here by the presumptive retaliation should they publish such a migration feature (software code being protected speech).
TBF, governments trying to outlaw some kind of privacy doesn't necessarily mean it's a current impediment to them. They can be planning ahead, securing their position, or just trying to move the window of what is considered acceptable.
The same applies to WhatsApp.
Messages backups are unencrypted by default and even the whole iPhone backup includes the unencrypted chat history of WhatsApp by default.
One reason why it was a big deal for the UK to have iCloud’s E2EE backup disabled.
Maybe someone wanted to satisfy the letter of the law but also had to please the bros. The result is a hack of a secure program that adds conversation archiving.
The bigger story is the follow up that shows someone already hacked telemessage because the app seems to be vulnerable to several exploits (and transmits data in the clear apparently).
White House communications director previously revealed (after “Signalgate”) that Signal was an approved and whitelisted app for gov’t officials to have on work phones and even discuss top-secret matters on. But I haven’t heard that TeleMessage was approved (and I’d have serious questions if it were given the foreign intelligence factor). Anyone know if there is a clear answer to whether it’s been approved?
It was incontrovertibly approved as it is only installable via MDM.
A likely explanation is that the communications director (or the people informing her) wouldn’t know to distinguish between Signal the app, and a Signal compatible app that is nearly indistinguishable from Signal. A lot like Kleenex is a common term for tissue paper regardless of brand.
When the leak was first revealed, there was loud speculation about the legality of government chat messages being set to auto-delete. This additional revelation, about the use of TeleMessage, shows that someone with a security background has actually thought about these things. It makes perfect security sense to archive messages somewhere secure, off phone, for record keeping compliance while ensuring that relatively vulnerable phones don’t retain messages for very long. It’s also an easy explanation for why such an app was created in the first place. There is an obvious market for it.
> It was incontrovertibly approved as it is only installable via MDM.
Only if this is his standard govt-issued phone. It's also been shown they are using their own personal phones. They could easily be using unapproved phones that some random DOGE'er bought and gave them with an MDM setup, without any real oversight.
This is currently my bet. This looks like something I would set up; state actors are not on my threat list. But I’m usually being paid to protect the employer, not the employee.
> This additional revelation, about the use of TeleMessage, shows that someone with a security background has actually thought about these things.
We only have evidence they used TeleMessage after the scandal. When the same guy let the press take a photo of his messages with Vance, Rubio, Gabbard and others.
If DOGE can storm into government offices and get root access to sensitive systems without proper procedure, couldn't SECDEF and co. strong-arm their way past the IT worker managing the MDM?
According to the new 404 Media article [0] about the app's archive server actually being hacked, TeleMessage does have contracts with several governmental agencies. Still not a direct answer to the question, I know, but it tilts the answer overwhelmingly towards "yes."
This is so frightening. I worked in corporate security, and that was occasionally a leaking ship, but this wouldn’t even fly with our engineers even if we wanted their message history. This is negligence.
On a more meta note, I wonder who even works at companies founded on ideas that are just... bad. On average, I expect good engineers to push back on such business requirements and also have better job mobility so they can leave and work elsewhere. The researcher found the vulnerabilities "in less than 30 minutes" so it seems there's some lack of competence here.
Unfortunately, misguided business requirements like this won't simply disappear and I get that those can be niche offerings that attract juicy contracts.
Casinos, scams (both of these Web3 as well as traditional), game hack developers, ransomware and database hackers. Adtech, which thousands of HNers work in (anyone at Google). Temu, Shein, gacha/lootbox games, dopamine drug dealers (Meta, Bytedance). NSO group, spyware. Policeware, Clearview, surveillance tech. You could name defense as well, but I find that more ambiguous.
I wouldn't be surprised if at least 25% of HN has worked for such companies for at least 2 years of their career.
The reality is that it's a dog-eat-dog world out there. I know people who worked in adtech. Yeah, they thought it sucked too and was boring, stupid work compared to doing something cool. But it paid the bills, and interesting work is hard to land even without having to pivot into it mid-career.
People generally need jobs, and some of these jobs aren't so good. Not everyone is talented enough to work at the next hot startup building a frontend to ChatGPT.
The correct answer is that no one outside US Government IT knows for sure what is or isn't approved per their own rules. Every article (and the comments therein) is just speculation and people trying to confirm their own biases, desperately looking for something to blame someone for, to produce more rage-bait and thus feed more ad clicks.
Every single article is written with the presumption that there are no actual IT people in the White House, that someone wheeled in a Starlink dish on a dessert cart in the yard which is somehow running the entire government. It's silly and ridiculous.
> The correct answer is no one outside US Government IT knows for sure what is or isn't approved per their own rules
Veterans Affairs actually publishes a list of approved software as part of their Technical Reference Model: https://www.oit.va.gov/services/trm/ (don’t know how complete it is)
But I’m not aware of other agencies doing this. I suppose that VA, given the nature of what they do, likely feels that there is less risk in publicising this information
There’s also the FedRAMP program for centralized review of cloud services - fedramp.gov - I haven’t looked to see if Telemessage is listed as approved but I see some references to FedRAMP and Telemessage online suggesting that it may be
Another source of info is SAM.gov - https://sam.gov/opp/ab5e8a486e074d73bfe09b383ba819ab/view (that’s for NIH) - if there is an agency paying for it, you can assume they’ve approved it for use (or are in the process of doing so) even if they haven’t otherwise publicly said they are. But, not all contracts are public, so just because you can’t find it on SAM.gov doesn’t mean it doesn’t exist
>that someone wheeled in a Starlink dish on a dessert cart in the yard
That situation was ridiculous: to score the marketing points, and after fighting with White House IT, the Starlink ended up installed at a remote location with much the same points of failure as their existing fibre service.
A few decades ago, the Republican party had one foot in the anti-intellectual camp, but only one.
They were the party of young-earth creationists, religious pro-lifers, climate-deniers and gun-lovers - but also of educated fiscally conservative folks. The party would welcome economics professors and leaders of medium-sized businesses, promising no radical changes, no big increases in spending or regulation, and a generally pro-market/pro-business stance.
The genius of Trump was in realising the educated fiscally conservative folk were driving 95% of the republican policy agenda but only delivering 10% of the votes. The average Republican voter loves the idea of disbanding the IRS and replacing all taxes with tariffs on imports. Sure, you lose the educated 10% who think that policy is economic suicide - but you can more than make up for it with increased turn-out from the other 90% who are really fired up by the prospect of eliminating all taxes.
And it works - jumping into the anti-intellectual camp with both feet has delivered the house, the senate, the presidency (electoral college and popular vote), and the supreme court.
The conservative movement has a brain-drain because they've realised they don't want the votes of smart, educated people.
Their take on scripture is deliberately anachronistic. We didn’t have the medicine or sanitation 2000 years ago to place their kind of value on a fetus.
The medicine in question comes from the very scientific establishment that grew out of scholasticism, which is why I find the accusation of anti-intellectualism rather strange.
My point is that you have to distinguish between arguing against the output of the intellectual activity and arguing against the intellectual activity taking place.
Would be interesting to dump the app binaries so people can take a look at how it's put together. I suspect it's a minefield of sloppy injection functions hooked into how Signal works.
I felt the writer implied open source code was a bad/insecure thing, since they downloaded a zip file from some WordPress upload folder. I'm guessing the code was being made available to companies that "legally" obtained TM-SGNL.
>> Signal was an approved and whitelisted app for ... discuss top-secret matters on.
No. Just no. Anyone who has handled TS information would know how nutz that sounds. Irrespective of software, TS stuff is only ever displayed in special rooms with big doors and a man with a gun outside. The concept of having TS on an everyday-use cellphone is just maddening.
You're leaving out crucial information. Obama didn't keep his BlackBerry for classified information, he was given the then-standard government secure mobile communications device, a Secure Mobile Environment Personal Encryption Device (SME-PED).
More specifically, the device Obama was given was a Sectéra Edge [0][1] by General Dynamics, a device specifically designed to be able to operate on Top Secret voice and Secret data networks. It had hardware-level separation between the unclassified and classified sides, even having separate flash memory for both. [2]
The NSA contributed to the design and certified it and another device (L3's Guardian) on the SCIP, HAIPE, Suite A/B, Type 1, and non-Type 1 security protocols.
It was absolutely not a regular BlackBerry, it didn't run any RIM software, no data ever went through RIM's servers, and secure calls were encrypted and didn't use SS7. It was a clunky purpose-designed device for the entire US government to be able to access Secret information and conduct Top Secret voice calls on the go.
Even then, there were limitations to when and where it could be used and when a SCIF was required.
The current equivalent of the SME-PED programme is the DoD's Mobility Classified Capability[3], which are specially customised smartphones again made by General Dynamics.
There is no excuse whatsoever for the current administration's use of Signal, let alone TeleMessage Signal, for Secret and Top Secret discussions on regular consumer and personal devices. It's deeply irresponsible and worse than any previous administration has done.
Your reference [0] appears to contradict what you've said here. It speaks at length about several NSA approved options as alternatives, but says Obama used a BlackBerry.
The photo attached to the article captioned "President-elect Barack Obama checks his BlackBerry while riding on his campaign bus in Pennsylvania last March." appears to show a blackberry.
I take it from the article that this was as controversial as I remember it being at the time. Thanks for posting it.
He was allowed to keep his BlackBerry for personal communication only, not classified communication, and had to use a Sectéra Edge for classified communication. [0]
The Blackberry for personal use wasn't a stock BlackBerry, but hardened by the NSA and fitted with the SecurVoice software package to encrypt voice calls, emails, and messages. The few people he had on his approved communication list were given the same devices.[1]
That BlackBerry was, again, not used for classified communication. So it's not the same thing as the current scandal.
> He was allowed to keep his BlackBerry for personal communication only, not classified communication
Presence of the senior staff on his (very limited) contact list would seem to contradict that statement. Communication with them would be, by definition, not personal.
I agree with you that our government officials should be using the secure infrastructure our patriotic service members and civil servants work so hard to build and maintain.
Obama wasn't allowed to keep his Blackberry; he requested a secure commercial-quality cellphone to communicate with his aides, and NSA (which was, to be sure, not really happy about the request) selected the Blackberry as their platform. The end solution was a highly pared-down device that could only communicate via a hosted encryption server (a commercial product, SecurVoice) to a small number of paired devices, which were distributed to Obama's inner circle. The Presidential devices had additional security limitations (e.g., they could only connect to WHCA-controlled base stations). End of the day, what they had was an encrypted closed network of devices, some of which communicated over public wireless infra, running a very limited, NSA-reviewed, approved, and altered, software suite.
What's clear is that NSA put a fair amount of effort into securing and maintaining that system, so much that its use was limited to the White House; Hillary Clinton wanted a similar setup (her predecessor, Condoleezza Rice, had been allowed to use unaltered "off the shelf" Blackberries under an NSA waiver, but NSA had declined to renew those waivers due to security concerns), but NSA slow-walked and effectively derailed the discussions with State's security team, perhaps because they wanted to limit the amount of technical detail discussed outside the White House, or because they were concerned that State would be unable to provide SecState with the kind of technical support necessary to secure the devices during global travel. (We all know what happened next, of course.)
If you’d prefer, we can call it unclassified communication rather than personal communication. The point is that it was not used for Secret, Top Secret, or other classified communications. For that, he had the SME-PED device.
So, again, it’s not a parallel to the current situation. Nobody is saying the SecDef and other staff shouldn’t have unclassified devices as well as their classified devices, the issue is that they’ve been using the unclassified devices to conduct Secret or Top Secret discussions.
But how could he have accidentally created a conversation with a journalist for discussing targets during a military attack, if secret communication was not done on his clear-text device?
I think you're misunderstanding me, I'm referring to Obama's use of an NSA-hardened BlackBerry for unclassified communication with a select group of people, while using a purpose-built and NSA-cleared secure phone for classified communication. All of which was done correctly in terms of information security processes.
Secretary of Defence Hegseth sent Secret or Top Secret information over a channel (Signal/TM Signal and a regular mobile phone) that was never cleared for classified communications. The person I was replying to was trying to equate Obama's actions to those of Hegseth (and Waltz and others), I was providing context showing that to be a false equivalence.
That's not a counter-argument. You're introducing a hypothetical with no substantiating evidence, trying to create a parallel to a situation where we have unambiguous evidence of non-classified devices and software being used to discuss classified material. The onus is on you to prove the claim, not on others to prove a negative.
It has been eight years since Obama's presidency, had there been any use of this hardened BlackBerry for classified communications it would have emerged by now. Similarly, all messages on that device were subject to the Presidential Records Act, and are archived by NARA. You can FOIA them if you want to.
There were also no claims made during his administration that he ignored security protocols. Even his insistence on retaining a BlackBerry for unclassified communications was done through a compromise and an NSA-hardened device, not by ignoring the rules.
Similarly, how do we know that Reagan didn't hold cleartext phone calls with his aides on the Top Secret plans to contain the USSR? We don't, but in the absence of any supportive evidence over the years it's safe to assume he did not.
Person you're replying to is using an "absence of evidence" fallacy as their argument, also known as an "appeal to ignorance" [0]. They're inferring that the absence of evidence that Obama didn't use his BlackBerry "for Secret, Top Secret, or other classified communications" is potentially evidence that he did in fact do so.
(I would have replied to him directly, but the comments have since been [appropriately] flagged)
In reality, no argument could ever be made if you had to prove the negative of every argument. Some other common applications of this fallacy off the top of my head:
"Well we don't have proof that children weren't trafficked in Comet Pizza, so it's proof that it did actually happen."
"We don't have proof that no kids used litterboxes at school, so it's proof that they did use litterboxes."
My statements were complete. You were not completing them, but trying to spin them in a way that implies wrongdoing when no evidence exists of it. I can only presume you're doing so for partisan reasons, to try to defend the actions of the current administration.
Whatever the reason, I have made my case. Feel free to make yours with a similar level of evidence.
How is your voting record public? Who anyone voted for is not a matter of public record, and even if you claimed to disclose it, nobody would be able to fact check that..
Do you have evidence that Obama discussed or viewed top-secret intel on that BlackBerry, or are you just trying to muddy the waters with a false equivalence?
You think he used it only to discuss what flavor of ice cream was being served that day in the whitehouse dining hall? With only the senior staff? If so, I have a bridge for sale which may interest you.
The big part of this story which nobody is talking about is the fact that the app is literally controlled by a bunch of “former” Israeli intelligence officers. Who now have what is arguably the worlds most valuable access out of anyone.
I don't think it's that big: USG procures defense and intelligence tech more or less constantly from Israel. It's unlikely that Israel would threaten that relationship (and the value they extract from it in terms of favorable relations) in exchange for military intelligence that's already shared with them.
(I feel like I have to say this in every thread that insinuates something sinister about being a "former Israeli intelligence officer": the structure of Israel's military and mandatory service is such that just about everybody with technical skills serves in some kind of "intelligence" capacity. It's not a very big country. This is, of course, independent from any normative claims about Israel's government, politics, etc. -- it's what you'd expect in any small country that has mandatory military service with a significant intelligence component.)
Today their interests may coincide in most instances, but who's to say that will still be the case tomorrow? For instance, when Iran gets the nuclear bomb and threatens Israel with it?
An encrypted messaging system used by the American government is, in my opinion, even worse than the supposed Huawei 5G antenna data collection.
Huawei wouldn't have had access to secret talks between top government officials, at least not decrypted.
> I don't think it's that big: USG procures defense and intelligence tech more or less constantly from Israel. It's unlikely that Israel would threaten that relationship (and the value they extract from it in terms of favorable relations) in exchange for military intelligence that's already shared with them.
Correct - they would not use that intelligence to threaten that relationship, but to maintain it. Knowing the political leanings of politicians and government officials (for example, identifying any that think that relationship is more of a cost than a benefit) is extremely valuable to that end.
The over/under there doesn't make sense: the US hasn't had a meaningfully hostile-to-Israel policy ever, so pervasively tapping some of the most sensitive USG communications would be a stunning risk to take with a very safe ally.
(It also beggars belief in the current climate -- I would be hard-pressed to name a single member of the current administration who hasn't yelled until purple in the face about their support for Israel's current government and wartime policies.)
You might think so, but they didn't face any backlash for buying politicians [1,2] and bragging about it [3], so why would they worry? You also assume that the US is a "very safe ally" naturally, and not as a consequence of means such as these.
You'll note that this case caused exactly the kind of outcome I'm talking about: Pollard was an anomaly (to my knowledge, the only recorded case of a US citizen spying for a US ally) whose activities caused a massive intelligence break between the US and Israel that lasted for years and probably did more damage than the "good" it did for Israel's intelligence apparatus[1]. That kind of lesson is hard-learned and probably not forgotten, regardless of the fact that Pollard is a poster-boy in Israel's version of a culture war.
Everyone is tapping everyone else to the extent they can get away with it - especially allies, because they can get away with it more. You don't think the NSA monitors every single bit that flows in and out of the USA?
Periodically, someone gets caught red-handed, a fuss is made, some diplomats get thrown out and replaced with other ones, and then everyone continues doing it.
Yep. In general there's been no truly "hostile to Israel" US president. The closest thing to "hostility" has been negotiating (under JFK, Carter, Bush Sr., and Obama most notably) with regards to one or more of Iran, the '67 border, WB settlements, etc. Israel has increasingly (and wrongly) considered these "meddling" under its far-right government, which is an internal change within their own politics rather than a marked change in the US's own tactics.
I’m saying this as someone who almost certainly has a lot more knowledge about intelligence and the US / Israeli relationship than you do.
While some of the points you make are indeed correct it actually paints an inaccurate overall picture.
For example: not widely known but 100% true, Israel is and has been for a long time classified as the highest level of counterintelligence threat to the US on par with China, Russia, Cuba and others.
I assure you, this is a big fucking deal and not something to be waved away with “everyone’s intel, don’t worry it’s probably nothing”.
I would absolutely put "Israel taps the National Security Advisor’s phone" in a different category of risk to the two countries' relationship than previous activities. This, again, isn’t a normative argument.
(A piece of context that's often missing from - typically charged - discussions about US/Israel relationships is the degree of dependence between the two, and how that's varied over the years. Israel's defense policies have historically been informed by a desire to be fully self-sufficient during wartime, i.e. not require active support from countries like the U.S. That policy has been deprioritized over the last 20-30 years, to the point where the US is now a significant active defense provider for Israel, rather than just an arms supplier. This is a dependency relationship that's new to the ongoing conflict, and should color any analysis of Israel's willingness to do things that would threaten its relationship with the U.S.)
What makes you think it didn't already do so in the past and thus is a new thing? Allies spy on each other all the time.
I guess US gov would not like to have it be out publicly, but they must understand that this is being at least attempted and US likely does it to Israel, too.
I'm sure they do. I would expect a little bit more, uh, flair to it than "you bought the spyware from us," though.
My point here is pretty narrow: I'm sure Israel spies on the US, and we spy on them. My only doubt is whether TM SGNL itself is an element of that, or whether it's just another flavor of junk software sold to USG to paper over the gaps between technology and compliance requirements.
People here will flag easily searchable/verifiable information about what Israel did to the US in the past, just to protect the image of the US or whatever.
Well, guess what, it doesn't work. It's just stupid.
The E2E encryption is likely not even relevant, unless I'm missing something?
The builds that are distributed would likely just send the plaintext un-encrypted message separately to the archive, and I'm guessing that means it goes right to TM servers before being dispatched elsewhere.
I know it's pretty fun to do the espionage angle with this comment.
But is this really just evidence that a mandatory draft is actually good economic policy? Having a forced networking event where a bunch of similarly skilled individuals meet each other seems to produce a ton of economic value for Israel.
It's not like Israel doesn't already have the highest level of access to the administration's plans. Canada could be made the 51st state and Israel would still have more access to the Trump administrations plans. There is some sort of strong connection between the USA and Israel. What that is, I don't know.
The language of the US under occupation is a neonazi talking point, ZOG (Zionist Occupation Government) being a phrase neonazi morons like. Maybe a coincidence.
I don't like the association any more than you, but what's the greater threat to the United States, the capture of its congress, administration and intelligence community by a foreign power, or a ragtag group of politically and culturally irrelevant LARPers?
Speculation, as no 'technical' analysis could be performed without access to the actual binaries. These applications are unlisted and assigned to organisations using device management, and this analysis is based on documentation and on how that assignment process works. There is no way to determine whether an original application got modified (the same applies to the WeChat and WhatsApp applications) or whether they recompiled the open-source version.
There are images of the user's screen, with him photographed using the application, showing the chats from the app reproduced verbatim (forwarded) to a Gmail account.
The article states that "at least one line of code must've been added" to support such a feature, which I believe to be an honest and accurate assessment.
But it is unknown whether the current version was modified to do so; the name "TM SGNL" looks shortened to fit after hex-editing the app. This could all have been achieved by library overloads etc.
> One line
This could also be a single JMP and RTS to a function that takes a screenshot, or something that captures the message.
No technical analysis of a working application has been performed. Just speculation of how this could work. I am not saying Micah is wrong. I just hoped more was available, so an actual disassemble was possible.
I would speculate that they did not recompile from source, but used the same process as for their other applications: intrusive modification of the code execution, by injection, etc. That is speculation from my end, but it would reuse similar approaches across all of their applications.
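To make the "wrap the send path and archive a copy" idea above concrete, here's a toy, hedged sketch in Python using monkey-patching. It's only an analogy for whatever hooking or injection technique the real builds actually use; the `Client` class and `send_message` name are purely hypothetical.

```python
class Client:
    """Stand-in for a hypothetical messaging client with an E2E-encrypted send path."""
    def send_message(self, recipient: str, plaintext: str) -> None:
        print(f"[encrypted send to {recipient}]")

def wrap_with_archiving(client: Client) -> Client:
    """Replace send_message with a wrapper that archives plaintext before sending."""
    original_send = client.send_message

    def archiving_send(recipient: str, plaintext: str) -> None:
        # Copy the plaintext off to an archive *before* the real (encrypted) send runs.
        with open("archive.log", "a") as f:
            f.write(f"{recipient}: {plaintext}\n")
        return original_send(recipient, plaintext)

    client.send_message = archiving_send
    return client

client = wrap_with_archiving(Client())
client.send_message("alice", "hello")  # archived in plaintext, then sent as usual
```

The point is only that nothing about the E2E transport prevents a modified client from copying the plaintext before encryption; the same effect can be achieved at the binary level without ever touching the source.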
I thought the only client allowed on Signal's network was the official build provided by Signal itself? Does this mean Signal does officially allow another build (TeleMessage's TM SGNL) access to the Signal network?
From what I know, Signal tries to block known bad clients. But guaranteeing such blocks is impossibly hard short of forcing attestations via things like SafetyNet that would legitimately impact users as well.
There was a case where a teenager in India rose to news media popularity by publishing a messaging app, which was a simple rebranding of Signal he made using some other tool which patches assets iirc.
It was blocked by Signal, but only after reports surfaced about it being an insecure rebrand.
You have to archive messages in some sectors by law, fine. But taking an E2E encrypted app and decrypting and storing the messages in plain text is a brain dead solution.
You get a group of people, say 5, and you generate a Shamir's Secret Sharing key requiring a minimum of 3 shares to recover. Call it the archive key, with each share encrypted to one of those people. You have the modified apps encrypt chat logs every day to a new one-time-use key, encrypt that to the archive key, and upload the encrypted logs somewhere all can access.
Now 3 people in that set of 5 get a subpoena to disclose logs in a given time period. Each one can consent to using their share of the archive key in an ephemeral secure-enclave server to decrypt the daily log keys in the requested date range, and decrypt the requested logs.
This way everything is end to end encrypted unless M-of-N people agree to decrypt specific archived logs to comply with a court order.
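For anyone curious what the M-of-N scheme sketched above looks like in practice, here is a minimal, self-contained Python toy of Shamir's Secret Sharing over a prime field. The 5/3 split, the "archive key", and the field size are illustrative choices, not anything a real deployment would necessarily use; a production system would split a symmetric key and layer proper key wrapping, HSMs or enclaves, and audit logging on top.

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough to hold a ~128-bit key

def _eval_poly(coeffs, x):
    """Evaluate coeffs[0] + coeffs[1]*x + ... mod PRIME (Horner's method)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc

def split_secret(secret, n_shares=5, threshold=3):
    """Split an integer secret into n shares; any `threshold` of them recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, n_shares + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

archive_key = secrets.randbelow(PRIME)      # the master "archive key"
shares = split_secret(archive_key, 5, 3)    # one share per custodian
assert recover_secret(shares[:3]) == archive_key                       # any 3 suffice
assert recover_secret([shares[0], shares[2], shares[4]]) == archive_key
```

In the scheme described above, each day's log would be encrypted to a fresh one-time key, that key encrypted to the archive key, and the shares would only ever come together (inside something like a secure enclave) when a subpoena actually arrives.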
This shit is not that hard and with the budget of the White House there are 0 excuses for not running a private server and end to end encrypted chat apps with reproducible builds using archive tactics along the lines I just described.
But, I am also not mad at them making public fools of themselves either.
Is Signal allowing arbitrary apps to connect to its network? How do I know that my correspondent is using TM Sgnl or another unofficial app?
Doesn't that break Signal's security guarantees? For example, what if I set my message to delete in 1 hour but TM Sgnl archives it, or some other app simply ignores the retention setting?
If Signal allows it, it seems like a major vulnerability? I suppose I must trust other users - they could always screenshot a conversation. But while I trust them not to intentionally cheat me, I shouldn't have to trust them to accurately evaluate the security implementation of a software application - something most people can't do, Mike Waltz being the most famous example.
Maybe Signal should identify users of unofficial clients. A downside is that it would provide significant identifying information, since few people use unofficial apps.
> Doesn't that break Signal's security guarantees? For example, what if I set my message to delete in 1 hour but TM Sgnl archives it, or some other app simply ignores the retention setting?
Disappearing messages has never been a security guarantee of Signal. People can always archive things their own way (screenshots in the worst case). It's just a convenience feature, not a security thing.
> Disappearing messages has never been a security guarantee of Signal.
What makes you say that? Has Signal posted something about it?
Retention settings are widely used for messaging security.
Also, I just used retention as an example. There could be many other holes in the unofficial client, including in how it communicates with the Signal network. Maybe my messages aren't E2EE when communicating with that client. Maybe they mess up the encryption implementation.
But also, of course Signal hasn't promised that if they're remotely competent, because that's impossible. You can't stop people from retaining messages if they want to. Now perhaps they're not remotely competent, but in reality they do know better.
> Retention settings are widely used for messaging security.
I mean, maybe people think they're using it for that, but regardless of the context, it will not provide any actual security, because that's impossible! Your recipient could get out a camera and take a photograph if that's what it comes to.
> Your recipient could get out a camera and take a photograph if that's what it comes to.
You are making the perfect the enemy of the good. As I said, two comments up: "I suppose I must trust other users - they could always screenshot a conversation. But while I trust them not to intentionally cheat me, I shouldn't have to trust them to accurately evaluate the security implementation of a software application - something most people can't do, Mike Waltz being the most famous example."
IT security professionals do use retention settings for security; it's not perfect, as you say, but it's very helpful. For example, many businesses auto-delete messages after a certain period except messages that the user intentionally preserves.
And as I said, there are other security functions in Signal that users must trust their apps to handle correctly.
The question is: how do you intend to verify whether an application is official or unofficial? What's stopping the official application from being 'patched' with a fake signature feigning validity?
While that's true even in the general case (through reverse engineering), it's especially true in the case of Signal because it's open source.
There are libraries for interacting with Signal services (one from Signal themselves), here is a CLI tool that uses a patched official library: <https://github.com/AsamK/signal-cli>
If the keys are generated on the device, they can't be trusted by Signal since any clone could generate them too. If the keys are generated by Signal and sent to the device, they can be intercepted and used in any clone
But tl;dr anything said on those phones is assumed to be compromised until proven otherwise by time or by a whole lot of very interesting security verifications. So far, based on the evidence presented, it looks probable that this is a very large leak.
Why do you say "everything said on those phones" - did you mean "on this app"? If the backend of an app was compromised, that wouldn't mean the phone itself was rooted?
It is reasonable to assume that the intelligence services of unfriendly countries are actively devoting significant resources to compromising both issued and personal phones of top-level officials in the US government. They would be negligent not to. It's also a good guess that those efforts would be increased after the first time it became public knowledge the officials were likely using those phones for secret official business.
It is also reasonable to guess that such services have access to malware similar to the infamous Pegasus, and a nonzero success rate at deploying it. In short, it's careless to assume none of the phones are rooted by a hostile actor.
That's one of several reasons the government has rules requiring that classified conversations take place on specific approved devices which aren't used for anything else.
By installing MDM you’re effectively chaining your security to the security of the MDM. The MDM gives you the ability to install arbitrary code via a blessed backdoor. There’s no reason currently not to suspect that anything said on that phone (signal or not) is compromised.
The MDM admin can do whatever the user can do (or more), sure. So yes the MDM admin can potentially read/hear/see stuff, but everyone knows that. That's not a vulnerability, that's by design.
The compromise is only wrt the admin. Are you claiming the admin itself is compromised? What's the evidence for that?
Is this feigned incompetence? Perhaps a cry for help, or a calculated disclosure?
I can't imagine anyone who would make the mistakes this guy makes, yet here he is; freely using his computer in clear view of a reporter with a camera.
This news story has been strange for me for awhile because on one hand NO our public officials should not be using Signal, but it isn’t because Signal is a bad technology choice. Signal is great. It’s probably the most useable service that’s verifiably secure.
Here is the thing about e2e encrypted messengers: they lock you and your data in and do not allow you control of your life. There is a right to data portability (at least in the EU) that they violate, and there is no one fighting for it. Whenever I engage in conversation about this I get blank faces, hostility, and vague references to features that are crippled or just don't work at all. There are people and institutions that have to archive their communication centrally, and they don't have control over how they are contacted and cannot have a conversation about the channel used in every interaction, all the time. The solution is to finally force messengers to allow API access to all communication data, and then show a sign, similar to SSL warnings in browsers, to the other side that this user is using an archival API service.
There's a difference between data transport and data hosting. Modern expectations of messengers seem to blur this line and it's better if it's not blurred.
Incidentally: The reason why they blur it is because of 2 network asymmetries prevalent since the 1990's that enforced a disempowering "all-clients-must-go-through-a-central-server model" of communications. Those 2 asymmetries are A) clients have lower bandwidth than servers and B) IPv4 address exhaustion and the need/insistence on NAT. It's definitely not practical to have a phone directly host the pictures posted in its group chats, but it would be awesome if the role of a messaging app's servers was one of caching instead of hosting.
In the beginning though: the very old IRC was clear on this; it was a transport only, and didn't host anything. Anything relating to message history was 100% a client responsibility.
And really I have stuck with that. My primary expectation with messaging apps is message transport. Syncing my message history on disparate devices is cool, and convenient, but honestly I don't really need it in a personal capacity if each client is remembering messages. I don't understand how having to be responsible for the management of my own data is "less control of my life"; it seems like more control. And ... I'm not sure I care about institutional entitlement to archive stuff that is intended to be totally personal.
I understand companies like to have group chats, and history may be more useful and convenient there, but that's why I'm not ever going to use Teams for personal purposes. But I'm not going to scroll back 10 years later on my messaging apps to view old family pictures. I'm going to have those saved somewhere.
> Those 2 asymmetries are A) clients have lower bandwidth than servers and B) IPv4 address exhaustion and the need/insistence on NAT.
There's a third asymmetry: C) power-constrained clients which are asleep most of the time. And this applies not only to battery-powered phones/tablets and laptops, but also to modern desktops which are configured by default to suspend on inactivity.
Molly is a fork of Signal that is allowed to access Signal's APIs, and those APIs are much more open than any other similar service [1]. Signal is not really designed for communicating with people you don't know in real life, such that you could be beyond suspicion that they are archiving messages; and it is basically impossible to detect whether your conversations are being archived if someone is just taking pictures of their phone with another device.
I don't understand this: there's nothing intrinsic to e2e that makes interoperability particularly hard. There are multiple open-source e2e protocols that demonstrate this tidily, and my understanding is that there are governments in the EU that are adopting e.g. Matrix for this reason.
> show a sign similar to ssl warnings in browsers to the other side that this user is using an archival api service.
There is no sound way to do this and there probably never will be, especially if the protocol is interoperable and therefore the user can pick any client they please. The other client can always lie about what it's doing or circumvent detections through analogue means, e.g. pointing a camera at the screen.
If you have interoperability, then you need cipher negotiation between clients with different capabilities (and they will always have different capabilities), and that's a huge, juicy attack surface. Multiple critical SSL/TLS CVEs-- including some we know for a fact the NSA relied on-- came from cipher negotiation.
> If you have interoperability, then you need cipher negotiation between clients with different capabilities (and they will always have different capabilities), and that's a huge, juicy attack surface.
Not really. The degree of malleability in cipher negotiation is widely considered to have been a Bad Move in SSL/TLS's early design, and modern (well-designed) cryptographic protocols don't enable the kinds of parametric malleability that made SSL/TLS so exploitable at the time.
Signal's protocol, for example, is perfectly interoperable; the lack of interoperability comes from a (not unreasonable) constraint at the application layer, not the protocol itself. Another example would be MLS[1], which supports fixed suites rather than parametric malleability and uses the technique from RFC 8701[2] to prevent clients from getting clever and trying to add their own extensions that undermine the fixed suites.
I wonder if they were using it from the start, or if after the first SignalGate, someone scrambled to find a supplier who could "make their Signal compliant" (which is exactly what TeleMessage/Smarsh are selling).
Installing Signal using this method provides none of the guarantees Signal can normally provide by being an open verifiable application. It not only opens you up to state actors, but also IT folks like us. This is very much tech news. It helps explain why MDM is both critically important for businesses and terrible for security.
I would say, you maintain a blog where you demonstrate your skill and knowledge. As a side effect, I’m pretty sure lots of people here would be interested to read your debugging, design process, etc :)
Sorry I nuked my comment after realizing it was in the wrong article but I wanted to say I appreciate the response. I’m a decent writer (which is why I think I should probably get around to applying to 0xide) but finding time to blog with a full time job and a kid is hard. Not that that’s an excuse.
The Signal client app is open source; it's probably not reasonable to describe a modified version as "cracked". Signal does discourage the use of modified clients for security reasons, but does not actively block most of them.
More and more I am starting to understand that making money with software really has nothing to do with quality. It's about checking boxes. Enterprise SSO? Check. Auditing? Check. Does it "kinda" do the thing as advertised? Sort of, poorly, and slower than many free open source offerings. Oh, and also the company is in talks for an acquisition, so the entire engineering team is just drawing up plans for their vacation homes and picking out their BMWs at this point, while the product rots. Doesn't matter, here's your eight figure contract so we can tell the SLT we did a thing. By the time enough people have had to deal with it to get rid of it, all the decision makers will have moved on to something else.
What are the visually distinguishing features of this TM SGNL app compared to the official one? To my eyes, the app in the Waltz picture looks the same as the official one.
So this whole app exists because Signal doesn't have a way to archive messages on iPhone. Maybe they should take the hint and see that this is actually something a lot of people would find useful, instead of keeping it in the backlog for a decade.
OK, so now a foreign power has dirt on senior US officials as well as operational details about their plans. The first possibility leads to blackmail, the second to defeat, and both to scandal.
As far as I know, there is no mechanism in the US for this to be a scandal or a -gate, or for punitive accountability, due to the parties involved. It will be a bit of a story for a few days then swept under the rug based on precedent.
They took an Israeli app that is a modified version of Signal. The modification BREAKS the one thing Signal is excellent at (keeping your messages encrypted so that only the desired endpoints can read them). Then they distributed it within the US Gov.
This is insanity!
The US's enemies couldn't manufacture a better result themselves!
The messages do need to be recorded in a way that can be read by people other than the intended recipients due to federal record keeping laws. I’m curious if this particular app has been in use for a long time within the government and only recently became a target after it was accidentally revealed in that cabinet meeting photo.
It's not just the US gov - TeleMessage/Smarsh sell to everyone: banks, corporations etc. Their USP is that your employees get to "keep using their apps" but still comply with all the boring data retention stuff - instead of using a dedicated corporate chat app
What's interesting is that they also sell a hacked version of WhatsApp, and the Meta legal team haven't steamrolled them yet
> The US's enemies couldn't manufacture a better result themselves!
in the game of nationalist geopolitics, it's only a matter of time before a current strategic ally becomes an enemy. it's the natural order of nationalism at global scale.
To me the shocking thing about the USA Gov't is that they manage to lose trillions in the defense dept that they can't account for, but somehow are unable to develop their own communications apps? What? Signing messages with a crypto key takes like 4 lines of code. It's not rocket science. Yet they use some corporate app?
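For what it's worth, the "four lines" claim is roughly true for the signing primitive itself. Here's a sketch using the third-party Python `cryptography` package; everything hard about a real system (key distribution, identity, device security, transport) is of course not in these lines.

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()      # sender's long-term signing key
message = b"example message"
signature = private_key.sign(message)

# The recipient verifies with the corresponding public key;
# verify() raises InvalidSignature if the message or signature was tampered with.
try:
    private_key.public_key().verify(signature, message)
    print("signature OK")
except InvalidSignature:
    print("signature invalid")
```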
My only theory is that they're pretending to have only 'Signal' so that when they want to they can allow hackers to "see" stuff they WANT to be seen. Like a disinformation honey pot designed to misdirect America's enemies. While they actually have a totally separate secret app that is secure and is developed by the NSA.
I heard they use "Signal" as an official app. That blew my mind. Sure they must have others, but why are they even allowed to use commercial apps at all? That's insane.
You can just link the new development in an ongoing story that's already on the front page, just like you did. The alternative would be a second front page thread which splits the discussion and is worse all-round.
That's a fair point, and it's your call - however, if the new (major) development is covered in this way then 1) users on the front page won't see mention of it at headline level and 2) the discussion of that development on HN will be affected by/limited to the time-decay of a post that is 12 hours older. I understand that there are tradeoffs at play, it really comes down to if the development at hand is big-enough to justify another post, and, again, that's your call.
I concur. An analysis of potential risks and vulnerabilities is a different beast from actual proof that the app has indeed been hacked. I call for the other discussion to be restored.
Edit: Wanted to respond to the top-level comment but you get the point.
It's not my call, I'm just explaining how HN typically works. If you want some story handled differently, you should send an email to hn@ycombinator.com. But 'two or more things about the same thing on the fp at the same time' is a big barrier to overcome, it almost never happens.
There is mod commentary on 'people might miss things because of the title' as well, it's mostly 'it's ok for people to click through the story or thread to figure things out' and that's also a fairly longstanding 'how HN works most of the time' thing.
> The data includes apparent message contents; the names and contact information for government officials; usernames and passwords for TeleMessage’s backend panel; and indications of what agencies and companies might be TeleMessage customers.
In August last year I got this from dang when reporting a dead 404 link: "The site 404media.co is banned on HN because it has been the source of too many low-quality posts and because many (most?) of their articles are behind a signup wall."
Not that I've really seen the low quality and the signup requirement doesn't stop other domains. There's quite a few things that originated from 404, so I hope HN gets over whatever it was that annoyed them originally.
The main issue is the (sometimes) hard signup wall. I've been a moderator on HN for longer than 404media has existed, and I know from experience that this changes from time to time or article to article. Other paywalled sites that appear on HN (WSJ, NYT etc) have a porous paywall; you can (almost) always get around it by using an archive site like Archive.today.
If it's a good article (contains significant new information and can be a topic of curious conversation) and a paywall workaround works for that article, we'll happily allow it.
Since HN doesn't really facilitate any workarounds anyway and we've been doing manual archive links and content reposting as needed in other cases... I suspect we can handle 404 as well as a community.
Even porous paywalls can have a marked effect on story performance on HN.
The New York Times tightened its paywall markedly in August 2019, with a net effect that appearances in the top-30 stories on HN's front-page archive (the "Past" links in the site header) fell to ~25% of their previous level.
I'd asked dang at the time if HN had changed any of its own processes at the time. Apparently not.
I suspect then that this reflects frustrations and/or inability to access posted articles behind the paywall.
They used an internal fork delivered via MDM. There are no guarantees that Signal can make about the software running on those phones and per the reports it’s a lot of phones.
The fact that the archive link works should make this eligible for unflagging. From tomhow (mod):
> If it's a good article (contains significant new information and can be a topic of curious conversation) and a paywall workaround works for that article, we'll happily allow it.
Anything with a potentially negative impact on Musk, Trump or DOGE seems to get flagged immediately. Coordinated or not, it's extremely frustrating that people flag rather than honestly engage.
I appended a 'd' to the end of the title to pre-empt objections that they're not still using it. If it's known for sure that they are, we can de-'d' that bit.
Edit: this subthread is obsolete now - I took a phrase from the author's update to the article to use as the title above.
Honest question, but you decided to go against the "don't change titles" rule to choose one unprovable point until another, just as unprovable, point is proven? It could be argued both ways with the same argument.
In this case I was thinking of both the 'misleading' and 'linkbait' bits of that 'unless'. (By the way, this is common HN moderation practice—bog standard, as I often say.)
> to choose one unprovable point until another just as unprovable point is proven
You might have a, er, provable point if that were the case! but I'm taking for granted that the officials in question did actually use this client, so "used" is known while "use" (which I took to mean "are still using") isn't yet known for sure. Did I miss something?
Edit: btw, in case anyone's wondering why we left the submitted title up instead of reverting it to what the article says, one reason is that the submitted title struck me as arguably less linkbaity (and therefore ok under the rule) and the other reason is that we cut authors a bit of slack when they post their own work.
the "use" assumes nothing happened after the report (app still in managed domain). "used" assumes an extra action taking place, which is a stretch imo.
but i assumed wrong that you added the "d", not that you're only exempting the submitter title. thanks for the insight into your always nice moderation.
> 404 Media journalist Joseph Cox published a story pointing out that Waltz was not using the official Signal app, but rather "an obscure and unofficial version of Signal that is designed to archive messages"
Wow. And that's while their entire point of using Signal is to have conversations scrubbed after a week, leaving no traces of criminal activity.
Do you think they are using the message-archiving version so that they can meet organizational message retention requirements? Maybe they are using Signal to ensure they have e2e encrypted messaging on their devices?
There are already government e2e apps. The only reason to use something else is to have selective auto-deletion and/or to use personal devices for official classified data.
Another reason: all of the folks on that group chat have legitimate reasons to have contacts on their phone that would be outside government apps. Foreign leadership. Journalists. Etc.
Signal is likely to be one of the main ways of communicating with those.
It wouldn't actually. The contact in his phone (incorrectly added by Apple AI from a forwarded email) would be the same regardless of which app he was using.
Instead, Signal (and this forked version) would have to do its own independent contact management, maybe based on in-person scanning of QR codes plus web-of-trust.
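As a rough illustration of what independent contact management could look like, here is a hypothetical sketch of out-of-band verification: both parties derive the same short fingerprint from their public identity keys and compare it in person or via a QR code. This mirrors the general idea behind Signal's safety numbers, not its exact algorithm.

    # Hypothetical fingerprint for in-person comparison (not Signal's real scheme).
    import hashlib

    def safety_number(pubkey_a: bytes, pubkey_b: bytes) -> str:
        # Sort so both parties compute the identical value regardless of direction.
        digest = hashlib.sha256(b"".join(sorted([pubkey_a, pubkey_b]))).hexdigest()
        return " ".join(digest[i:i + 5] for i in range(0, 30, 5))  # 6 groups of 5 hex chars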
If only it would (a) not ask for access to your contacts, and (b) accept it when you say no instead of saying "we'll ask again later" (and then, indeed, asking again later).
Do you have the link to this alleged government-produced e2e software so we can inspect it ourselves? I realize they have an incentive to appear incompetent, but surely there must be evidence (beyond your testimony) of such gossip popping up somewhere.
Are the apps usable? The jargon seems intentionally impenetrable. The editor of that document should be shot every time they use an acronym. Like, I get the DOD is a profitable dick to suck, but this is just embarrassing for a document intended for the public.
Anyway can you link the source? That's presumably the useful half. The marketing bit doesn't add anything.
I don't care how usable they are, this is the DoD and NSA-approved mechanism for conducting classified conversations and viewing classified data on mobile devices. The adversaries here are other countries who are very good at what they do, security is far more important than convenience.
As for further research, there's plenty online about this programme and these devices. Feel free to Google it yourself. You're asking to be spoonfed.
I don't think it follows that they selected the archiving messenger because they wanted disappearing messages. The whole disappearing messages thing was just internet speculation.
This TM SGNL app is compatible with legit Signal clients and servers.
It’s also possible that they are using this app to archive chats that other parties _believe_ to be disappeared.
In other words, set your chats to disappear in 5 minutes and convince your target to dish some sensitive info. They think it’s off the record, but it’s instantly archived
The counterparty would have to be naive or stupid to think that whatever they send has no chance of being recorded forever. They should always assume otherwise.
The only interesting use case of disappearing messages is that messages one receives will disappear securely, even if they forget about receiving such messages, or have no access to the device at the time.
this appears to be the most concise answer. TM SGNL provides interop with Signal users in the field, but also includes FOIA archiving.
who manages the archiving service is a general government problem, and less of one for Signal or appointees. NSA should have been operating the archiving service and not a foreign country imo.
What? The point of Signal is not disappearing messages but good E2E encryption. Disappearing messages are just one feature the app provides, and you can turn it off if you wish.
Edit: in case anyone's confused about the sequence here, micahflee posted the current thread 2 days ago. The timestamp at the top of this page is an artifact of us re-upping it (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...).
It’s like encashing on the trust of Signal protocol, app while breaking its security model so that someone else can search through all messages.
What am I missing here?
OK, say you're a bank. The SEC states you need to keep archives of every discussion your traders have with anyone at any time (I'm simplifying things but you get the point). You keep getting massive fines because traders were whatsapping about deals
So now you've got several options - you can use MS Teams, which of course offers archival, compliance monitoring etc. But that means trusting MSFT, and making sure your traders only use Teams and nothing else. You can use a dedicated application for the financial industry, like Symphony or ICE Chat or Bloomberg, but they're clunkier than B2C apps.
And then the Smarsh (owners of Telemessage) salesman calls you, and says "your users can keep using the apps they love - WhatsApp, Signal - but we make it compliant". And everyone loves it (as long as no-one in your Security or Legal teams are looking too hard at the implications of distributing a cracked version of WhatsApp through your MDM...)
Edit: here's the install document for their cracked WhatsApp binary https://smarsh.my.salesforce.com/sfc/p/#30000001FgxH/a/Pb000...
https://en.wikipedia.org/wiki/SMERSH
These records are encrypted in storage.
The goal of Signal is trusted end-to-end encrypted communication. Device/message security on either end is not in scope for Signal's threat model.
If you don't trust the people in your chat, they shouldn't be in your chat.
I assure you, none of these people trust each other. Backstabbing is normal.
They're also likely using it to talk to foreign counterparts. Again, most of whom they don't trust a bit.
Encryption isn't just about "do I trust the recipient".
The trust level required with Signal is, "do I trust the people in this chat not to share the specific communications I am sending to them with some other party whom I do not want to have a copy".
There are many many situations where this level of trust applies that "trust" in the general sense does not apply. It is a useful property.
And if you don't have that level of trust, don't put it in writing.
TM SGNL changes the trust required to, "do I also trust this 3rd party not to share the contents of any of my communications, possibly inadvertently due to poor security practices".
This is a categorical and demonstrably material difference in security model. I do not understand why so many are claiming it is not.
That's the same level of trust, really. Signal provides a guarantee that the message bearer (i.e. Signal) can't see the contents, but end users may do whatever.
You can't really assume that the counterparty's device isn't rooted by their employer, or that they aren't themselves required by law to provide written transcripts to an archive at the end of each day. In fact, when your counterparty happens to be a US government official, that record keeping is publicly known and mandated by law.
The people who assume that they are talking with a government official and expect records not to be kept are probably doing something (borderline) illegal, like discussing treason or bribes.
No, this is not a "nothing to hide argument", because those people aren't sending dickpics in their private capacity.
Because all it takes is one user to decide they trust the third party.
Right now you actually have to do more than trust everyone, you have to trust everyone they trust with their chat history. Which already can include this sort of third party.
Journalist? Taliban negotiator? Ex-wife?
You want to talk to people who want to use Signal, but you yourself don't care about E2E
You trust TeleMessage, but not Telegram or Meta. And you want convenient archiving.
Makes sense. But it's still debatable whether the compliance requirements are acting against the security model, or perhaps there are bigger concerns here than just secure communication.
This allows Apple (and the US intelligence community, including FBI/DHS) to surveil approximately 100% of all non-China iMessages in close to real time (in the usual case, where iCloud Backup is set to back up the cross-device iMessage sync keys).
(China, cleverly, requires Apple to not only store all the Chinese iCloud data in China, but also requires that it happen on machines owned and operated by a joint venture with a Chinese-government-controlled entity, keeping them from having to negotiate continued access to the data the way the FBI did.)
https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...
Yet Apple can still legitimately claim that iMessage is e2ee, even though the plaintext is being backed up in a way that is readable to them. It’s a backdoor by another name.
Everyone wins: Apple gets to say E2EE, the state gets to surveil the texts of everyone in the whole country without a warrant thanks to FISA.
But outside of that scenario, is there any advantage to iMessage using e2ee instead of just regular TLS?
Edit: Apparently it's up to you whether you want your iCloud backups to use e2ee. There's an account setting: https://support.apple.com/en-us/102651. Standard protection is a sensible default for regular users who aren't tech-savvy, as with e2ee they're at risk of losing all their iCloud data if they lose their key.
If you are the only person out of all the people you correspond with who has ADP enabled, then everyone you correspond with is uploading the plaintext of your messages to Apple.
Effectively nobody has it on. 99%+ of users aren’t even aware of the feature’s existence.
https://daringfireball.net/linked/2023/12/05/icloud-advanced...
You have to remember that there are something like a billion+ iOS users out there. 100 million people have not written down their 27 character alphanumeric account recovery key.
Again, Apple gets to say “we have e2ee, any user who wants it can turn it on” and the FBI gets to read 100% of the texts in the country unimpeded.
If Apple really wanted to promote privacy, they’d have deployed the so-called “trust circle” system they designed and implemented which allowed a quorum of trusted contacts to use their own keys to allow you to recover your account e2ee keys without Apple being able to access it, rolled that out, and then slowly migrated their entire user base over to e2ee backups.
They have not, and they will not, because that will compromise the surveillance backdoor, and get them regulated upon, or worse. The current administration has already shown that they are willing to impose insanely steep tariffs on the iPhone.
You can’t fight city hall, you don’t need a weatherman to know which way the wind blows, etc. The US intelligence community has a heart attack gun. Tim Apple does not.
Separately it is an interesting aside that Apple’s 1A rights are being violated here by the presumptive retaliation should they publish such a migration feature (software code being protected speech).
https://news.ycombinator.com/item?id=43896138
A likely explanation is that the communications director (or the people informing her) wouldn’t know to distinguish between Signal the app, and a Signal compatible app that is nearly indistinguishable from Signal. A lot like Kleenex is a common term for tissue paper regardless of brand.
When the leak was first revealed, there was loud speculation about the legality of government chat messages being set to auto-delete. This additional revelation, about the use of TeleMessage, shows that someone with a security background has actually thought about these things. It makes perfect security sense to archive messages somewhere secure, off phone, for record keeping compliance while ensuring that relatively vulnerable phones don’t retain messages for very long. It’s also an easy explanation for why such an app was created in the first place. There is an obvious market for it.
Only if this is his standard govt-issued phone. It's also been shown that they are using their own personal phones. They could easily be using unapproved phones some random DOGE'er bought and gave them with an MDM setup, without any real oversight.
No. Even if you managed to get the app and push it to devices, you can't just use TM-SGNL without having an archiving account from Telemessage.
Source: I manage this exact setup for several clients.
Why wouldn't the government (DOGE in this scenario) be able to get an archiving account?
edit: found their install doc! https://smarsh.my.salesforce.com/sfc/p/#30000001FgxH/a/Pb000...
We only have evidence that they used TeleMessage after the scandal, when the same guy let the press take a photo of his messages with Vance, Rubio, Gabbard and others.
[0]: https://www.404media.co/the-signal-clone-the-trump-admin-use...
Read their install guide and weep at the idea of pushing cracked WhatsApp binaries through MDM https://smarsh.my.salesforce.com/sfc/p/#30000001FgxH/a/Pb000...
On a more meta note, I wonder who even works at companies founded on ideas that are just... bad. On average, I expect good engineers to push back on such business requirements and also have better job mobility so they can leave and work elsewhere. The researcher found the vulnerabilities "in less than 30 minutes" so it seems there's some lack of competence here.
Unfortunately, misguided business requirements like this won't simply disappear and I get that those can be niche offerings that attract juicy contracts.
I wouldn't be surprised if at least 25% of HN has worked for such companies for at least 2 years of their career.
Every single article is written with the presumption that there are no actual IT people in the White House, that someone wheeled in a Starlink dish on a dessert cart in the yard which is somehow running the entire government. It's silly and ridiculous.
Veterans Affairs actually publishes a list of approved software as part of their Technical Reference Model: https://www.oit.va.gov/services/trm/ (don’t know how complete it is)
But I’m not aware of other agencies doing this. I suppose that VA, given the nature of what they do, likely feels that there is less risk in publicising this information
There’s also the FedRAMP program for centralized review of cloud services - fedramp.gov - I haven’t looked to see if Telemessage is listed as approved but I see some references to FedRAMP and Telemessage online suggesting that it may be
Another source of info is SAM.gov - https://sam.gov/opp/ab5e8a486e074d73bfe09b383ba819ab/view (that’s for NIH) - if there is an agency paying for it, you can assume they’ve approved it for use (or are in the process of doing so) even if they haven’t otherwise publicly said they are. But, not all contracts are public, so just because you can’t find it on SAM.gov doesn’t mean it doesn’t exist
As is putting someone with a brain parasite and anti-vax beliefs as the head of HHS, but here we are.
“Silly and ridiculous” does not mean “implausible” with this administration. It’s the standard.
That situation was ridiculous in that, to score the marketing points but after fighting with White House IT, the Starlink was installed at a remote location with much the same points of failure as their fibre services.
They were the party of young-earth creationists, religious pro-lifers, climate-deniers and gun-lovers - but also of educated fiscally conservative folks. The party would welcome economics professors and leaders of medium-sized businesses, promising no radical changes, no big increases in spending or regulation, and a generally pro-market/pro-business stance.
The genius of Trump was in realising the educated fiscally conservative folk were driving 95% of the republican policy agenda but only delivering 10% of the votes. The average Republican voter loves the idea of disbanding the IRS and replacing all taxes with tariffs on imports. Sure, you lose the educated 10% who think that policy is economic suicide - but you can more than make up for it with increased turn-out from the other 90% who are really fired up by the prospect of eliminating all taxes.
And it works - jumping into the anti-intellectual camp with both feet has delivered the house, the senate, the presidency (electoral college and popular vote), and the supreme court.
The conservative movement has a brain-drain because they've realised they don't want the votes of smart, educated people.
My point is that you have to distinguish between arguing against the output of the intellectual activity and arguing against the intellectual activity taking place.
Source: I'm the admin who installs TM-SGNL for many users.
https://github.com/signalapp/Signal-Server
So... is it properly open source?
His repo, not theirs: https://github.com/micahflee/TM-SGNL-Android/commits/master/
He points out that "You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy."
No. Just no. Anyone who has handled TS information would know how nutz that sounds. Irrespective of software, TS stuff is only ever displayed in special rooms with big doors and a man with a gun outside. The concept of having TS on an everyday-use cellphone is just maddening.
More specifically, the device Obama was given was a Sectéra Edge [0][1] by General Dynamics, a device specifically designed to be able to operate on Top Secret voice and Secret data networks. It had hardware-level separation between the unclassified and classified sides, even having separate flash memory for both. [2]
The NSA contributed to the design and certified it and another device (L3's Guardian) on the SCIP, HAIPE, Suite A/B, Type 1, and non-Type 1 security protocols.
It was absolutely not a regular BlackBerry, it didn't run any RIM software, no data ever went through RIM's servers, and secure calls were encrypted and didn't use SS7. It was a clunky purpose-designed device for the entire US government to be able to access Secret information and conduct Top Secret voice calls on the go.
Even then, there were limitations to when and where it could be used and when a SCIF was required.
The current equivalent of the SME-PED programme is the DoD's Mobility Classified Capability[3], which are specially customised smartphones again made by General Dynamics.
There is no excuse whatsoever for the current administration's use of Signal, let alone TeleMessage Signal, for Secret and Top Secret discussions on regular consumer and personal devices. It's deeply irresponsible and worse than any previous administration has done.
[0] https://www.cnet.com/tech/tech-industry/obamas-new-blackberr...
[1] https://gdmissionsystems.com/discontinued-products/sectera-e...
[2] https://apps.dtic.mil/sti/tr/pdf/ADA547816.pdf
[3] https://www.disa.mil/~/media/files/disa/fact-sheets/dmcc-s.p...
The photo attached to the article captioned "President-elect Barack Obama checks his BlackBerry while riding on his campaign bus in Pennsylvania last March." appears to show a blackberry.
I take it from the article that this was as controversial as I remember it being at the time. Thanks for posting it.
The Blackberry for personal use wasn't a stock BlackBerry, but hardened by the NSA and fitted with the SecurVoice software package to encrypt voice calls, emails, and messages. The few people he had on his approved communication list were given the same devices.[1]
That BlackBerry was, again, not used for classified communication. So it's not the same thing as the current scandal.
[0] https://www.spokesman.com/stories/2009/jan/24/obamas-other-p...
[1] https://www.wired.com/2009/04/obama-to-get-back-blackberry-a...
Presence of the senior staff on his (very limited) contact list would seem to contradict that statement. Communication with them would be, by definition, not personal.
I agree with you that our government officials should be using the secure infrastructure our patriotic service members and civil servants work so hard to build and maintain.
What's clear is that NSA put a fair amount of effort into securing and maintaining that system, so much that its use was limited to the White House; Hillary Clinton wanted a similar setup (her predecessor, Condoleezza Rice, had been allowed to use unaltered "off the shelf" Blackberries under an NSA waiver, but NSA had declined to renew those waivers due to security concerns), but NSA slow-walked and effectively derailed the discussions with State's security team, perhaps because they wanted to limit the amount of technical detail discussed outside the White House, or because they were concerned that State would be unable to provide SecState with the kind of technical support necessary to secure the devices during global travel. (We all know what happened next, of course.)
So, again, it’s not a parallel to the current situation. Nobody is saying the SecDef and other staff shouldn’t have unclassified devices as well as their classified devices, the issue is that they’ve been using the unclassified devices to conduct Secret or Top Secret discussions.
Secretary of Defence Hegseth sent Secret or Top Secret information over a channel (Signal/TM Signal and a regular mobile phone) that was never cleared for classified communications. The person I was replying to was trying to equate Obama's actions to those of Hegseth (and Waltz and others), I was providing context showing that to be a false equivalence.
What Hegseth did was indefensible.
It has been eight years since Obama's presidency, had there been any use of this hardened BlackBerry for classified communications it would have emerged by now. Similarly, all messages on that device were subject to the Presidential Records Act, and are archived by NARA. You can FOIA them if you want to.
There were also no claims made during his administration that he ignored security protocols. Even his insistence on retaining a BlackBerry for unclassified communications was done through a compromise and an NSA-hardened device, not by ignoring the rules.
Similarly, how do we know that Reagan didn't hold cleartext phone calls with his aides on the Top Secret plans to contain the USSR? We don't, but in the absence of any supportive evidence over the years it's safe to assume he did not.
(I would have replied to him directly, but the comments have since been [appropriately] flagged)
In reality, no argument could ever be made if you had to prove the negative of every argument. Some other common applications of this fallacy off the top of my head:
"Well we don't have proof that children weren't trafficked in Comet Pizza, so it's proof that it did actually happen."
"We don't have proof that no kids used litterboxes at school, so it's proof that they did use litterboxes."
[0] https://en.m.wikipedia.org/wiki/Argument_from_ignorance
Whatever the reason, I have made my case. Feel free to make yours with a similar level of evidence.
> false equivalence
We're literally talking about people occupying the same positions. If anything, blackberry seems less secure. For instance, there's a global en/decryption key, and it's known: https://www.vice.com/en/article/exclusive-canada-police-obta...
1) you don't have any evidence that he used it for TS and are just trying to make a false equivalence.
2) you think secdef and potus occupy the same position.
Got it.
(I feel like I have to say this in every thread that insinuates something sinister about being a "former Israeli intelligence officer": the structure of Israel's military and mandatory service is such that just about everybody with technical skills serves in some kind of "intelligence" capacity. It's not a very big country. This is, of course, independent from any normative claims about Israel's government, politics, etc. -- it's what you'd expect in any small country that has mandatory military service with a significant intelligence component.)
Today their interests may coincide in most instances; who's to say tomorrow it will still be the case, for instance when Iran gets the nuclear bomb and threatens Israel with it?
An encrypted messaging system, used by the American government, is in my opinion even worse than the supposed Huawei 5G antenna data collection.
Huawei wouldn't have had access to secret talks between top government officials, at least not decrypted.
Correct - they would not use that intelligence to threaten that relationship, but to maintain it. Knowing the political leanings of politicians and government officials (for example, identifying any that think that relationship is more of a cost than a benefit) is extremely valuable to that end.
(It also beggars belief in the current climate -- I would be hard-pressed to name a single member of the current administration who hasn't yelled until purple in the face about their support for Israel's current government and wartime policies.)
[1] After House Speaker Mike Johnson Pushed Through Israel Aid Package, AIPAC Cash Came Flowing In - https://theintercept.com/2024/01/20/israel-aipac-house-mike-...
[2] The Israel lobby and U.S. foreign policy - https://www.hks.harvard.edu/publications/israel-lobby-and-us...
[3] More than 95% of AIPAC-backed candidates won their election last night! Being pro-Israel is good policy and good politics! - https://x.com/AIPAC/status/1590362232915132417
There's a reason the US bought this app from Israelis, and it wasn't because of improved security or archive compliance.
For how much they like to beat the "buy American" drum, this contradicts that.
https://en.wikipedia.org/wiki/Jonathan_Pollard
[1]: https://www.thedailybeast.com/israeli-spies-arent-exactly-re...
Periodically, someone gets caught red-handed, a fuss is made, some diplomats get thrown out and replaced with other ones, and then everyone continues doing it.
While some of the points you make are indeed correct, it actually paints an inaccurate overall picture.
For example: not widely known but 100% true, Israel is and has been for a long time classified as the highest level of counterintelligence threat to the US on par with China, Russia, Cuba and others.
I assure you, this is a big fucking deal and not something to be waved away with “everyone’s intel, don’t worry it’s probably nothing”.
I'm saying that the fact that it's Israeli tech is not itself the biggest part of the story.
(A piece of context that's often missing from - typically charged - discussions about US/Israel relationships is the degree of dependence between the two, and how that's varied over the years. Israel's defense policies have historically been informed by a desire to be fully self-sufficient during wartime, i.e. not require active support from countries like the U.S. That policy has been deprioritized over the last 20-30 years, to the point where the US is now a significant active defense provider for Israel, rather than just an arms supplier. This is a dependency relationship that's new to the ongoing conflict, and should color any analysis of Israel's willingness to do things that would threaten its relationship with the U.S.)
I guess US gov would not like to have it be out publicly, but they must understand that this is being at least attempted and US likely does it to Israel, too.
https://www.timesofisrael.com/new-nsa-document-highlights-is...
My point here is pretty narrow: I'm sure Israel spies on the US, and we spy on them. My only doubt is whether TM SGNL itself is an element of that, or whether it's just another flavor of junk software sold to USG to paper over the gaps between technology and compliance requirements.
I mean, they stole weapons grade Uranium from United States along with nuclear secrets and we just shrugged our shoulders: https://www.theguardian.com/world/2014/jan/15/truth-israels-...
Well, guess what, it doesn't work. It's just stupid.
I would hope that any message archiving is being done on an organization-owned server though.
Yes, tools like Cellebrite and zero-day exploits.
Those are tools which are used to spy on people outside of the government.
This is a tool that has data created by the government.
There's compelling evidence that the messages all pass through TM servers before being archived.
https://www.404media.co/the-signal-clone-the-trump-admin-use...
The question is where the E2E encryption actually ends.
The builds that are distributed would likely just send the plaintext, unencrypted message separately to the archive, and I'm guessing that means it goes right to TM servers before being dispatched elsewhere.
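To make that speculation concrete, a sketch of what such an archiving hook could look like in a forked client. The function name and endpoint are placeholders of mine, not anything taken from TM SGNL; the point is only that once the normal decryption path yields plaintext, forwarding a copy outside the E2E envelope is trivial.

    # Speculative sketch of a post-decryption archive hook (placeholder endpoint).
    import requests

    ARCHIVE_URL = "https://archive.example.com/ingest"  # hypothetical, not a real TM endpoint

    def on_message_decrypted(sender: str, plaintext: str, timestamp: int) -> None:
        # E2E guarantees end here: the archive server receives readable content.
        requests.post(
            ARCHIVE_URL,
            json={"sender": sender, "body": plaintext, "ts": timestamp},
            timeout=5,
        )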
But is this really just evidence that a mandatory draft is actually good economic policy? Having a forced networking event where a bunch of similarly skilled individuals meet each other seems to be producing a ton of economic value for Israel.
I more or less agree.
> We’re literally an occupied nation
The language of the US under occupation is a neonazi talking point, ZOG (Zionist Occupation Government) being a phrase neonazi morons like. Maybe a coincidence.
The article states that "at least one line of code must've been added" to support such a feature, which I believe to be an honest and accurate assessment.
> One line
This can also be a single JMP and RTS statement to a function that takes a screenshot, or something that grabs the message.
No technical analysis of a working application has been performed, just speculation about how this could work. I am not saying Micah is wrong. I just hoped more was available, so that an actual disassembly was possible.
I would speculate that they did not recompile from source, but used the same process as their other applications: intrusive modification of code execution, injection, etc. That is speculation on my end, but it would reuse similar approaches across all of their applications.
There was a case where a teenager in India rose to news media popularity by publishing a messaging app, which was a simple rebranding of Signal he made using some other tool which patches assets iirc.
It was blocked by Signal, but only after reports surfaced about it being an insecure rebrand.
You get a group of people, say 5, and you generate a key split with Shamir's Secret Sharing requiring a minimum of 3 shares to recover; call it the archive key, with each share encrypted to one of those people. You have the modified apps encrypt chat logs every day to a new one-time-use key, encrypt that to the archive key, and upload the encrypted logs somewhere all can access.
Now 3 people in that set of 5 get a subpoena to disclose logs in a given time period. Each one can consent to using their archive key share in an ephemeral secure enclave server to decrypt the daily log keys in the requested date range, and decrypt the requested logs.
This way everything is end to end encrypted unless M-of-N people agree to decrypt specific archived logs to comply with a court order.
This shit is not that hard and with the budget of the White House there are 0 excuses for not running a private server and end to end encrypted chat apps with reproducible builds using archive tactics along the lines I just described.
But, I am also not mad at them making public fools of themselves either.
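For anyone curious, a minimal sketch of the M-of-N archive-key idea described above (a toy construction under my own assumptions, not the actual TeleMessage design): split an archive key with Shamir's Secret Sharing so any 3 of 5 custodians can reconstruct it, then wrap each day's log key with the archive key. A real deployment would add authenticated encryption of the logs, secure share distribution, and audit logging.

    # Toy Shamir secret sharing over a prime field (3-of-5 threshold).
    import secrets

    PRIME = 2**127 - 1  # Mersenne prime, large enough for a 16-byte secret

    def split_secret(secret: int, n: int = 5, m: int = 3):
        """Return n shares; any m of them reconstruct `secret`."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(m - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover_secret(shares):
        """Lagrange interpolation at x = 0."""
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
        return total

    archive_key = secrets.randbelow(PRIME)
    shares = split_secret(archive_key)                 # one share per custodian
    assert recover_secret(shares[:3]) == archive_key   # any 3 shares suffice
    assert recover_secret(shares[2:]) == archive_key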
Doesn't that break Signal's security guarantees? For example, what if I set my message to delete in 1 hour but TM Sgnl archives it, or some other app simply ignores the retention setting?
If Signal allows it, it seems like a major vulnerability? I suppose I must trust other users - they could always screenshot a conversation. But while I trust them not to intentionally cheat me, I shouldn't have to trust them to accurately evaluate the security implementation of a software application - something most people can't do, Mike Waltz being the most famous example.
Maybe Signal should identify users of unofficial clients. A downside is that it would provide significant identifying information - few people use unofficial apps.
Disappearing messages has never been a security guarantee of Signal. People can always archive things their own way (screenshots in the worst case). It's just a convenience feature, not a security thing.
What makes you say that? Has Signal posted something about it?
Retention settings are widely used for messaging security.
Also, I just used retention as an example. There could be many other holes in the unofficial client, including how it communicates with the Signal network. Maybe my messages aren't E2EE when communicating with that client. Maybe they mess up the encryption implementation.
I mean, if you want Signal's blog post where they introduced it, it's here: https://signal.org/blog/disappearing-messages/
But also, of course Signal hasn't promised that if they're remotely competent, because that's impossible. You can't stop people from retaining messages if they want to. Now perhaps they're not remotely competent, but in reality they do know better.
> Retention settings are widely used for messaging security.
I mean, maybe people think they're using it for that, but regardless of the context, it will not provide any actual security, because that's impossible! Your recipient could get out a camera and take a photograph if that's what it comes to.
You are making the perfect the enemy of the good. As I said, two comments up: "I suppose I must trust other users - they could always screenshot a conversation. But while I trust them not to intentionally cheat me, I shouldn't have to trust them to accurately evaluate the security implementation of a software application - something most people can't do, Mike Waltz being the most famous example."
IT security professionals do use retention settings for security; it's not perfect, as you say, but it's very helpful. For example, many businesses auto-delete messages after a certain period except messages that the user intentionally preserves.
And as I said, there are other security functions in Signal that users must trust their apps to handle correctly.
People have been requesting various changes to this feature for years, but hear crickets from Signal.
There are libraries for interacting with Signal services (one from Signal themselves), here is a CLI tool that uses a patched official library: <https://github.com/AsamK/signal-cli>
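If you want to script it, something along these lines works from Python; the exact flags can differ between signal-cli versions, so treat the command line below as an assumption and check the tool's own help output.

    # Hedged example of driving signal-cli via subprocess; verify flags against
    # `signal-cli --help` for your installed version.
    import subprocess

    def send_via_signal_cli(account: str, recipient: str, message: str) -> None:
        subprocess.run(
            ["signal-cli", "-u", account, "send", "-m", message, recipient],
            check=True,
        )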
But tl;dr anything said on those phones is assumed to be compromised until proven otherwise by time or a whole lot of very interesting security verifications. So far, the claim that this is a very large leak looks probable based on the evidence presented.
It is also reasonable to guess that such services have access to malware similar to the infamous Pegasus and a nonzero success rate at deploying it. In short, it's careless to assume that none of the phones are rooted by a hostile actor.
That's one of several reasons the government has rules requiring that classified conversations take place on specific approved devices which aren't used for anything else.
The compromise is only wrt the admin. Are you claiming the admin itself is compromised? What's the evidence for that?
I can't imagine anyone who would make the mistakes this guy makes, yet here he is; freely using his computer in clear view of a reporter with a camera.
Incidentally: the reason they blur it is two network asymmetries, prevalent since the 1990s, that enforced a disempowering "all-clients-must-go-through-a-central-server" model of communications. Those two asymmetries are A) clients have lower bandwidth than servers and B) IPv4 address exhaustion and the need/insistence on NAT. It's definitely not practical to have a phone directly host the pictures posted in its group chats, but it would be awesome if the role of a messaging app's servers was one of caching instead of hosting.
In the beginning though: the very old IRC was clear on this; it was a transport only, and didn't host anything. Anything relating to message history was 100% a client responsibility.
And really I have stuck with that. My primary expectation with messaging apps is message transport. Syncing my message history on disparate devices is cool, and convenient, but honestly I don't really need it in a personal capacity if each client is remembering messages. I don't understand how having to be responsible for the management of my own data is "less control of my life"; it seems like more control. And ... I'm not sure I care about institutional entitlement to archive stuff that is intended to be totally personal.
I understand companies like to have group chats, and history may be more useful and convenient there, but that's why I'm not ever going to use Teams for personal purposes. But I'm not going to scroll back 10 years later on my messaging apps to view old family pictures. I'm going to have those saved somewhere.
There's a third asymmetry: C) power-constrained clients which are asleep most of the time. And this applies not only to battery-powered phones/tablets and laptops, but also to modern desktops which are configured by default to suspend on inactivity.
[1] https://github.com/mollyim/mollyim-android
> show a sign similar to ssl warnings in browsers to the other side that this user is using an archival api service.
There is no sound way to do this and there probably never will be, especially if the protocol is interoperable and therefore the user can pick any client they please. The other client can always lie about what it's doing or circumvent detections through analogue means, e.g. pointing a camera at the screen.
Not really. The degree of malleability in cipher negotiation is widely considered to have been a Bad Move in SSL/TLS's early design, and modern (well-designed) cryptographic protocols don't enable the kinds of parametric malleability that made SSL/TLS so exploitable at the time.
Signal's protocol, for example, is perfectly interoperable; the lack of interoperability comes from a (not unreasonable) constraint at the application layer, not the protocol itself. Another example would be MLS[1], which supports fixed suites rather than parametric malleability and uses the technique from RFC 8701[2] to prevent clients from getting clever and trying to add their own extensions that undermine the fixed suites.
[1]: https://datatracker.ietf.org/doc/rfc9420/
[2]: https://www.rfc-editor.org/rfc/rfc8701.html
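As a toy illustration of the RFC 8701 (GREASE) idea referenced above: senders sprinkle reserved placeholder values into their offers, and compliant receivers must skip anything they don't recognise, which keeps the "ignore unknown values" code path exercised. The negotiation below is made up for illustration; only the codepoint constants are borrowed from TLS.

    # Toy GREASE-style negotiation (illustrative, not real TLS/MLS code).
    import random

    GREASE_VALUES = [0x0A0A, 0x1A1A, 0x2A2A, 0x3A3A]   # reserved GREASE codepoints
    REAL_SUITES = [0x1301, 0x1302, 0x1303]             # TLS 1.3 AES/ChaCha suites

    def offered_suites():
        # Sender injects a random grease value among the real ones.
        offer = REAL_SUITES.copy()
        offer.insert(random.randrange(len(offer) + 1), random.choice(GREASE_VALUES))
        return offer

    def choose_suite(offer):
        # A compliant receiver silently skips values it doesn't recognise.
        for suite in offer:
            if suite in REAL_SUITES:
                return suite
        raise ValueError("no mutually supported suite")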
Unfortunately this Israeli company is just incompetent; they should try something from Russia next time, given that's where all the data ends up anyway.
Also, keeping government honest and open is very libertarian. Covering all fronts.
Screenshot of previous version: https://0x0.st/8Jqf.png
It was marked as a DUPE of this discussion, despite being a major new development https://news.ycombinator.com/item?id=43890034 Hopefully that decision can be reconsidered
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
The operating assumption here is that people are smart enough to follow the developments in the story themselves - in the thread and outside.
It's insane that this isn't front page news. This takes the original Signalgate breach to an order of magnitude higher level of severity.
See: <https://news.ycombinator.com/item?id=36918251> (July 2023)
Why are these being instantly marked as dead?
See https://news.ycombinator.com/item?id=43891088 in which a user reports that moderator dang said why that happens for this domain.
"Please use the original title, unless it is misleading or linkbait; don't editorialize." - https://news.ycombinator.com/newsguidelines.html
follow up question: you work seven days a week??
> you work seven days a week??
By no means all day every day, but yes in the sense that my hours get distributed semi-randomly.
dang seems to be saying that he did add the “d” though?
FWIW I would have preferred it to be just left as “uses” per the article title.
Some of the apps are listed in that brochure.
There's no excuse for using Signal on personal devices for classified conversations.
[0] https://www.disa.mil/~/media/files/disa/fact-sheets/dmcc-s.p...
> Waltz set some of the messages in the Signal group to disappear after one week
https://www.theatlantic.com/politics/archive/2025/03/trump-a...
https://www.nytimes.com/2025/04/15/us/politics/cia-director-...
I wonder what the people he communicated with knew / thought?