> This is exactly what Apple Intelligence should have been... They could have shipped an agentic AI that actually automated your computer instead of summarizing your notifications. Imagine if Siri could genuinely file your taxes, respond to emails, or manage your calendar by actually using your apps, not through some brittle API layer that breaks every update.
And this is probably coming, a few years from now. Because remember, Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.
Let other companies figure out the model. Let the industry figure out how to make it secure. Then Apple can integrate it with hardware and software in a way no other company can.
Right now we are still in very, very, very early days.
I don’t believe this was ever confirmed by Apple, but there was widespread speculation at the time[1] that the delay was due to the very prompt injection attacks OpenClaw users are now discovering. It would be genuinely catastrophic to ship an insecure system with this kind of data access, even with an ‘unsafe mode’.
These kinds of risks can only be _consented to_ (let alone borne) by technical people who correctly understand them, but if this shipped there would be thousands of Facebook videos explaining to the elderly how to disable the safety features and open themselves up to identity theft.
The article also confuses me because Apple _are_ shipping this, it’s pretty much exactly the demo they gave at WWDC24, it’s just delayed while they iron this out (if that is at all possible). By all accounts it might ship as early as next week in the iOS 26.4 beta.
[1]: https://simonwillison.net/2025/Mar/8/delaying-personalized-s...
Exactly. Apple operates at a scale where it's very difficult to deploy this technology for its sexy applications. The tech is simply too broken and flawed at this point. (Whatever Apple does deploy, you can bet it will be heavily guardrailed.) With ~2.5 billion devices in active use, they can't take the Tesla approach of letting AI drive cars into fire trucks.
Apple's niche product, making up like 1-4% of its computer sales compared to the dominant MacBook line, is now flying off the shelves as a highly desired product, because of a piece of software that Apple didn't spend a dime developing. This sounds like a major win for Apple.
The OS maker does not have to make all the killer software. In fact, Apple's pretty much the only game in town that's making hardware and software both.
There are a few open source projects coming along that let you sell your compute power in a decentralized way. I don't know how genuine some of these are [0] but it could be the reason: people are just trying to make money.
0. https://www.daifi.ai/
Mac minis, but they’re only flying off the shelves for the same reason that folks are forced to use iPhones if they want to date: fear of the dreaded green bubble.
(Yes android users are discriminated against in the dating market, tons of op eds are written about this, just google it before you knee jerk downvote the truth)
I assume the suggestion is that they need to run their bot on a machine that's up 24x7 (and they don't want to do that with a laptop since they probably carry it places and such), AND they want it to manage their texts by interacting with the Mac version of the Messages app.
But if you connect those dots, you've got people trying to date by having an AI respond to texts from potential dates, which seems like you're immediately in red-flag city, and good luck keeping that secret for long enough to get whatever it is you want.
No it’s like someone owning a Ferrari and looking down on someone who drives a Corolla. Or that’s how they see it, anyway. Plus there’s the annoyance with interoperability: it’s not just about status, it’s about all your iMessage group chats that don’t play nice with android
iMessage lock in is a huge thing. When it was new and was still e2ee I ended up buying iPhones for everyone I regularly messaged.
These days it is insecure however because they backdoored the e2ee and kept it backdoored for the FBI, so now Signal is the only messenger I am reachable on.
Blue bubble snobbery is presently a mark of ignorance more than anything else.
If someone is shallow enough to write you off for that, is that someone you want as your partner?
> ...Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.
While this was true about ten years ago, it's been a while since we've seen this model of software development from Apple succeed in recent years. I'm not at all confident that the Apple that gave us macOS 26 is capable of doing this anymore.
The software has been where most of the complaints have been in recent years.
Best privacy in computers, ADP, and M-series chips mean nothing to you? To me, Apple is the last bastion of sanity in a world where user hostility is the norm.
Their software efforts have little ambition. Tweaks and improvements are always a good idea, but without some ambitious effort, nothing special is learned or achieved.
A "bicycle for the mind" got replaced with a "kiosk for your pocketbook".
The Vision Pro has an amazing interface, but it's set up as a place to rent videos and buy throwaway novelty iPad-style apps. It allows you to import a Mac screen as a single window, instead of expanding the Mac interface, with its Mac power and flexibility, into the spatial world.
Great hardware. Interesting, but locked down software.
If Tim Cook wanted to leave a real legacy product, it should have been a Vision Pro aimed at being an upgrade to the Mac interface and productivity. Apple's new highest-end interface/device for the future. Not another mid/low-capability iPad-type device. So close. So far.
$3500 for an enforced toy. (And I say all this as someone who still uses it with my Mac, but despairs at the lack of software vision.)
Not just lack of ambition, lack of vision or taste. Liquid Glass is a step back in almost every way; that it got out the door is an indictment of the entire leadership chain.
> It allows you to import a Mac screen as a single window, instead of expanding the Mac interface, with its Mac power and flexibility, into the spatial world.
I've thought this too. Apple might be one of the only companies that could pull off bringing an existing consumer operating system into 3D space, and they just... didn't.
On Windows, I tried using screen captures to separate windows into 3D space, but my 3090 would run out of texture space and crash.
Maybe the second best would be some kind of Wayland compositor.
I mean they literally just looked at Tile. And they have the benefit of running the platform. Demonstrates time and time again that they engage in anticompetitive behaviour.
No, they didn't just look at Tile. They used a completely new UWB radio technology with a completely new anonymization cryptographic paradigm allowing them to include every single device in the network, transparently.
AirTag is a perfect example of their hardware prowess that even Google fails to replicate to this date.
It would be an absolute disaster at Apple scale. Millions of people would start using it, filing incorrect taxes or deleting their important files and Apple would be sued endlessly.
Tiny open source projects can just say "use at your own risk" and offload all responsibility.
Sure why not, what could go wrong?
"Siri, find me a good tax lawyer."
"Your honor, my client's AI agent had no intent to willfully evade anything."
>> Imagine if Siri could genuinely file your taxes
Imagine if the government would just tell everyone how much they owed and obviated the need for effing literal artificial intelligence to get taxes done!
>> respond to emails
If we have an AI that can respond properly to emails, then the email doesn't need to be sent in the first place. (Indeed, many do not need to be sent nowadays either!)
This is generally true only of them going to market with new (to them) physical form factors. They aren’t generally regarded as the best in terms of software innovation (though I think most agree they make very beautiful software)
Personal intelligence, the (awkward) feature where you can take a screenshot and get Siri to explain stuff, and the new Spotlight features where you can type out stuff you want to do in apps, probably hint at that…
> And this is probably coming, a few years from now. Because remember, Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.
Except this doesn't stand up to scrutiny, when you look at Siri. FOURTEEN years and it is still spectacularly useless.
I have no idea what Siri is a "much nicer version" of.
> Apple can integrate it with hardware and software in a way no other company can.
And in the case of Apple products, oftentimes "because Apple won't let them".
Lest I be called an Apple hater, I have 3 Apple TVs in my home, my daily driver is a M2 Ultra Studio with a ProDisplay XDR, and an iPad Pro that shows my calendar and Slack during the day and comes off at night. iPhone, Apple Watch Ultra.
But this is way too worshipful of Apple.
In that list of Apple products that you own, do none of them match the OP's comment? You're saying none of those products are, or have been in their time in the market, a perfected version of other things?
There are lots of failed products in nearly every company’s portfolio.
AirTags were mentioned elsewhere, but I can think of others too. Perfected might be too fuzzy & subjective a term though.
Perhaps I’m misremembering, but I feel sure that Siri was much better a decade ago than it is today. Basic voice commands that used to work are no longer recognised, or now require you to unlock the phone in situations where hands-free operation is the whole point of using a voice command.
There were certain commands that worked just fine. But they, in Apple's way, required you to "discover" what worked and what didn't with no hints, and then there were illogical gaps like "this grouping should have three obvious options, but you can only do one via Siri".
And then some of its misinterpretations were hilariously bad.
Even now, I get at a technical level that CarPlay and Siri might be separate "apps" (although CarPlay really seems like it should be a service), and as such might have separate permissions, but then you have the comical scenario of:
Being in your car, CarPlay is running and actively navigating you somewhere, and you press your steering wheel voice control button. "Give me directions to the nearest Starbucks" and Siri dutifully replies, "Sorry, I don't know where you are."
> Then Apple can integrate it with hardware and software in a way no other company can.
That's a pretty optimistic outlook. All considered, you're not convinced they'll just use it as a platform to sell advertisements and lock-out competitors a-la the App Store "because everyone does it"?
Can you understand how this commoditizes applications? The developers would absolutely have a fit. There is a reason this hasn’t been done already. It’s not lack of understanding or capability, it’s financial reality. Shortcuts is the compromise struck in its place.
> Imagine if Siri could genuinely file your taxes, respond to emails, or manage your calendar
> And this is probably coming, a few years from now.
Given how often I say "Hey Siri, fast forward", expecting her to skip the audio forward by 30 seconds, and she replies "Calling Troy S", a roofing contractor who quoted some work for me last year, and then just starts calling him without confirmation, which is massively embarrassing...
This idea terrifies me.
Apple literally lives on the "Cutting Edge" a-la XKCD [1]. My wife is an iPerson and she always tells me about these new features (my phone has had them since $today-5 years). But for her, these are brand new exciting things!
[1] https://xkcd.com/606/
How many chat products has Google come out with? Google messenger, buzz, wave, meet, Google+, hangouts… Apple has iMessage and FaceTime. You just restated OP’s point. Apple evolves things slowly and comes to market when the problems have already been solved in a myriad of ways, so they can be solved once and consistently. It’s not about coming to market soonest. How did you get that from what OP said?
Pointless argument given that android isn't just "android". Never has been.
It's a huge, diverse ecosystem of players and that's probably why Android has always gotten the coolest stuff first. But it's also its achilles' heel in some ways.
First Mover effect seems only relevant when government warrants are involved. Think radio licenses, medical patents, etc. Everywhere else, being a first mover doesn't seem to correlate like it should to success.
See social media, bitcoin, iOS App Store, blu-ray, Xbox live, and I’m sure more I can’t think of rn.
There are plenty of Android/Windows things that Apple has had for $today-5 years that work the exact same way.
One side isn’t better than the other, it’s really just that they copy each other doing various things at a different pace or arrive at that point in different ways.
Some examples:
- Android is/was years behind on granular permissions, e.g. ability to grant limited photo library access to apps
- Android has no platform-wide equivalent to AirTags
- Hardware-backed key storage (Secure Enclave about 5 years ahead of StrongBox)
- system-wide screen recording
Google has been making their own phone hardware since 2010. And surely they can call up Qualcomm and Samsung if they want to.
I imagine in a few years our phone will become our AI assistant, locally and cloud powered, that understands us deeply. And Apple will release a humanoid robot, loaded with the same intelligence as the phone, to become our home assistant or companion. But first Apple needs to allow us to rename our phone agent/helper to something other than Siri.
> I suspect ten years from now, people will look back at 2024-2025 as the moment Apple had a clear shot at owning the agent layer and chose not to take it
Ten years from now, there will be no ‘agent layer’. This is like predicting Microsoft failed to capitalize on bulletin boards social media.
Kids can barely hand write today.
Once neural interfaces are in, it's over for keyboards and displays likely too.
That was...like 4 macbooks ago. I still have keyboards from that era. I still have speakers and monitors from that era kicking around.
We are definitely, definitely not the last generation to use keyboards.
Ten years from now, the agent layer will be the interface the majority of people use a computer through. Operating systems will become more agentic and absorb the application layer while platforms like Claude Cowork will try to become the omniapp. They’ll meet in the middle and it will be like Microsoft trying to fight Netscape’s view of the web as the omniapp all over again.
Apple will either capitalise on this by making their operating systems more agentic, or they will be reduced to nothing more than a hardware and media vendor.
I hope so. We're right on the cusp of having computers that actually are everything we ever wanted them to be, ever since scifi started describing devices that could do things for us. There's just a few pesky details left to iron out (who pays for it, insane power demand, opaque models, non-existent security, etc etc).
Things actually can "do what I mean, not what I say", now. Truly fascinating to see develop.
I think you are right. In fact, if I were a regular office worker today, a Claude subscription could possibly be the only piece of software you might need to open for days in a row to be productive. You can check messages, send messages, modify documents, create documents, do research, and so on. You could even have it check on news and forums for you (if they could be crawled that is).
If you're arguing that in 10 years we won't have fully automated systems where we interact more with the automation than the functionality, I've got news for you...
My point is that it won’t be a ‘layer’ like it is now and the technology will be completely different from what we see as agents today.
The current ‘agent’ ecosystem is just hacks on top of hacks.
Of course AI will keep improving and more automation is a given.
I feel like I’m watching group psychosis where people are just following each other off a cliff. I think the promise of AI and the potential money involved override all self preservation instincts in some people.
It would be fine if I could just ignore it, but they are infecting the entire industry.
I had a dark thought today, that AI agents are going to make scam factory jobs obsolete. I don’t think this will decrease the number of forced labor kidnappings though, since there are many things AI agents will not be good at.
> people are buying Mac Minis specifically to run AI agents with computer use. They’re setting up headless machines whose sole job is to automate their workflows. OpenClaw—the open-source framework that lets you run Claude, GPT-4, or whatever model you want to actually control your computer—has become the killer app for Mac hardware
That makes little sense. Buying a Mac mini would imply wanting the unified VRAM and GPU capabilities, but then they're saying Claude/GPT-4, which don't have any GPU requirements.
Is the author implying mac minis for the low power consumption?
If you’re heavily invested in Apple apps (iMessage/Calendar/Reminders/Notes), you need a Mac to give the agent tools to interact with these apps. I think that combined with the form factor, price, and power consumption, makes it an ideal candidate.
If you’re heavily invested in Windows, then you’d probably go for a small x86 PC.
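To make "give the agent tools to interact with these apps" concrete, here is a minimal sketch of the general approach, not OpenClaw's actual implementation (the helper names are made up): on a Mac a tool can simply shell out to AppleScript via osascript, glue that has no equivalent on a small x86 box running Windows or Linux.

```python
# Illustrative sketch only: a tiny "tool" an agent framework could call on macOS.
# It shells out to AppleScript via osascript; macOS shows an automation
# permission prompt the first time each app is scripted.
import subprocess

def run_applescript(script: str) -> str:
    """Run an AppleScript snippet and return its trimmed stdout."""
    result = subprocess.run(
        ["osascript", "-e", script],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def list_calendars() -> str:
    # Ask the Calendar app for the names of all calendars.
    return run_applescript('tell application "Calendar" to get name of every calendar')

def list_notes() -> str:
    # Ask the Notes app for the titles of all notes.
    return run_applescript('tell application "Notes" to get name of every note')

if __name__ == "__main__":
    print(list_calendars())
    print(list_notes())
```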
Some of those connectors are only available on the mac and some only on the iPhone. Like notes is available on the mac, but not on the phone. Vice versa for reminders.
The software can drive the web browser if you install the plugin. My knowledge is 1.5 weeks old, so it might be able to drive the whole UI now, I don't know.
I use agentic coding, this is next level madness.
I don't understand why, but I've seen it enough to start questioning myself...
Probably the same people getting a macbook pro to handle their calendar and emails
It is absurd enough of a project that everybody basically expects it to be insecure, right? It is some wild niche thing for people who like to play with new types of programs.
This is not a train that Apple has missed, this is a bunch of people who’ve tied, nailed, tacked, and taped their unicycles and skateboards together. Of course every cool project starts like that, but nobody is selling tickets for that ride.
I think a lot of people have been spoiled (beneficially) by using large, professionally-run SaaS services where your only serious security concerns were keeping your credentials secret, and mitigating the downstream effects of data breaches. I could see having a fundamentally different understanding of security having only experienced that.
What people are talking about doing with OpenClaw I find absolutely insane.
> What people are talking about doing with OpenClaw I find absolutely insane.
Based on their homepage the project is two months old and the guy described it as something he "hacked together over a weekend project" [1] and published it on github. So this is very much the Raspberry Pi crowd coming up with crazy ideas and most of them probably don't work well, but the potential excites them enough to dabble in risky areas.
[1] https://openclaw.ai/blog/introducing-openclaw
Apple had problems with just the chatbot side of LLMs because they couldn't fully control the messaging. Add in a small helping of losing your customers' entire net worth and, yeah. These other posters have no idea what they are talking about.
Exactly. Apple is entirely too conservative to shine with LLMs due to their uncontrollability. Apple likes their control and their version of "protecting people" (which I don't fully agree with), which includes "We are way too scared to expose our clients to something we can't control and stop from doing/saying anything bad!", and which may end up being prudent. They won't come close to doing something like OpenClaw for at least a few more years, when the tech is (hopefully) safer and/or the Overton Window has shifted.
And yet they'll push out AI-driven "message summaries" that are horrifically bad and inaccurate, often summarizing the intent of a message as the complete opposite of the full message up to and including "wants to end relationship; will see you later"?
Was about to point out the same thing. Apple's desperate rush to market meant summarising news headlines badly and sometimes just plain hallucinating stuff, causing many public figures to react when they ended up the target of such mishaps.
Clawdbot/Moltbot/OpenClaw is so far from figuring out the “trust” element for agents that it’s baffling the OP even chose to bring it up in his argument
this seems obviously true, but at the same time very very wrong. openclaw / moltbot / whatever it's called today is essentially a thought experiment of "what happens if we just ignore all that silly safety stuff"
which obviously apple can't do. only an indie dev launching a project with an obvious copyright violation in the name can get away with that sort of recklessness. it's super fun, but saying apple should do it now is ridiculous. this is where apple should get to eventually, once they figure out all the hard problems that moltbot simply ignores by doing the most dangerous thing possible at every opportunity.
Apple has a lot of power over the developers on its platforms. As a thought experiment, let's say they did launch it. It would put real skin in the game for getting security right. Who cares if a thousand people are using openclaw? Millions of iOS users having such an assistant will spur a lot of investment towards safety.
>It would put real skin in the game for getting security right.
lol, no, you don't "put skin in the game for getting security right" by launching an obviously insecure thing. that's ridiculous. you get security right by actually doing something to address the security concerns.
The notion that if it is good then the big ones should have done it is the complete opposite of innovation, startups and entrepreneurial culture.
Reality is the exact opposite. Young, innovative, rebellious, often hyper-motivated folks are sprinting from idea to implementation, while executives are “told by a few colleagues” that something new, “the future of foo”, is rising up.
If you use openclaw then that’s fantastic. If you have an idea how to improve it, well, it is open source, so go ahead and submit a pull request.
Telling Apple "you should do what I am probably too lazy to do" is a kind of entitlement blogging that I have nearly zero respect for.
Apparently it’s easier to give unsolicited advice to public companies than building. Ask the interns at EY and McKinsey.
Nah if they are actually out of stock (I've only seen it out of stock at exceptional Microcenter prices; Apple is more than happy to sell you at full price) it is because there's a transition to M5 and they want to clear the old stock. OpenClaw is likely a very small portion of the actual Mac mini market, unless you are living in a very dense tech area like San Francisco.
One thing of note that people may forget is that the models were not that great just a year ago, so we need to give it time before counting chickens.
The OpenClaw concept is fundamentally insecure by design and prompt injection means it can never be secure.
If Apple were to ever put something like that into the hands of the masses every page on the internet would be stuffed with malicious prompts, and the phishing industry would see a revival the likes of which we can only imagine.
After having spent a few days with OpenClaw I have to say it’s about the worst software I’ve worked with, ever. Everyone has focused on the security flaws, but the software itself is barely coherent. It’s like Moltbook wrote OpenClaw, which wrote Moltbook, in some insidious wiggum loop from hell with no guard rails. The commit rate on the project reflects this.
> Apple had everything: the hardware, the ecosystem, the reputation for “it just works.”
It sounds to me like they still have the hardware, since — according to the article — "Mac Minis are selling out everywhere." What's the problem? If anything, this is validation of their hardware differentiation. The software is easy to change, and they can always learn from OpenClaw for the next iteration of Apple Intelligence.
Because people are forced to buy them. Same as how datacenters are full of mac minis to build iOS apps that could easily be built on any hardware if Apple weren't such corporate bastards.
I don't think it's hardware differentiation as much as vendor lock in because it lets people send iMessages with their agent. Not sure about the running local models on it though.
Given that OpenClaw isn’t a lot of code, Apple could still build their own. After all, a hyper-personal AI Assistant is what they announced as “Apple Intelligence” two WWDCs ago. Or they could buy OpenClaw, hand it to the Shortcuts team, throw in their remaining AI devs, and Bob’s your uncle. They aren’t first to OpenClaw, but maybe they can still be the best. I know I’d like to be sure it can’t erase my entire disk just because I sneeze when I’m telling it what to do.
> ten years from now, people will look back at 2024-2025 as the moment Apple had a clear shot at owning the agent layer and chose not to take it
Why is Apple's hardware being in demand for a use that undermines its non-Chinese competition a sign of missing the ball versus validation for waiting and seeing?
My opinion is it seems counter to what made Apple so successful in the first place: second mover advantage, see where everyone else fails and plug the gap.
You're right on the liability front - Apple still won because everyone bought their hardware and their margins are insanely good. It's not that they're sitting by waiting to become irrelevant, they're playing the long game as they always do.
You don’t look at it, you just talk to it and it can talk back to you. It’s more just having a conversation with a personal assistant while driving. Which is a pretty common thing to do.
This post completely has it backwards: people are buying Apple hardware because they don't shove AI down everyone's throat, unlike Microsoft. And in a few weeks OpenClaw will be outdated or deemed too insecure anyway; it will never be a long-term product, it's just some crazy experiment for the memes.
Apple has a very low tolerance for reputational liabilities. They aren't going to roll out something that 0.01% of the time does something bad, because with 100M devices that's something that'll affect 10,000 people, and have huge potential to cause bad PR, damaging the brand and trust.
Apparently APIs are now a brittle way for software to use other software and interpreting and manipulating human GUIs with emulated mouse clicks and keypresses is a much better and perfectly reasonable way to do it. We’re truly living in a bizarro timeline.
Are people's agents actually clicking buttons (visual computer use) or is this just a metaphor?
I'm not asking if CU exists, but rather is this literally the driver of people's workflows? I thought everyone is just running Ralph loops in CC.
For an article making such a bold technological/social claim about a trillion dollar company, this seems a strange thing to be hand wavey about.
(Ok, I suspect this is one of the main problems.. there may be others?)
> However this does not excuse Apple to sit with their thumbs up their asses for all these years.
They've been wildly successful for all of those years. They've never been in the novel software business. Siri, one could argue, was neglected, but it was also neglected at Amazon with Alexa, and Google's home stuff still sucks too (mostly because none of them made any money and most of their big ideas for voice assistants never came true).
In terms of useful AI agents, Siri/Apple Intelligence has been behind for so long that no one expects it to be any good.
I used to think this was because they didn’t take AI seriously but my assumption now is that Apple is concerned about security over everything else.
My bet is that Google gets to an actually useful AI assistant before Apple because we know they see it as their chance to pull ahead of Apple in the consumer market, they have the models to do it, and they aren’t overly concerned about user privacy or security.
> the open-source framework that lets you run Claude, GPT-4, or whatever model you want to
And
> Here’s what people miss about moats: they compound
Swapping an OpenAI for an Anthropic or open weight model is the opposite of compounding. It is a race to the bottom.
> Apple had everything: the hardware, the ecosystem, the reputation for “it just works.”
From what I hear OC is not like that at all. People are going to want a model that reliably does what you tell it to do inside of (at a minimum) the Apple ecosystem.
This article is talking about the AI race as if it’s over when it’s only started. And really, an opinion of the entire market based on a few reddit posts?
Author spoke of compounding moats, yet Apple’s market share, highly performant custom silicon, and capital reserves just flew over his head. HN can have better articles to discuss AI with than this myopic hot take.
They don't say here is a 1000 $ iphone and there is a 60% chance you can successfully message or call a friend
The other 40% well? AGI is right around the corner and can US govt pls give me 1 trillion dollar loan and a bailout?
This is Yellow Pages type thinking in the age of the internet. No one is going to own an agentic layer (list any of the multitude of platforms already irrelevant, like OpenAI Agent SDK or Google A2A). No one is going to own a new app store (GPTs are already dead). No one is going to own foundation models (FOSS models are extremely capable today). No one is going to own inference (data centers will never be as cost effective as that old MacBook collecting dust that is plenty capable of running a 1B model that can compete with ChatGPT 3.5 and all the use cases that it already was good at, like writing high school essays, recipes etc.) The only thing that is sticking is Markdown (SKILLS.md, AGENTS.md).
This is because the simple reality of this new technology is that this is not the local maximum. Any supposed wall you attempt to put up will fail - real estate website closes its API? Fine, a CUA+VLM will make it trivial to navigate/extract/use. We will finally get back to the right solution of protocols over platforms, file over app, local over cloud - or, you know, the way things were when tech was good.
P.S: You should immediately call BS when you see outrageous and patently untrue claims like "Mac minis are sold out all over.." - I checked my Best Buy in the heart of SF and they have stock. Or that "it's all over Reddit, HN" - the only thing that is all over Reddit is unanimous derision towards OpenClaw and its security nightmares.
Utterly hate the old world mentality in this post. Looked up the author and of course, he's a VC.
Don't underestimate the capitalists. We've seen this many times in the past--most recently the commercialization of the Internet. Before that, phones, radio and television.
> ten years from now, people will look back at 2024-2025 as the moment Apple had a clear shot at owning the agent layer and chose not to take it
I don't pretend to know the future (nor do I believe anyone else who claims to be able to), but I think the opposite has a good chance of happening too: the hype over "AI" dies down, the bubble bursts, and the current overvaluation (imo at least; I still think it is useful as a tool, but overhyped by many who don't understand it) gets corrected by the market; and people will look back and see it as the moment that Apple dodged a bullet. (Or, more realistically, won't think about it at all.)
I know you can't directly compare different situations, but I wonder if comparisons can be made with the dot-com bubble. There was such hype some 20-30 years ago, with claims of being just a year or two away from "being able to watch TV over the internet" or "doing your shopping on the web" or "having real-time video calls online", which did eventually come true, but only much, much later, after a crash from inflated expectations and a slower steady growth.*
* Not that I think some claims about "AI" will ever come true though, especially the more outlandish ones, such as full-length movies made from a prompt at the same quality as one made by a Hollywood director.
I don't know what a potential "breaking point" would be for "AI". Perhaps a major security breach, even _worse_ prices for computer hardware than they are now, politics, a major international incident, environmental impact being made more apparent, companies starting to more aggressively monetize their "AI", consumers realising the limits of "AI"; I have no idea. And perhaps I'm just wrong, and this is the age we live in now for the foreseeable future. After all, more than one of the things I have listed has already happened, and nothing came of it.
This is my guess for the demand side: most people will drift away as the novelty wears off and they don't find it useful in their daily lives. It's more a "fading point" than a "breaking point."
From the investment/speculation side: something will go dramatically against the narrative. OpenAI's attempted "liquidity event" of an IPO looks like WeWork as investors get a look at the numbers, Oracle implodes in a mountain of debt, NVidia cuts back on vendor financing and some major public players (e.g. Coreweave) die in a fire. This one will be a "breaking point."
So yeah, the market isn’t really signaling companies to make nice things.
> And they would have won the AI race not by building the best model, but by being the only company that could ship an AI you’d actually trust with root access to your computer.
and the very next line (because I want to emphasize it):
> That trust—built over decades—was their moat.
This just ignores the history of OS development at Apple. The entire trajectory is moving towards permissions and sandboxing, even if it annoys users to no end. To give an LLM (any LLM, not just a trusted one, according to the author) root access when it's susceptible to hallucinations, jailbreaks, etc. goes against everything Apple has worked for.
And even then the reasoning is circular. "So you build all your trust, now go ahead and destroy it on this thing which works, feels good to me, but could occasionally fuck up in a massive way".
Not defending Apple, but this article is so far detached from reality that it's hard to overstate.
OpenClaw is a very fun project, but it would be considered a dumpster fire if any mainstream company tried to sell it. Every grassroots project gets evaluated on a completely different scale than commercial products. Trying to compare an experimental community project to a hypothetical commercial offering doesn't work.
> They could have charged $500 more per device and people would have paid it.
I sincerely doubt that. If Apple charged $500 for a feature it would have to be completely bulletproof. Every little failure and bad output would be harshly criticized against the $500 price tag. Apple's high prices are already a point of criticism, so adding $500 would be highly debated everywhere.
I think openclaw is proving that the use case, while promising, is very much too early, and nobody can ship a system like that that works the way a consumer expects it to work.
If you can’t see why something like OpenClaw is not ready for production I don’t know what to tell you. People’s perceptions are so distorted by FOMO they are completely ignoring the security implications and dangers of giving an LLM keys to your life.
I’m sure apple et al will eventually have stuff like OpenClaw but expecting a major company to put something so unpolished, and with such major unknowns, out is just asinine.
I do not like reading things like this. It makes me feel very disconnected from the AI community. I defensively do not believe there exist people who would let AI do their taxes.
I used to have little cron jobs that would fire small python scripts daily to help me detect when certain clothes were on sale or in stock on a website it scraped and then send me an email or text. I was proud of that “automation”.
I guess now I’ll just use an AI agent to do the same thing instantly :(
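For anyone who never wrote one of those, a rough sketch of what that kind of daily check can look like (the URL, keyword, and mail settings below are placeholders, and it assumes the requests package plus an SMTP relay you can reach):

```python
# Rough sketch of the daily "is it on sale / back in stock" cron job described
# above. Everything specific here (URL, keyword, addresses, SMTP host) is a
# placeholder, not a real endpoint.
import smtplib
from email.message import EmailMessage

import requests

URL = "https://example.com/products/wool-jacket"  # placeholder product page
KEYWORD = "add to cart"                            # crude in-stock signal

def check_and_notify() -> None:
    html = requests.get(URL, timeout=30).text
    if KEYWORD in html.lower():
        msg = EmailMessage()
        msg["Subject"] = "Back in stock"
        msg["From"] = "me@example.com"
        msg["To"] = "me@example.com"
        msg.set_content(f"Looks like it's available again: {URL}")
        with smtplib.SMTP("localhost") as smtp:  # or your provider's SMTP host
            smtp.send_message(msg)

if __name__ == "__main__":
    check_and_notify()
```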
“People think focus means saying yes to the thing you've got to focus on. But that's not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I'm actually as proud of the things we haven't done as the things I have done. Innovation is saying no to 1,000 things.”
Steve Jobs
I genuinely don't understand this take. What makes OP think that the company that failed so utterly to even deliver mediocre AI -- siri is stuck in 2015! -- would be up to the task of delivering something as bonkers as Clawdbot?
No no no. It's too risky, cutting-edge, and dangerous. While fun to play with, it's not something I'd trust my 92 year old mother with dementia (who still uses an iPad) with.
No. Emphatically NOT. Apple has done a great job safeguarding people's devices and privacy from this crap. And no, AI slop and local automation is scarcely better than giving up your passwords to see pictures of cats, which is an old meme about the gullibility of the general public.
OpenClaw is a symbol of everything that's wrong with AI, the same way that shitty memecoins with teams that rugpull you, or blockchain-adjacent centralized "give us your money and we pinky swear we are responsible" are a symbol of everything wrong with Web3.
Giving everyone GPU compute power and open source models to use it is like giving everyone their own Wuhan Gain of Function Lab and hoping it'll be fine. Um, the probability of NO ONE developing bad things with AI goes to 0 as more people have it. Here's the problem: with distributed unstoppable compute, even ONE virus or bacterium escaping will be bad (as we've seen with the coronavirus for instance, smallpox or the black plague, etc.) And here we're talking about far more active and adaptable swarms of viruses that coordinate and can wreak havoc at unlimited scale.
As long as countries operate on the principle of competition instead of cooperation, we will race towards disaster. The horse will have left the barn very shortly, as open source models running on dark compute will begin to power swarms of bots to be unstoppable advanced persistent threats (as I've been warning for years).
Gain-of-function research on viruses is the closest thing I can think of that's as reckless. And at least there, the labs were super isolated and locked down. This is like giving everyone their own lab to make designer viruses, and hoping that we'll have thousands of vaccines out in time to prevent a worldwide catastrophe from thousands of global persistent viruses. We're simply headed towards a nearly 100% likely disaster if we don't stop this.
If I had my way, AI would only run in locked-down environments and we'd just use inert artifacts it produces. This is good enough for just about all the innovations we need, including for medical breakthroughs and much more. We know where the compute is. We can see it from space. Lawmakers still have a brief window to keep it that way before the genie cannot be put back into the bottle.
A decade ago, I really thought AI would be responsibly developed like this: https://nautil.us/the-last-invention-of-man-236814/ I still remember the quaint time when OpenAI and other companies promised they'd vet models really strongly before releasing them or letting them use the internet. That was... 2 years ago. It was considered an existential risk. No one is talking about that now. MCP just recently was the new hotness.
I wasn't going to get too involved with building AI platforms but I'm diving in and a month from now I will release an alternative to OpenClaw that actually shows the way how things are supposed to go. It involves completely locked-down environments, with reproducible TEE bases and hashes of all models, and even deterministic AI so we can prove to each other the provenance of each output all the way down to the history of the prompts and input images. I've already filed two provisional patents on both of these and I'm going to implement it myself (not an NPE). But even if it does everything as well as OpenClaw and even better and 100% safely, some people will still want to run local models on general purpose computing environments. The only way to contain the runaway explosion now is to come together the same way countries have come together to ban chemical weapons, CFCs (in the Montreal protocol), let the hole in the ozone layer heal, etc. It is still possible...
This is how I feel:
https://www.instagram.com/reels/DIUCiGOTZ8J/
PS: Historically, for the last 15 years, I've been a huge proponent of open source and an opponent of patents. When it comes to existential threats of proliferation, though, I am willing to make an exception on both.
OpenClaw is very much a greenfield idea and there's plenty of startups like Raycast working in this area.
For example: https://x.com/michael_chomsky/status/2017686846910959668.
A "bicycle for the mind" got replaced with a "kiosk for your pocketbook".
The Vision Pro has an amazing interface, but it's set up as a place to rent videos and buy throwaway novelty iPad-style apps. It allows you to import a Mac screen as a single window, instead of expanding the Mac interface, with its Mac power and flexibility, into the spacial world.
Great hardware. Interesting, but locked down software.
If Tim Cook wanted to leave a real legacy product, it should have been a Vision Pro aimed as an upgrade on the Mac interface and productivity. Apple's new highest end interface/device for the future. Not another mid/low-capability iPad type device. So close. So far.
$3500 for an enforced toy. (And I say all this as someone who still uses it with my Mac, but despairs at the lack of software vision.)
I've thought this too. Apple might be one of the only companies that could pull off bringing an existing consumer operating system into 3D space, and they just... didn't.
On Windows, I tried using screen captures to separate windows into 3D space, but my 3090 would run out of texture space and crash.
Maybe the second best would be some kind of Wayland compositor.
AirTag is a perfect example of their hardware prowess that even Google fails to replicate to this date.
Sure why not, what could go wrong?
"Siri, find me a good tax lawyer."
"Your honor, my client's AI agent had no intent to willfully evade anything."
Tiny open source projects can just say "use at your own risk" and offload all responsibility.
Imagine if the government would just tell everyone how much they owed and obviated the need for effing literal artificial intelligence to get taxes done!
>> respond to emails
If we have an AI that can respond properly to emails, then the email doesn't need to be sent in the first place. (Indeed, many do not need to be sent nowadays either!)
Except this doesn't stand up to scrutiny, when you look at Siri. FOURTEEN years and it is still spectacularly useless.
I have no idea what Siri is a "much nicer version" of.
> Apple can integrate it with hardware and software in a way no other company can.
And in the case of Apple products, oftentimes "because Apple won't let them".
Lest I be called an Apple hater, I have 3 Apple TVs in my home, my daily driver is a M2 Ultra Studio with a ProDisplay XDR, and an iPad Pro that shows my calendar and Slack during the day and comes off at night. iPhone, Apple Watch Ultra.
But this is way too worshipful of Apple.
There are lots of failed products in nearly every company’s portfolio.
AirTags were mentioned elsewhere, but I can think of others too. Perfected might be too fuzzy & subjective a term though.
Both of which have been absolutely underwhelming if not outright laughable in certain ways.
Apple has done plenty right. These two, which are the closest to the article, are not it.
It's obviously broken, so no, Apple Intelligence should not have been this.
Straight up bullshit.
Saved you a click. This is the premise of the article.
no, seriously, that is a thing people are using it for