AI is only different if it reaches a hard takeoff state and becomes self-aware, self-motivated, and self-improving. Until then it's an amazing productivity tool, but only that. And even then we're still decades away from the impact being fully realized in society. Same as the internet.
In every technology wave so far, we've disrupted many existing jobs. However, we've also opened up new kinds of jobs. And because it has been easier to retrain humans than to build machines for those jobs, we wound up with more and better jobs.
This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?
I don't think that we have a good answer to that. And we may need it sooner rather than later. I'd be more optimistic if I trusted our leadership more. But wise political leadership is not exactly a strong point for our country right now.
> but which can be trained to the new job opportunities more easily than humans can
What makes you think that? Self-driving cars have had untold billions of dollars in research and decades of applied testing, iteration, active monitoring, etc., and they still have a very long tail of unaddressed issues. They've been known to ignore police traffic redirections, they've run right through construction barriers, and recently they were burnt to a crisp in the LA riots, completely ignorant of the turmoil going on around them. A human driver is still far more adaptive and requires a lot less training than AI, and humans are ready to handle the infinitely long tail of exceptions to the otherwise algorithmic task of driving, which follows strict rules.
And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than driving, with even less training data to go off, I find myself firmly in the skeptics' camp, which holds that you will struggle even harder to apply humanoid robotics in uncontrolled environments across a diverse range of tasks without human intervention, piloting, maintenance, or management.
Unemployment is still near all-time lows, and this will persist for some time, since we have a structural demographic problem: massive numbers of retirees and fewer children to support the population "pyramid" (which is looking more like a tapering rectangle these days).
A few months ago I saw one driverless car maybe every three days. Now I see roughly 3-5 every day.
I get that it’s taken a long time and a lot of hype that hasn’t panned out. But once the tech works and it’s just about juicing the scale then things shift rapidly.
Even if you think “oh, that’s the next generation’s problem”: if there is a chance you’re wrong, or if you want to be kind to the next generation, now is the time to start thinking and planning for those problems.
I think the most sensible answer would be something like UBI. But I also think the most sensible answer for climate change is a carbon tax. Just because something is sensible doesn’t mean it’s politically viable.
The major shift for me is that it’s now normal to take Waymos. Yeah, they aren’t as fast as Uber if you have to get across town, but for trips under 10 miles they’re my go-to now.
On the other hand, the Tesla “robotaxi” scares the crap out of me. No lidar, and it seems to drive more aggressively. The Mark Rober YouTube video of a Tesla plowing into a Road Runner-style fake tunnel is equal parts hilarious and nightmare fuel when you realize that’s what’s next to your kid biking down the street.
I feel like you are trapped in the first assessment of this problem. Yes, we are not there yet, but have you thought about the rate of improvement? Is that rate of improvement reliable? Fast? That's what matters, not where we are today.
Everything anyone could say about bad AI driving could be said about bad human drivers. Nevertheless, Waymo has not had a single fatal accident despite many millions of passenger miles and is safer than human drivers.
Self-driving cars are a political problem, not a technical problem. A functioning government would put everything from automation-friendly signaling standards to battery-swapping facilities into place.
We humans used to do that sort of thing, but not anymore, so... bring on the AI. It won't work as well as it might otherwise be able to, but it'll probably kill fewer humans on the road at the end of the day. A low bar to clear.
Self-driving car companies don't want a unified signalling platform or other "open for all" infrastructure updates. They want to own self-driving, to lock you into a subscription on their platform.
Literally the only open-source self-driving platform, from trillion- to billion- to million-dollar companies, is comma.ai, founded by Geohot. That's it. It's actually very good, and I bet they would welcome these upgrades, but that would be a consortium of one underdog pushing for them.
Those jobs are probably still a couple of decades or more away from displacement, some possibly never, and we will need them in higher numbers. Perhaps it's ironic that these are some of the oldest professions.
Everything we do is in service of paying for our housing, transportation, food, healthcare, and some fun money.
Most goes to housing, healthcare, and transportation.
Healthcare costs may come down some with advancements in AI. R&D will be cheaper. Knowledge will be cheaper and more accessible.
But what people care about, what people have always cared about, remains in professions that are as old as time, and I don't see them as fully replaceable by AI just yet: enhanced, yes, but not replaced.
Imagine a world where high-quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's Uber driver owns a team of gardening androids.
Or perhaps in the future everyone will work in finance. Everyone's a corporation.
> Imagine a world where high-quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's Uber driver owns a team of gardening androids.
I think it's going to be the other way around. It's looking like automation of dynamic physical capability is going to be the very last thing we figure out; what we're going to get first is teams of lower-skilled human workers directed largely by jobsite AI. By the time the robots get there, they're not going to need a human watching them.
What this delusion seems to turn a blind eye to is that a good chunk of the population is already in those roles; what happens when the supply of labor for those roles far exceeds the demand, in a relatively short time? Carpenters suddenly abundant, carpenter wages drop, carpenters struggling to live, carpenters forced to tighten spending, carpenters deciding children aren't affordable... now extrapolate that across all of the impacted roles and industries. No doubt someone is already typing "carpenters can retrain too!" OK, so they're back to entry-level wages (if anything) for 5+ years? Same story. And retrain to what?
At some point an equilibrium will be reached but there is no guarantee it will be a healthy situation or a smooth ride. This optimism about AI and the rosy world that is just around the corner is incredibly naive.
In your example, I think it's a great deal more likely that the Uber driver is paid a tiny stipend to supervise a squad of gardening androids owned at substantial expense by Amazon Yard.
> This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?
Assuming AI doesn't get better than humans at everything, humans will be supervising and directing AIs.
Because otherwise you'd have to convince AI-owners and select professionals to let go of their wealth to give a comfortable and fulfilling life of leisure to the unemployed.
More likely it will look like the current welfare schemes of many countries, now add mass boredom leading to unrest.
Here in the US, we have been getting a visceral lesson in people’s willingness to sacrifice their own interests so long as they’re sticking it to The Enemy.
It doesn’t matter if the revolution is bad for commoners — they will support it anyway if the aristocracy is hateful enough.
Most of the people who died in The Terror were commoners who had merely not been sympathetic enough to the revolution. And then that sloppiness led to reactionary violence, and there was a lot of back and forth until Napoleon took power and was pretty much a king in all but heritage.
Hopefully we can be a bit more precise this time around.
We have to also choose to build technology that empowers people. Empowering technologies don't just pop into existence, they're created by people who care about empowering people.
Go to any war-torn country or collapsed empire (the Soviet Union). I have seen both and grew up in one myself — you get desperation, people giving up, alcohol (the famous "X"-cross of birth rates dropping and deaths rising), drugs, crime, corruption and warlordism. Rural communities are hit first and vanish totally, then small-tier cities vanish, then mid-tier; only the largest hubs remain. Loss of science, culture, and education. People are just gone. Only the ruins of whatever shelters they last had remain, not even their prime-time architecture. You can drive hundreds or thousands of kilometers across these ruins of what was once a flourishing culture. Years ago you would find one old person still living there; these days, not a single human is left. This is what is coming.
During the Industrial Revolution, many who made a living by the work of their hands lost their jobs, because there were machines and factories to do their work. Then new jobs were created in factories, and then many of those jobs were replaced by robots.
Somehow many idiotic white collar jobs have been created over the years. How many web applications and websites are actually needed? When I was growing up, the primary sources of knowledge were teachers, encyclopedias, and dictionaries, and those covered a lot. For the most part, we’ve been inventing problems to solve and wasting a tremendous amount of resources.
Some wrote malware or hacked something in an attempt to keep this in check, but harming and destroying just means more resources used to repair and rebuild, and real people can be hurt.
At some point in coming years many white collar workers will lose their jobs again, and there will be too many unemployed because not enough blue collar jobs will be available.
There won’t be some big wealth redistribution until AI convinces people to do that.
The only answer is to create more nonsense jobs, like AI massage therapist and robot dog walker.
One thing that doesn’t seem to be discussed with the whole “tech revolution just creates more jobs” angle is that, in the near future, there are no real incentives for that. If we’re going down the route of declining birth rates, it’s implied we’ll also need fewer jobs.
From one perspective, it’s good that we’re trying to over-automate now, so we can sustain ourselves in old age. But decreasing population also implies that we don’t need to create more jobs. I’m most likely wrong, but it just feels off this time around.
I'm skeptical of arguments like this. If we look at most impactful technologies since the year 2000, AI is not even in my top 3. Social networking, mobile computing, and cloud computing have all done more to alter society and daily life than has AI.
And yes, I recognize that AI has already created profound change, in that every software engineer now depends heavily on copilots, in that education faces a major integrity challenge, and in that search has been completely changed. I just don't think those changes are on the same level as the normalization of cutting-edge computers in everyone's pockets, as our personal relationships becoming increasingly online, or as the enablement for startups to scale without having to maintain physical compute infrastructure.
To me, treating AI as "different" is still unsubstantiated. Could we get there? Absolutely. We just haven't yet. But some people start to talk about it almost in a way that's reminiscent of Pascal's Wager, as if the slight chance of a godly reward from producing AI means it is rational to devote our all to it. But I'm still holding my breath.
> in that every software engineer now depends heavily on copilots
That may be a bubble around the internet industry. In my experience, most programmers in my environment rarely use copilots and certainly aren't dependent on them. They also don't only do code-monkey-esque web programming, so maybe this is sampling bias, though it should be enough to refute the point.
A pessimistic/realistic view of post-high-school education: credentials are proof of being able to do a certain amount of hard work, used as an easy filter by companies when hiring.
I expect universities to adapt quickly, lest they lose their whole business as degrees stop carrying the same meaning to employers.
When I hear folks glazing some kind of impending jobless utopia, I think of the intervening years. I shudder. As they say, "An empty stomach knows no morality."
Yes. The complete irony in all software engineers' enthusiasm for this tech is that, if the boards' wishes come true, they are literally helping eliminate their own jobs. It's like the industrial revolution but worse, because at least the craftsmen weren't also the ones building the factories that would automate them out of work.
Marcuse had a term for this, "false consciousness": when the structure of capitalism ends up making people work against their own interests without realizing it. That is happening big time in software right now. We will still need programmers for hard, novel problems, but all these lazy programmers using AI to write their CRUD apps don't seem to realize the writing is on the wall.
This is an accurate assessment. I do feel that there is a routine bias on HN to underplay AI. I think it's people not wanting to lose control or relative status in the world.
AI is an existential threat to the unique utility of humans, which has been the last line of defense against absolute despotism (i.e. a tyrannical government will not kill all its citizens because it still needs them to perform jobs. If humans aren't needed to sustain productivity, humans have no leverage against things becoming significantly worse for them, gradually or all at once).
> After all, a plateau of the current systems is possible and very credible, but it would likely stimulate, at this point, massive research efforts in the next step of architectures.
A lot of AI’s potential hasn’t even been realized yet. There’s a long tail of integrations and solution building still ahead. A lot of creative applications haven’t been realized yet - arguably for the better, but it will be tried and some will be economical.
That’s a case for a moderate economic upturn though.
I'd argue that the applications of LLMs are well known, but that LLMs currently aren't capable of performing those tasks.
Everyone wants to replace their tech support with an LLM but they don't want some clever prompter to get it to run arbitrary queries or have it promise refunds.
We are not there yet, but if AI could replace a sizable share of workers, the economic system would be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.
There will be fewer very large companies in terms of headcount. There will be many more companies that are much smaller, because you don't need as many workers to do the same job.
Instead of needing 1000 engineers to build a new product, you'll need 100. Those 900 engineers will be working for 9 new companies that weren't viable before because the cost was too big. I.e., those 9 new companies could never have been profitable with 1000 engineers each, but can totally sustain themselves with 100 engineers each.
Maybe, or 300 of those engineers will be working for 3 new companies while the other 600 struggle to find gainful employment, even after taking large pay cuts, as their skillsets are replaced rather than augmented. It’s way too early to call afaict
Because it's so easy to make new software and sell it using AI, 6 of those 600 people who are unemployed will have ideas that require 100 engineers each to make. They will build a prototype, get funding, and hire 99 engineers each.
There are also plenty of ideas that aren't profitable with 2 salaries but are with 1. Many will be able to make those ideas happen with AI helping.
The more software AI can write, the more of a commodity software will become, and the harder the value of software will tank. It's not magic.
Total size of the software industry will still increase.
Today, a car repair shop might have a need for custom software that would make their operations 20% more efficient. But they don't have nearly enough money to hire a software engineer to build it for them. With AI, it might be worth it for an engineer to actually do it.
Plenty of little examples like that where people/businesses have custom needs for software but the value isn't high enough.
So far, every technology that replaced jobs in a given role has induced more demand on its precursors, creating more jobs than ever existed before. If the only potential application of this were just language, the historic trend that humans would simply fill new roles would hold true. But if we do the same with motor movements in a generalized form factor, that is really where the problem emerges. As companies drop more employees, moving toward fully automated closed-loop production, their consumer market fails faster than they can reach zero cost.
Nonetheless, I do still believe humans will continue to be the more cost-efficient way to come up with and guide new ideas. Many human-performed services will remain desirable because of their virtue and our sense of emotion and taste for a moment that other humans are feeling too. But how much of the populace does that engage? I couldn't guess right now. Though if I were to imagine what might make things turn out better than worse, it would be that AI is personally ownable, and that everyone owns, at least in title, some energy production which they can do things with.
I am a relentlessly optimistic person and this is the first technology that I've seen that worries me in the decades I've been in the biz.
It's a wonderful breakthrough, nearly indistinguishable from magic, but we're going to have to figure something out, whether that's Universal Basic Income (UBI) or something along those lines; otherwise, the loss of jobs that is coming will lead to societal unrest or worse.
AI with ability but without responsibility is not enough for dramatic socioeconomic change, I think. For now, the critical unique power of human workers is that you can hold them responsible for things.
edit: ability without accountability is the catchier motto :)
This is a great observation. I think it also accounts for what is so exhausting about AI programming: the need for such careful review. It's not just that you can't entirely trust the agent, it's also that you can't blame the agent if something goes wrong.
I’m surprised that I don’t hear this mentioned more often, not even in an Eng leadership form of taking accountability for your AI’s pull requests. But it’s absolutely true. Capitalism runs on accountability and trust, and we are clearly not going to trust a service that doesn’t have a human responsible at the helm.
> Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.
Companies have to be a bit more farsighted than this thinking. Assuming LLMs reach this peak... if, say, MS says they can save money because they don't need XYZ anymore because AI can do it, XYZ can decide they don't need Office anymore because AI can do it.
There's absolutely no moat anymore. Human capital and the sheer volume of code are the current moat. An all-capable AI completely eliminates both.
It's a bit scary to say "what then?" How do you make money in a world where everyone can more or less do everything themselves? Perhaps like 15 Million Merits, we all just live in pods and pedal bikes all day to power the AI(s).
Isn't this exactly the goal of open source software? In an ideal open source world, anything and everything is freely available; you can host and set up anything and everything on your own.
Software is now free, and all people care about is the hardware and the electricity bills.
This is why I’m not so sure we’re all going to end up in breadlines even if we all lose our jobs. If the systems are that good (tm), then won’t we all just be doing amazing things all the time? We will be tired of winning?
> AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction.
In which science fiction were the dreamt up robots as bad?
Humans have a proven history of re-inventing economic systems, so if AI ends up thinking better than we do (as yet unproven to be possible), then we should end up with superior future systems.
But the question is: a system optimized for what? One that emphasizes huge rewards for the few and requires the poverty of some (or many)? Or a fairer system? Not so different from the challenges of today.
I'm skeptical that even a very intelligent machine will change the landscape of our difficult decisions, but it will accelerate whichever direction we decide (or have decided for us) to go.
> That doesn't mean we _understand_ them, that just means we can put the blocks together to build one.
Perhaps this[0] will help in understanding them then:
Foundations of Large Language Models
This is a book about large language models. As indicated by
the title, it primarily focuses on foundational concepts
rather than comprehensive coverage of all cutting-edge
technologies. The book is structured into five main
chapters, each exploring a key area: pre-training,
generative models, prompting, alignment, and inference. It
is intended for college students, professionals, and
practitioners in natural language processing and related
fields, and can serve as a reference for anyone interested
in large language models.

0 - https://arxiv.org/abs/2501.09223
> "However, if AI avoids plateauing long enough to become significantly more useful..."
As William Gibson said, "The future is already here, it's just not evenly distributed." Even if LLMs, reasoning algorithms, object recognition, and diffusion models stopped improving today, we're still at a point where massive societal changes are inevitable as the tech spreads out across industries. AI is going to steadily replace chair-to-keyboard interfaces in just about every business you can imagine.
Interestingly, AI seems to be affecting the highest level "white collar" professionals first, rather than replacing the lowest level workers immediately, like what happened when blue collar work was automated. We're still pretty far away from AI truck drivers, but people with fine arts or computer science degrees, for example, are already feeling the impact.
"Decimation" is definitely an accurate way to describe what's in the process of happening. What used to take 10 floors of white collar employees will steadily decline to just 1. No idea what everyone else will be doing.
I agree with the general observation, and I've been of this mind since 2023 (if AI really gets as good as the boosters claim, we will need a new economic system). I usually like Antirez's writing, but this post was a whole lot of... idk, nothing? I don't feel like this post said anything interesting, and it was kind of incoherent at moments. I think in some respects that's a function of the technology and situation we're in: the current wave of "AI" is still a lot of empty promises and underdelivery. Yes, it is getting better, and yes, people are getting clever by letting LLMs use tools, but these things still aren't intelligent insofar as they do not reason. Until we achieve that, I'm not sure there's really as much to fear as everyone thinks.
We still need humans in the loop as of now. These tools are still very far from being good enough to fully autonomously manage each other and manage systems, and, arguably, because the systems we build are for humans we will always need humans to understand them to some extent. LLMs can replace labor, but they cannot replace human intent and teleology. One day maybe they will achieve intentions of their own, but that is an entirely different ballgame. The economy ultimately is a battle of intentions, resources, and ends. And the human beings will still be a part of this picture until all labor can be fully automated across the entire suite of human needs.
We should also bear in mind our own bias as "knowledge workers". Manual laborers arguably already had their analogous moment, and the economy kept on humming. There isn't anything particularly special about "white collar" work in that regard; the same thing may happen. A new industry requiring new skills might emerge in the fallout of white-collar automation. Not to mention, LLMs only work in the digital realm; handicraft artisanry is still a thing and is still appreciated, albeit in much smaller markets.
By all means, continue to make or improve your Llamas/Geminis (to the latter: stop censoring Literally Everything. Google has a culture problem. To the former... I don't much trust your parent company in general)
It will undoubtedly lead to great advances
But for the love of god, do not tightly bind them to your products (Kagi does it alright; they don't force it on you). Do not make your search results worse. Do NOT put AI in charge of automatic content moderation with 0 human oversight (we know you want to; the economics of it work out nicely for you, with no accountability). People already get banned far too easily by your automated systems as it is.
"Undoubtedly" seems like a level of confidence that is unjustified. Like Travis Kalanick thinking AI is just about to help him discover new physics, this seems to suggest that AI will go from being able to do (at best) what we can already do if we were simply more diligent at our tasks to being something genuinely more than "just" us
Salvatore is right about the fact that we have not seen the full story yet: LLMs are stalling/plateauing, but active research is already ongoing to find different architectures and models.
And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.
I also think he is wrong about the market's reaction: markets are inherently good integrators and bad predictors, and we should not expect to learn anything about the future by looking at stock movements.
Manhattan and Apollo were both massive engineering efforts, but fundamentally we understood the science behind them. As long as we could solve some fairly clearly stated engineering problems and spend enough money to actually build the solutions, those projects would work.
A priori, it was not obvious that those clearly stated problems had solutions within our grasp (see fusion), but at least we knew what the big picture looked like.
With AI, we don't have that, and never really had that. We've just been making gradual, incremental improvements to AI itself, and exponential improvements in the amount of raw compute we can throw at it. We know that we are reaching fundamental limits on transistor density, so compute power will plateau unless we find a different paradigm for improvement; and those paradigms are all currently in the same position as fusion in terms of engineering.
LLMs are just the latest in a very long line of disparate attempts at making AI, and arguably the most successful.
That doesn't mean the approach isn't an evolutionary dead end, like every other so far, in the search for AGI. In fact, I suspect that is the most likely case.
Current GenAI is nothing but a proof of concept. The seed is there. What AI can do at the moment is irrelevant. This is like the discovery of DNA. It changed absolutely everything in biology.
The fact that something simple like the Transformer architecture can do so much will spark so many ideas (and investment!) that it's hard to imagine that AGI will not happen eventually.
> Salvatore is right about the fact that we have not seen the full story yet: LLMs are stalling/plateauing, but active research is already ongoing to find different architectures and models.
They will need to be so different that any talk implying current LLMs eventually replaced humans will be like saying trees eventually replaced horses because the first cars were wooden.
> And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.
It's not useful to blindly compare scale. We're not approaching AI like the Manhattan or Apollo projects, we're approaching this like we did crypto, and ads, and other tech.
That's not to say nothing useful will come out of it; I think very amazing things will come out of it and already have... but none of them will resemble mass replacement of skilled workers.
We're already so focused on productization and typical tech distractions that this is nothing like those efforts.
(In fact thinking a bit more, I'd say this is like the Space Shuttle. We didn't try to make the best spacecraft for scientific exploration and hope later on it'd be profitable in other ways... instead we immediately saddled it with serving what the Air Force/DoD wanted and ended up doing everything worse.)
> I also think he is wrong about the market's reaction: markets are inherently good integrators and bad predictors, and we should not expect to learn anything about the future by looking at stock movements.
I agree, so it's wrong about the other half of the punchline too.
I kind of want to put up a wall of fame/shame of these people to be honest.
Whether they turn out right or wrong, they undoubtedly cheered on the prospect of millions of people suffering just so they can sound good at the family dinner.
LLMs are limited because we want them to do jobs that are not clearly defined, have difficult-to-measure progress or success metrics, are not fully solved problems (open-ended), or have poor grounding in an external reality. Robotics does not suffer from those maladies. There are other hurdles, but none are intractable.
I think we might see AI being much, much more effective with embodiment.
What? Robotics will have far more ambiguity and nuance to deal with than language models, and robots will have to analyze realtime audio and video to cope with it. Jobs are not as clearly defined in the real world as you imagine. For example, explain to me what a plumber does, precisely, and how you would train a robot to do it. How do you train it to navigate any type of building's internal plumbing and safely repair or install it?
Innovation in terms of helping devs do cool things has been insane.
There've been next to no advancements relative to what's needed to redefine our economic systems by replacing the majority of skilled workers.
-
Productionizing test-time compute covers 80% of what we've gotten in the last 6-8 months. Advancements in distillation and quantization cover the other 20%... neither unlocks some path to mass unemployment.
What we're doing is like 10x'ing your vertical leap when your goal is to land on the moon: 10x is very impressive and you're going to dominate some stuff in ways no one ever thought possible.
But you can 100x it and it's still not getting you to the moon.
I think GPT-5's backlash was the beginning of the end of the hype bubble, but there's a lot of air to let out of it, as with any hype bubble. We'll see it for quite some time yet.
This same link was submitted 2 days ago. My comment there still applies.
LLMs do not "understand the human language, write programs, and find bugs in a complex code base"
"LLMs are language models, and their superpower is fluency. It’s this fluency that hacks our brains, trapping us into seeing them as something they aren’t."
> Imagine a world where high-quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's Uber driver owns a team of gardening androids.
> Or perhaps in the future everyone will work in finance. Everyone's a corporation.
Ramble ramble ramble
> I think it's going to be the other way around. It's looking like automation of dynamic physical capability is going to be the very last thing we figure out; what we're going to get first is teams of lower-skilled human workers directed largely by jobsite AI. By the time the robots get there, they're not going to need a human watching them.
That will do very well for salaries, I think, and everyone will be better off.
> More likely it will look like the current welfare schemes of many countries, now add mass boredom leading to unrest.
Sam Altman has expressed a preference for paying people in vouchers for using his chatbots to kill time: https://basicincomecanada.org/openais-sam-altman-has-a-new-i...
We must keep our peasants busy, or they'll grow restless out of boredom!
It’s hard for me to imagine that AI won’t be as good or better than me at most things I do. It’s quite a sobering feeling.
> It doesn’t matter if the revolution is bad for commoners — they will support it anyway if the aristocracy is hateful enough.
The status quo does not go well for the average person.
We assume there must be something to transition to. Very well: there can be nothing.
We assume people will transition. Very well: they may not transition at all, and may "disappear" en masse (the same effect as a war or an empire collapse).
So many engineers are so excited to work on and with these systems, opening 20 PRs per day to make their employers happy, going “yes boss!”
They think their $300k total compensation will give them a seat at the table for what they’re cheering on to come.
I say that anyone who needed to go to the grocery store this week will not be spared by the economic downturn this tech promises.
Unless you have your own fully stocked private bunker with security detail, you will be affected.
Make of that what you will.
When we discuss how LLMs failed or succeeded, as a norm, we should start including:

- the language/framework
- the task
- our experience level (highly familiar, moderately familiar, I think I suck, unfamiliar)

Right now we hear both that Claude is magic and that LLMs are useless, but never how we move between those two states.

This level of uncertainty, when economy-making quantities of wealth are being moved, is “unhelpful”.
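As a sketch of what that norm could look like (Python; the class and field names here are my own invention, not any existing schema):

    from dataclasses import dataclass

    @dataclass
    class LLMAnecdote:
        """Context that should travel with any "LLMs are magic/useless" claim."""
        language_framework: str  # e.g. "Python/Django"
        task: str                # e.g. "add pagination to an admin view"
        experience: str          # "highly familiar" | "moderately familiar"
                                 # | "I think I suck" | "unfamiliar"
        model_config: str        # model, subscription tier, settings
        outcome: str             # what actually happened, concretely

    report = LLMAnecdote(
        language_framework="TypeScript/React",
        task="refactor a form component",
        experience="highly familiar",
        model_config="GPT-5 Thinking, Pro subscription",
        outcome="worked after two prompts; one hallucinated prop name",
    )
    print(report)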
> Everyone wants to replace their tech support with an LLM but they don't want some clever prompter to get it to run arbitrary queries or have it promise refunds.
It's not reliable because it's not intelligent.
> Assuming LLMs reach this peak...
> Human capital and the sheer volume of code are the current moat. An all-capable AI completely eliminates both.

I would posit that understanding is "the current moat."
This really misunderstands what the stock market tracks.
This is demonstrably wrong. An easy refutation to cite is:
https://medium.com/@akshatsanghi22/how-to-build-your-own-lar...
As to the rest of this pontification, well... it has nearly three times as many qualifiers (5 "if"s, 4 "could"s, and 5 "will"s) as paragraphs (5).
> but none of them will resemble mass replacement of skilled workers
unless you consider people who write clickbait blogs to be skilled workers, in which case the damage is already done.
> AI is exceptional for coding! [high-compute scaffold around multiple instances / undisclosed IOI model / AlphaEvolve]
> AI is awesome for coding! [Gpt-5 Pro]
> AI is somewhat awesome for coding! ["gpt-5" with verbosity "high" and effort "high"]
> AI is pretty good at coding! [ChatGPT 5 Thinking through a Pro subscription with Juice of 128]
> AI is mediocre at coding! [ChatGPT 5 Thinking through a Plus subscription with a Juice of 64]
> AI sucks at coding! [ChatGPT 5 auto routing]
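For the record, a sketch of how the same request might be pinned to two of those tiers via the OpenAI Python SDK (the parameter names follow the public Responses API, but treat the exact values, and whether your account actually routes to them, as my assumptions):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    prompt = "Find the bug in this function: ..."

    # "AI is somewhat awesome": explicit model, high reasoning effort
    high = client.responses.create(
        model="gpt-5",
        reasoning={"effort": "high"},
        input=prompt,
    )

    # "AI sucks": minimal effort -- same product name, very different system
    low = client.responses.create(
        model="gpt-5",
        reasoning={"effort": "minimal"},
        input=prompt,
    )

    print(high.output_text)
    print(low.output_text)

Much of the disagreement above plausibly reduces to which rung of this ladder people were actually talking to.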
> Whether they turn out right or wrong, they undoubtedly cheered on the prospect of millions of people suffering just so they can sound good at the family dinner.
I wouldn’t want to work for or with these people.
As a large language model developed by OpenAI I am unable to fulfill that request.