It's wild that there are as many jobs in the category "Top Executives" as in the category "Retail Sales Worker".
This makes sense given both automation and the US's role in the global economy, but it runs somewhat contrary to standard ideas of class and inequality.
That category has a median pay of $105,350, and includes "general and operations managers" as well as "chief executives". I assume it includes executives of very small enterprises.
Good point. To take it one step further: if they're including 'general managers' and 'operations managers' in this bucket, then that should include the GM and ops manager at places like retail stores as well. For example, every Best Buy location has both positions, and I'm sure it's similar for Walmart and other big-box retailers.
Remember that tech exec salaries are extreme outliers. I worked for an exec in manufacturing. He had full P&L responsibility for a business segment with ~150 employees, $27 million in revenue at 40% gross margins, and a production plant. His total comp was ~$300k.
Now just think of the comp levels in sectors like government, education, etc.
> Remember that tech exec salaries are extreme outliers.
It's the combination of tech and big or fast-growing companies.
People who operate in FAANG or Silicon Valley bubbles (or who spend too much time on Blind) can lose track of what salaries look like in the rest of the world.
I often share Buffer's open salary page because their compensation is actually pretty normal from all of the data I've seen and hiring I've done: https://buffer.com/salaries
Every time it gets posted there are comments from people aghast that the software engineers "only" make $200K and in disbelief that the CEO's salary is "only" $300K.
When people think "top executives" they think of a very, very small group of people making tens of millions of dollars a year or much more. In reality, that's not the case for most of this category.
These categories are extremely broad. Top Executives includes general managers, legislators, school superintendents, mayors, city administrators, and a lot of other government jobs. The name is misleading; it's basically non-frontline management.
Chief Executives is actually a specific sub-category of it and is, obviously, much smaller.
If AI produces surplus where does it go? Not talking about investment backed datacenter buildout and AI labs. Talking about the results of AI work...
I think AI's surpluses flow to the contexts where it's used, and change how we work and what work we take on. Competition then reinvests those surpluses in new structure, which becomes load-bearing, and we can't do without it anymore.
In the end it looks like we are treading water, just like it was when computers got 1M times faster in a couple of decades, but we felt very little improvement in earnings or reduction in work.
Surplus becomes structure and the changed structure is something you can't function without. Like the cell and mitochondrion, after they merged they can't be apart, can't pay their costs individually anymore. Surplus is absorbed into the baseline cost.
> If AI produces surplus where does it go? Not talking about investment backed datacenter buildout and AI labs. Talking about the results of AI work...
The 1%'s pockets. That's where the vast majority of the extra productivity from computers/internet/automation has gone for the last 50 years: https://www.epi.org/productivity-pay-gap/
The study doesn't say it went into the 1%'s pockets. It says it went to 2 places:
1) The salaries of corporate employees
2) Shareholders and capital owners
Regarding number 2: "Shareholders" would include anyone who owns any stock at all, including a lot of middle class people with a simple S&P 500 ETF in their portfolio.
And the increase in productivity allowed more people to become capital owners, AKA entrepreneurs. The explosion in software entrepreneurs, for example.
#2 only works if the public is allowed to invest when the new technology is in its early stages, which is currently not the case. Microsoft went public in 1986 at a valuation of $2.3 billion (in today's dollars). What's OpenAI / Anthropic going to be worth by the time they IPO? $1 trillion? $2 trillion?
> Regarding number 2: "Shareholders" would include anyone who owns any stock at all, including a lot of middle class people with a simple S&P 500 ETF in their portfolio.
Yes, but shares are not at all uniformly distributed. Tim Cook owns 3.28 million shares of AAPL. For comparison, the 50 million Vanguard customers have to divide 1.3 billion shares amongst them, averaging about 26 shares of AAPL each.
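The concentration point above is easy to check with back-of-envelope arithmetic (the share counts are the approximate figures quoted in the comment, not current holdings):

```python
# Approximate figures from the comment above, not current data.
cook_shares = 3_280_000                 # Tim Cook's AAPL holding
vanguard_aapl_shares = 1_300_000_000    # AAPL shares held across Vanguard funds
vanguard_customers = 50_000_000

avg_per_customer = vanguard_aapl_shares / vanguard_customers
print(avg_per_customer)                  # -> 26.0 shares each

# One insider's stake equals the combined average stake of ~126,000 customers.
print(round(cook_shares / avg_per_customer))  # -> 126154
```

So even though tens of millions of people hold Apple through index funds, a single executive's position outweighs the average stake of over a hundred thousand of them combined.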
> And the increase in productivity allowed more people to become capital owners, AKA entrepreneurs. The explosion in software entrepreneurs, for example.
The majority of those end up getting bought by larger software companies.
Overall capital ownership is increasingly concentrated among a small number of elites.
A good indicator that someone is simply being dogmatic and not arguing in good faith (i.e. actually trying to understand the other person's POV, and being open to being proven wrong in their assumptions) is when they reply within 5-20 minutes until a particularly good point is made, and then they disappear into the ether.
I think getting into the weeds on whether $80k or $100k or $120k/yr is middle class sort of misses the point, but at least to my eyes, it's hard to argue you're middle class if you're making more than about $150k at the most.
Even the GP, which I directionally agree with, says "upper-middle class is people making ~$200k/yr" but you're deep into the top quintile by that point, probably top 10%. I don't know what percentile I consider "upper middle" but it's definitely lower than top 10%.
If existing capital starts to generate excessive profits, more capital will be built, which will require human labor and will make the original capital less valuable.
In theory. In practice, the excessive capital of the incumbent allows them to price out or buy the budding competition, or the legislators, so as to protect their position.
The natural state of a capitalist system is the monopoly.
If AI being a million billion zillion times more productive at doing bullshit jobs nets in very little economic gain, then that lays bare the net economic value of all our bullshit jobs.
But given that the stock market hasn't panicked, this must mean at least one of these premises is false:
1. Economic activity is relatively flat.
2. AI makes us a million billion zillion times more productive than we used to be.
> lays bare the net economic value of all our bullshit jobs.
This was already obvious, the more important question is what are we (collectively, society & our governments) going to do about it?
We (should have) already known most of our jobs were bullshit jobs, especially white collar jobs. The difference is now we might have something coming that will eliminate the bullshit jobs.
But society will always need bullshit jobs or the whole system collapses. Not everyone can go dig ditches, so what do we do?
> In the end it looks like we are treading water, just like it was when computers got 1M times faster in a couple of decades, but we felt very little improvement in earnings or reduction in work.
I think this is a very important point. The hedonic treadmill means real gains are discounted. The novelty information cycle is like an Osborne effect for improvements, like the semi-annual Popular Mechanics flying-car covers: an enticing future that is perpetually nearly here and at the same time disappointingly never materializes.
Does the work you do provide more or less value to the company than your salary? Where does the difference go? If your killer feature closes a $5M deal, who gets that money?
We live as capitalist serfs. Someone else gets all the value you create, and you should be grateful for the peanuts they toss back to you.
I think it's gonna mirror how the white-collar classes, coastal elites, professional managerial class, whatever you want to call them, sold the country's industrial base to the Far East. They got a little bit of money out of it, but the biggest gains were in material wealth: $1 widgets instead of $2 widgets. All the people who weren't hurt by it got to live with more material plenty. Of course the nominal prices of things didn't go down, but that's just inflation, which is a somewhat separate effect.
This time the jobs most in the crosshairs of AI are the paper-pushing jobs that constitute the overhead of modern society. Instead of $1 widgets from China replacing $2 domestic widgets, it's gonna be $1 AI services replacing $2 services that require a real human.
This is hard to reason about because people tend to consume these kinds of services in big multi-hundred- or multi-thousand-dollar increments, but in practice it means that when you have to engage an accountant or engineer, or have something planned out in accordance with some standard, it will be substantially cheaper because of the reduced professional-labor component.
And of course, as usual, the string-pulling and investor class will get fabulously wealthy along the way.
Curious, as someone who doesn't experience the issue but assumes your system accessibility settings (maybe high contrast) would be useful, instead of expecting individual sites to tailor their color palette... does that not work?
This comment prompted me to find out about colour filters for macOS. I enabled the red/green filter, which made it easier to see the differences on the site. The downside is that it affects a lot of other colors and images on other sites, so it's not a feasible solution, for me at least.
I toggle it on and off with a keyboard shortcut on the rare occasion colors are hard to read for me. I mostly use it on my phone, actually (it's a triple-click of the lock button on my iPhone). There are shortcuts on Windows and macOS too. It doesn't seem like it would be too inconvenient for someone who actually suffers from color blindness or a sight issue; I'd expect they'd run into the issue more commonly than me and would then know how to solve it for themselves.
It's a lot more inconvenient for others to have to pick colors that satisfy every potential sight issue, which is primarily why I think it should be an OS solution rather than each individual creator's responsibility. It's not that I don't care about those with sight issues; it's purely about who is responsible for creating a reasonable solution. And honestly, there's no way every creator is going to study accessibility, so it's a never-ending uphill battle. If you already had a tool in your system that could help, why wouldn't you use it?
The data comes from the BLS. Their data lags the true state of affairs, and their growth projections are never reliable. Remember when they touted, from 2000-2010, that actuary was the hottest-growing field with the best forward-looking outlook?
BLS forward looking guidance means nothing when technology revolutionizes the nature of work.
Their data may not be the latest, but isn't it more reliable? I used to participate in BLS reporting, and these numbers are submitted by employers every two weeks.
No one can predict everything perfectly. This is just the guidance based on the data that was reported. AI is advancing faster than anyone can imagine and no one knows the impact - good or bad.
lol, I always wondered how actuary ever crossed my partner's radar in college, and this must have been it. They just finished up their FCAS cert and are riding quite high and quite comfy. But it is for sure a very small pool of people, just due to the immense work needed to get to that point.
Insights from a real estate perspective: Most of the jobs that have the highest AI exposure are office jobs. Clerks, assistants, secretaries, software developers, bookkeepers, customer service, lawyers, etc. There has been a narrative the past couple years that office real estate was recovering as companies returned to office. If AI job losses materialize, it looks like there may be a second hit to that sector.
- Analyze users’ needs and then design and develop software to meet those needs
- Recommend software upgrades for customers’ existing programs and systems
- Design each piece of an application or system and plan how the pieces will work together
- Create a variety of models and diagrams showing programmers the software code needed for an application
- Ensure that a program continues to function normally through software maintenance and testing
- Document every aspect of an application or system as a reference for future maintenance and upgrades
Ignoring the sentence that admits they can be the same ("Programmers work closely with software developers, and in some businesses their duties overlap.").
A programmer is like a translator: somebody else came up with what to do, and you're doing the mechanical work of converting words into C++.
Programmer, as defined here, is in my experience a job that has never really existed. Sure, they've tried many times to create this divide, going back to the beginning of programming (originally considered secretarial work), but ultimately a programmer is still making many design decisions when typing out code.
Programmers often are grouped into two broad types: Applications programmers and systems programmers. Applications programmers usually are oriented toward business, engineering, or science. They write software to handle specific jobs, such as a program used in an inventory control system or one to guide a missile after it has been fired. They also may work alone to revise existing packaged software. Systems programmers, on the other hand, maintain the software that controls the operation of an entire computer system. These workers make changes in the sets of instructions that determine how the central processing unit of the system handles the various jobs it has been given and communicates with peripheral equipment, such as terminals, printers, and disk drives. Because of their knowledge of the entire computer system, systems programmers often help applications programmers determine the source of problems that may occur with their programs.
Programmers write programs according to the specifications determined primarily by computer software engineers and systems analysts. (Separate statements on computer software engineers and on computer systems analysts, database administrators, and computer scientists appear elsewhere in the Handbook.) After the design process is complete, it is the job of the programmer to convert that design into a logical series of instructions that the computer can follow. ... In practice, programmers often are referred to by the language they know, as are Java programmers, or the type of function they perform or environment in which they work, which is the case for database programmers, mainframe programmers, or Web programmers.
Software engineers working in applications or systems development analyze users’ needs and design, construct, test, and maintain computer applications software or systems. Software engineers can be involved in the design and development of many types of software, including software for operating systems and network distribution, and compilers, which convert programs for execution on a computer. In programming, or coding, software engineers instruct a computer, line by line, how to perform a function. They also solve technical problems that arise. Software engineers must possess strong programming skills, but are more concerned with developing algorithms and analyzing and solving programming problems than with actually writing code. (A separate statement on computer programmers appears elsewhere in the Handbook.)
Pre-dot-com boom, it was lumped together with a small callout to "application" vs "system". With the dot-com boom, the more senior role of "computer software engineer" was described, while the pejoratively described "code monkey" became the "computer programmer".
That distinction between the two may not exist today. However, it takes a long time for those things to change.
It may look the same to a worker, but if you're a corporation hiring an H1B worker, the difference between computer programmer and software developer is a notable difference in the budget line items.
In older distinctions, there were Systems Developers and Application Developers and Computer Programmers. The distinction largely was around that "Computer Programmers took the specifications from Developers and implemented them."
It feels like the intent was that "Programmers" were the ones doing the routine / lower skill tasks while the Developers were the ones that did the specification and architecture.
Those got juggled around and largely people getting listed as "Computer Programmer" is going down as the company relists them as Software Developer.
This is also part of the confusion of "Web Developer" which is also in there.
It reflects what the government thought management thought titles and roles were, some years ago.
15-1132 Software Developers, Applications
Develop, create, and modify general computer applications software or specialized utility programs. Analyze user needs and develop software solutions. Design software or customize software for client use with the aim of optimizing operational efficiency. May analyze and design databases within an application area, working individually or coordinating database development as part of a team. May supervise computer programmers.
15-1133 Software Developers, Systems Software
Research, design, develop, and test operating systems-level software, compilers, and network distribution software for medical, industrial, military, communications, aerospace, business, scientific, and general computing applications. Set operational specifications and formulate and analyze software requirements. May design embedded systems software. Apply principles and techniques of computer science, engineering, and mathematical analysis.
15-1131 Computer Programmers
Create, modify, and test the code, forms, and script that allow computer applications to run. Work from specifications drawn up by software developers or other individuals. May assist software developers by analyzing user needs and designing software solutions. May develop and write computer programs to store, locate, and retrieve specific documents, data, and information.
Note that the specifying part of it isn't done by the programmers but the other roles.
15-1134 Web Developers
Design, create, and modify Web sites. Analyze user needs to implement Web site content, graphics, performance, and capacity. May integrate Web sites with other computer applications. May convert written, graphic, audio, and video components to compatible Web formats by using software designed to facilitate the creation of Web and multimedia content. Excludes "Multimedia Artists and Animators" (27-1014).
Right, I'm a Computer Programmer but any job with that title is likely horrible. But having the title Software Engineer doesn't magically make me an engineer. All word games.
The BLS classifies them as different roles. In essence: Software developers plan, computer programmers implement. Which in many cases might be the same person, but it has always been true that one person can hold multiple jobs.
That's not a distinction that actually exists in the real world. This makes me wonder what other made-up distinctions they are claiming in industries I'm less familiar with.
They're saying that programmers will be declining, while developers and, crucially, testers and QA people will be increasing. That testers and QA become more important sounds plausible to me in a hypothetical future world of ubiquitous AI.
All of that doesn't necessarily imply that the Developer class of employees will grow at the same rate as the Tester and QA classes of employees.
I would speculate that careers that require Master's degrees tend to be more saturated, and the result of that is qualification creep. Examples of this include teaching, social work, library sciences, etc.
It could also be that master's degrees concentrate in fields with lower compensation. Teachers are in high demand, but they still tend to have something beyond an undergrad degree.
Wow, I had no idea the reason my peers and I can't find another position in less than 12 months is because the market for software developers is growing faster than average!
Every year the US absorbs 120k+ new H1B+L1+OPT visa holders. Considering there are 1.9M software engineers, the market has to grow by roughly 6% every year just to stand still. Add US graduates and you're talking about 10% growth required just to maintain employment. That's not realistic long term.
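A quick sanity check of the arithmetic above (the visa and workforce figures are the commenter's rough estimates, not official BLS or USCIS data):

```python
# Rough arithmetic behind the "just to stand still" claim; the inputs
# are the comment's estimates, not official figures.
new_visa_workers = 120_000        # H1B + L1 + OPT entrants per year (estimate)
us_software_engineers = 1_900_000 # total workforce (estimate)

required_growth = new_visa_workers / us_software_engineers
print(f"{required_growth:.1%}")   # -> 6.3%
```

120k new entrants against a 1.9M workforce works out to about 6.3% a year, before counting domestic graduates; whether all those visa holders actually enter software is a separate question raised downthread.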
Congress and the president should pause H1B visas or hike the fee to $200-500K so that only truly exceptional talent is allowed in. Right now it's just a giveaway to corporations that are laying people off by the tens of thousands.
The current $100K fee doesn’t apply to people changing from a student visa. This was long the path of people in software dev or other high tech careers: get a masters or PhD in the U.S., then get an H1B to start working. For those already on H1B after starting on that path, again the fee does not apply if they want to change jobs and have the new employer sponsor their H1B. So hiking that fee to $200K or more wouldn’t really change things much, at least in tech.
1) How many of these people leave the country isn't accounted for in this analysis.
2) OPTs will likely get H1Bs/L1s or leave the country, and are being counted separately.
3) Not all H1B/L1/OPT visas are for tech. The majority are, for sure, but there's a conversion factor.
Especially in the current situation, where green cards are much harder to obtain and many OPTs don't find a job, I expect (1) to be much larger than in the past.
Oh, there's a name for it! In the past I've sometimes struggled to verbalize the logical issue I perceived with the "immigrants steal our jobs" absolutists, and this is a useful reference.
A $200-500k fee would have a large negative impact on healthcare. Specialty doctors cannot be trained in a snap, and there are limits on how many MDs and DOs the schools churn out.
So healthcare systems turn to H1Bs to fill specialty positions in underserved / rural areas. The alternative is to shut these facilities down, which has other negative effects on communities.
I was surprised to hear in this thread that there is a physician shortage in the US, because my understanding was that most Americans go to university and that doctors are paid well. Why aren't more graduates pursuing careers in medicine?
It turns out that they are, but (if I do not misread the situation) there is a regulatory bottleneck:
>The United States is grappling with a physician shortage, but the solution does not lie in simply opening more medical schools. As a physician-scientist and former founding dean of a medical school, I argue that the true bottleneck is not the number of medical school graduates but the insufficient number of residency training positions. Since the Balanced Budget Act of 1997, which froze the number of Medicare-funded residency slots, the United States has seen a steady increase in medical graduates, yet the availability of residency spots has stagnated. This mismatch between undergraduate medical education (UME) expansion and the lack of corresponding growth in graduate medical education (GME) is the key issue.
As this has been the arrangement since 1997, by now a graduated American child of an immigrant H1B specialist trained in a foreign country may be unable to secure a 'residency training position' and therefore unable to practice medicine in his or her own country? It sounds absurd.
Like many school systems facing teacher shortages, South Carolina’s Allendale County has looked overseas for help. A quarter of the teachers in the rural, high-poverty district come from other countries.
The superintendent praises the international educators — mostly from Jamaica and the Philippines — for their skill and dedication, but she is preparing to lose some of them as the Trump administration reshapes visa programs.
Facing higher visa sponsorship costs and uncertain immigration policies, Superintendent Vallerie Cave said it feels too risky to extend some international teachers whose contracts are up or bring on others.
So, split out technology careers from H-1B so that they can be regulated with less impact on the other careers that are currently under the H-1B.
The other part would be to properly fund DOL so that they have the resources to inspect H-1B-dependent employers ( https://www.dol.gov/agencies/whd/fact-sheets/62c-h1b-depende... , https://en.wikipedia.org/wiki/H-1B-dependent_employer ) more carefully and prosecute visa fraud in a more timely manner (note that this also touches powers that went away when Chevron deference was struck down, so instead of DOL being able to do things administratively, it requires going through the courts).
And yes, I do believe that upping the filing fees for H-1B-dependent employers would be a good thing... and auditing them to make sure that they have a butt in seat position for their employees and aren't hiring to try to make a deeper bench of poorly qualified individuals doing routine tasks that do not require a specialty technology degree.
The current (rather ham-fisted) approach to cutting back on immigration has knock-on effects that are impacting rural and remote parts of America to a much greater degree than urban areas.
As far as I understand, the $100k fee applies only to consulate-issued H1Bs. The L1 -> H1B path (via AOS) is possible without the fee. (Recent) US university graduates can also use a similar path, from what I understand.
We will see how much the $100k fee affects things during this H1B lottery round in a few weeks.
> Only about 70 employers have paid a $100,000 Trump fee on H-1B workers from outside the US since it was imposed through a September White House proclamation, a government attorney said Thursday.
I think a lot of people have just moved to L1/O1/etc visas to get around it as OP pointed out, although a lot of people are still hiring H1B's. Amazon has applied for over 2000 H1B's so far this year, which puts them on track for ~7000 for the year https://www.uscis.gov/tools/reports-and-studies/h-1b-employe...
We have hit the cap for H1B's every year and we will always do so until we get rid of the program. Cheap labor will always be in demand.
A $100k one-time fee is nothing for big employers. That's $25k/year over 4 years, and once you realize that H1B's can't easily leave their job, it's obviously worth it.
Compare hiring an H1B who is stuck at their job to an American who can leave at any time. You can pay the H1B a lower wage to compensate for the fee you paid to get them into the role. $25k/year for 4 years is worth it, not only for the reduced churn that comes with training a new person, but also because you avoid the incentives that come with getting a new employee into the role: sign-on bonuses, wage bumps, benefits, etc.
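The amortization argument above can be sketched in a few lines; the wage discount is a hypothetical number for illustration, not data:

```python
# One-time fee spread over a typical 4-year H1B stint.
fee = 100_000
years = 4
annual_fee_cost = fee / years
print(annual_fee_cost)  # -> 25000.0

# Hypothetical below-market wage discount an employer might extract
# from a worker who can't easily change jobs (illustrative only).
wage_discount = 30_000
print(wage_discount - annual_fee_cost)  # -> 5000.0 net saving per year
```

On these illustrative numbers, the fee is fully recovered by even a modest wage discount, before counting reduced churn; the counterarguments downthread dispute whether any such discount exists at big tech pay scales.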
People applying for H1B visas are getting partially compensated in the right to legally reside in the US rather than in money. The right to legally reside in the US is something that a lot of foreigners want badly, and are willing to accept otherwise-poor compensation for; and by definition it is not something you can pay an American citizen with.
Why is the company getting to pay their employee with that legal-residence-value and therefore get a discount on compensation?
The cleaner approach is the immigrant has to pay that value in visa expenses, taxes, or something else; while the company should have to pay market rate for the position.
There's an X account which just posts universities hiring H1B's for ~half of what it would normally cost to hire people. An $80k/yr senior software developer will always be in demand, especially if the team is already predominantly non-American.
Universities typically are in the public sector side of the equation... and the public sector doesn't pay any non-administrative role the Big Tech rate.
$80k/yr isn't "we're paying H-1Bs half of what the going rate is" but rather "the state legislature has set this pay scale and we're paying everyone that amount." And many times, H-1B visa holders aren't eligible to work in those roles.
> Universities typically are in the public sector side of the equation... and the public sector doesn't pay any non-administrative role the Big Tech rate.
There's absolutely no reason government couldn't pay competitive rates for software engineers. They do it for doctors and administrators of state-owned medical centers, not to mention football coaches.
Football coaches are revenue-generating for universities; software developers at universities, not so much. Doctors are licensed professionals with a decade of schooling; software developers frequently reject licensure and celebrate their lack of formal education.
Exactly. The fact that H1B's get paid less than Americans across the board is all you really need to know about the issue. There IS no reasonable counter argument.
It's supposedly a program for importing the best and brightest talent that doesn't exist in the US, but somehow those best and brightest get paid LESS than their American counterparts? It was never about the best and brightest; it was always about bringing in cheap labor that can't leave.
Sadly, I don't think we'll ever fix it either: right-leaning industrialists support it because they benefit from cheap labor, and left-leaning politicians get to keep importing people who overwhelmingly vote for them. As usual, the loser in the equation is the middle-class American worker.
How many H1B visa holders become citizens eligible to vote for those "left leaning politicians?"
I don't think having an H1B accelerates your citizenship application in any way, and for many countries the wait for legal citizenship is decades long.
You didn't answer the question at all. Getting an H1B visa is merely the first step in a very long process towards citizenship. Decades long. For example, if you're from India and you get an H1B, it'll be roughly a decade before you can get a green card. From then you have a mandatory 5 year waiting period before naturalization. And this assumes a normal, functioning immigration process; something we definitely don't have in the US.
This can be sped up quite a bit if they marry a US citizen, but it will still take several years. Their children would be citizens, but that's another 18 years before they can vote. Politicians aren't known for playing the long game...
> The fact that H1B's get paid less than Americans across the board is all you really need to know about the issue.
Except this is literally false. Every single study I’ve seen that claims this has no real evidence - just speculation without knowing the details of the jobs or the people being hired, based on their own self-serving false comparisons to make dubious claims that similar jobs are paid differently.
Since you said “across the board”, do you think Google or Amazon pay a software engineer at the starting level differently based on immigration status? No, they don’t. Literally every manager at big tech could tell you this confidently.
I have worked at Apple for a decade; H1B's absolutely do get paid less. We have many H1B's who literally just sit around pushing buttons and filing bug reports, and barely know how to code. Some of them can't code at all. Of course some of them are good engineers, but they're not even in the majority.
There is plenty of data to back this up.
>A total of 60% of all H-1B jobs are assigned wage levels that are well below the local median wage.
The EPI report is one of the commonly cited baseless reports. Dig in a level beyond their press claims and you’ll find no real method behind it that justifies their claims, because they have no actual way to compare one worker to another to know they’re equivalent and comparable for the purpose of compensation.
As for your claims about Apple - I am guessing you aren’t a manager and don’t know about how their pay scale works. I’m not doubting your claims about the quality of some workers - although I bet you’ll find plenty of non immigrant people not doing work as well. But I know the claim on pay is wrong, once you adjust for performance ratings and levels.
We have moved far away from the notion of a factory worker whose labour can easily be traced to the output.
I think in general we have to question what work one does - not in a negative way - I think it's healthy to do so. Standard economic models and thinking are pretty dated and don't really reflect reality as the world of work evolves.
H1Bs are not cheap labor. They’re almost always pricier than the alternative to the company. This is a myth that is ultimately rooted in racism more than facts. Most of the top H1B filers - big tech companies in particular - pay literally identically for the same job. They have fixed pay structures internally, in part because if you don’t, you could face discrimination lawsuits - but mostly to just not lose the competition for talent.
But the cost to the company isn’t the cost of the pay anyways. It’s also the cost in lost time of the H1B process, the fees you pay as part of the process, the costs of law firms you have to hire, the cost of time delays, the risk of the immigration process not working out. Those work out to a lot more value than 25K/year.
An H1B is also not stuck in their job - you can transfer H1Bs.
I do not see how the facts you present call into question the basic logic that as you increase the availability of a commodity, say labour, you anticipate its price to diminish. All of the immigrant workers could be better-compensated and more productive than all of the American workers, and still their presence could drive the price of labour for native workers in that sector down. E.g., if there is a shortage of repairmen certified to fix some medical equipment, introducing a glut of new repairmen who are even more productive will fail to reduce the compensation of the incumbents only in exceptional circumstances.
Half the Fortune 500 is founded by an immigrant or child of an immigrant. Most of the others rely on immigrants in key positions. Pausing visas or hiking fees up doesn’t protect jobs - it just causes a future decline in the American economy. I think it’s literally cheaper in terms of the country’s future to just pay those who can’t get jobs to take a one-way flight elsewhere, if they’re not able to compete, than to make it harder to get talented people to move here.
So immigrants are in fact taking away the jobs? Do you have the same opinion of illegal immigrants jumping the border and taking jobs from average Americans?
I find this argument extremely funny, because when immigrants are taking the white collar jobs you guys turn anti-immigrant and tighten the visa stuff, but when blue collar and low level jobs are taken by illegal folks you turn a blind eye and it's "no one is illegal on stolen land" logic.
I 100% agree that H1B has been extremely abused by folks from specific countries running body-shop tech consultancies, but the solution is not to hike the fees up to $200K-$500K.
The $100K fee from the Trump admin is already showing effects in the job market. Most companies are not readily sponsoring H1B visas anymore, and getting a big tech job as an intl student is already tough - only exceptional ones are getting such jobs.
I honestly don't see that much hypocrisy on this point. People in tech who are supportive of expansive rights for foreigners to immigrate to the US generally ground their argumentation in either claims that it's immoral for the US to limit immigration (the view characterized by the slogan "no one is illegal on stolen land"), or claims that they benefit from immigration even if they are competing for jobs with immigrants. And often the people making these claims are socially adjacent to immigrants in their workplace or other social circles.
Meanwhile, the people in tech who oppose immigration often do bring up the same argument you do - that it's bad to allow immigrants to compete with blue collar American citizen labor even if this competition would make some things that these white-collar tech workers buy cheaper - or ground their opposition to immigration in negative effects of immigrants on American society that aren't directly related to competition for blue-collar jobs (generally, that the presence of large numbers of immigrants has bad cultural or political consequences for the US as a whole).
The political fight over immigration among white-collar tech workers I think has more to do with battling moral claims, or different visions of what the US should look like culturally and politically, than it does over purely-materialist job competition concerns that they are hypocrites about when the job competition is happening to blue-collar workers.
There's lies, damned lies, and then: there's statistics.
You have to adjust the growth in jobs for how many new people there are to take them, the locations those jobs are in, and, somewhat weirdly, other jobs.
Plenty of people feel so dejected at the current state of things that they leave computer work entirely, creating "openings" where there isn't actually any growth.
Like all things you try to understand: a single data point, when averaged, is like trying to calculate the heat of the sun by looking through a telescope at Jupiter. It gives you a far-out, tiny facet of the data that only makes sense when coalesced with a hundred others.
Interestingly, it seems from these statistics the median wage for individuals with a Master's is lower than a Bachelor's. I wonder if that's because of immigrants who pursue higher education for visa reasons skewing the data.
Anecdotally, many people get a bachelor's degree to check a box for job applications, whereas many people get a master's degree because they love the field and/or are afraid to leave school.
My friends and I who have a bachelor's degree in CS make more money than my friends who have or are working towards master's degrees in CS, because the former are working in the private sector and the latter are in academia making peanuts.
Another possible reason could be that many or most Master's degrees don't confer additional pricing power, and those same people's Bachelor's degrees also conferred lower pricing power.
Edit: Yet another possible reason is that Master's degrees were less common in the past, so the Bachelor's pay statistics skew toward people with more work experience in their higher-earning years, whereas the Master's pay statistics skew toward younger people with less work experience.
Master's degrees seem to be a common theme in a few lower-paying, expansive fields like social work and education. I don't think that someone with a master's is typically making less in the same field, all else equal.
Are childcare and kindergarten teachers really exposed to AI? In theory, we could put a class of 30 children in front of chatbots with one supervisor. But I doubt we would choose to do this as a society. If office work becomes more automated, early childhood education is actually one area I'd expect to take up the slack. I can't imagine a situation where we have millions of unemployed former office workers but we leave them idle and let our children waste away in front of screens.
For most working-class Americans, education is a form of job-training.
In the AI maximalist world where humans are obsolete and cannot contribute to the economy in any meaningful way, there is actually no reason for public education to exist beyond being a free day care for non-rich people. Why learn algebra/calculus at all if the AIs can do it? Why should the US invest billions of dollars into public education instead of data centers?
I hope the US and AI leaders are still "speciesist" in that they put humans first. I hope AI will cure all illnesses, unlock space travel, and lead to a flourishing of humanity, not just a flourishing of datacenters. It's also possible that AI just cleaves society in half and we are all worse off for it.
I thought the same as gp, that putting teachers at high risk invalidates the whole visualization. If this is intended to be useful for future career planning, with meaningful gradations between specializations, then it should exist in the probability space where human agency still matters. And in that space, from a Ricardian and political economy perspective, high human-touch jobs with strong public unions should be among the safest.
Childcare and education require a specific tolerance, mindset and passion to be effective, though. I'd be curious how many former PMs or HR drones or email jockeys would be adequate (let alone thrive) in an environment where budgets are next to nonexistent and you're servicing literal babies and tiny children lol
On second thought, client service folks might do extremely well here!
As a current parent, I assumed this was due to people having fewer kids, not AI. Additionally, with childcare centers becoming more expensive, many more families are looking to be stay at home parents or using grandparents / relatives to watch their kids during work hours.
There are a lot of education and curriculum companies pitching basically this- replace those 'expensive' teachers with aides making minimum wage as all they need to do is recite curriculum and help them log in to be evaluated.
In which theory? And if you can do anything in theory, then there is no justifiable "but" or any excuse. The only problem is your own ability to realize it or unexpected situation. A theory is a fact, a proven hypothesis, with all its parts such as formulas, laws, or a force as in the THEORY of gravitation. And no, you don't have one, and I assure you that you've never had a theory in your life.
That could work in ideal world where children behave nicely, and are eager to learn. But in reality that's not the case. Especially in high school big part of teacher's job is keeping order and being the authority figure. Good luck replacing that with LLM.
My takeaway here: $3.XT of US salaries is the TAM for AI companies.
Apple, a very successful company, makes ~$300B/y in revenue (ish).
~10% is all you need to be Apple.
And, it can work by taking all of 10% of the jobs and collecting the whole salary (the AI employee -- dubious proposition),
or by taking 10% of everyone's salary and automating part of everyone's job (the AI "tool" -- much more plausible).
If "part" being automated is >10%, we all win in the long run, every company gets productivity growth without cost growth, etc etc.
If you add in data center costs, and multiple competing AI companies, and then expand the TAM to all white collar work worldwide, you can make everyone successful beyond their wildest dreams with a "20% of work for 20% of the cost" model. Again, how you distribute that 20% remains to be seen (20% new unemployment, or new 0% unemployment with "tools").
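The back-of-envelope math in this subthread is easy to sanity-check in a few lines. A rough sketch; the figures below are the approximate ones the commenters cite (~$3T of salaries, ~$300B of Apple revenue), not audited numbers:

```python
# Rough TAM arithmetic from the thread, all figures approximate.
us_white_collar_wages = 3.0e12   # ~$3T/year of US salaries, per the comment
apple_revenue = 3.0e11           # Apple's ~$300B/year revenue

# Share of the wage TAM an AI company would need to capture to match Apple.
capture_rate = apple_revenue / us_white_collar_wages
print(f"{capture_rate:.0%}")     # prints 10%
```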
The replace-work TAM is overstated because it fails to address transaction costs, which are astronomical when refactoring work and dislodging stakeholders with sunk costs. Coding is now the leading app for AI because it had already been factored to support division of labor, outsourcing, and remote work.
It's also understated, because the real value of AI is not in replacing work, but making new products possible either because it's finally cheap enough to make them, or because -- AI.
Your math is missing the fact that Apple products are differentiated from their competitors. If AI becomes a ubiquitous commodity, it's not worth 300B/y.
The point is that the market is big, and capturing a small part of it would make these companies wildly successful; and there's no need to make unemployment inevitable - I think this diminishes the AI Doom story quite a bit.
It absolutely diminishes the AI Doom story, but it also diminishes the AI Will Be The Golden Goose story, they are two sides of the same coin. The AI companies can only capture that surplus if there are not competitive substitutes available.
In the same way that Amazon gets rich taking ElasticSearch for free and charging for hosting, Amazon will take free models and charge to host them. The companies building frontier models have massive R&D costs and no moat.
The bosses already hate their workers and are mad that they have to pay them a cent. Would they really accept paying another 10% on their wages to make their workers 10% more productive? When there is significant active competition between the providers of core models and huge pressure to reduce prices?
In the US, since the 1970s virtually all technologically-driven productivity gains have been captured by the top 10% (who own 90% of all public equity). (See, e.g., https://www.epi.org/productivity-pay-gap/ .)
So no, little or none of the AI productivity gains will go to workers, barring significant changes in public policy like universal basic income and the massive tax increases necessary to implement it.
Frequently seen as a big fun number in pitch decks. "The TAM for our new Coca-Cola killer is $1.6T: all humans who imbibe liquids on a regular basis. You simply MUST invest."
> You are an expert analyst evaluating how exposed different occupations are to AI. You will be given a detailed description of an occupation from the Bureau of Labor Statistics.
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
Are LLMs good at scoring? In my experience, using an LLM to score things usually produces arbitrary results. I'm surprised to see Karpathy employ it.
The fact that the LLM appears to never assign an actual 0 or 10 makes me suspicious. Especially when the prompt includes explicit examples of what counts as a 10.
In my experience LLMs often have really solid insights in the thinking chains then vomit a nonsense score that doesn't make sense.
Now I'm not sure if this is actually an LLM only thing. Because I think people probably do similar when you ask them to give a number to things without providing a concrete grading rubric...
He originally vibe coded this as AI Exposure, but deleted that one because people were misinterpreting it. Someone mirrored that one here https://joshkale.github.io/jobs/ . EDIT: D'oh, I now see Karpathy didn't delete it, he just made it into a "Digital AI Exposure" button.
Like, IT helpdesk? Yes. Almost all of the tickets I create as a "knowledge worker" to my enterprise helpdesk are solved by an AI assistant - group ownership, adding me to an app, etc.
I'm colorblind as well, and what's fascinating to me is that this is the second AI-created chart in a week I've seen that I can't read. Surprisingly, I've found such aggressively colorblind-unfriendly charts to be far less common when created by humans.
I don't have any color discrimination deficiencies, but it is my understanding that for various types of signage, the move has been towards RED=bad/danger/etc, and BLUE (instead of green)=good/safe/etc.
For color deficiencies, different lightnesses are safe e.g. dark for loss and light for gain (could be dark reds for loss and light greens for gain, but don't mix the lightnesses). Other options are icons/shapes (like up/down arrows) or pattern fills (like stripes for loss).
The general trick is you can rely on differences in color lightness, patterns, text and icons, but not differences in color hue. The page should be usable in grayscale.
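For what it's worth, the "usable in grayscale" rule is easy to check numerically. Here's a minimal sketch that computes WCAG relative luminance and contrast for a hypothetical dark-red/light-green pair; the hex values are made up for illustration, not taken from the site in question:

```python
# Sanity-check that a loss/gain palette survives grayscale conversion.
# The hex values below are illustrative, not from the actual chart.

def luminance(hex_color: str) -> float:
    """WCAG relative luminance of an sRGB '#rrggbb' color: 0=black, 1=white."""
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    def lin(c):
        # Undo the sRGB gamma curve.
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def contrast_ratio(c1: str, c2: str) -> float:
    """WCAG contrast ratio between two colors (1:1 worst, 21:1 best)."""
    hi, lo = sorted((luminance(c1), luminance(c2)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

dark_red = "#7f1d1d"     # loss: dark
light_green = "#bbf7d0"  # gain: light
print(round(contrast_ratio(dark_red, light_green), 2))
```

If the ratio clears roughly 4.5:1, the pair remains distinguishable by lightness alone, which is what survives for most color vision deficiencies.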
Small business is the majority of employment. Think of an indie coffee shop: the person taking your order may very well be the CEO, technically. So there are a lot of "top executives".
No license visible as far as I can tell. For EU markets, Eurostat publishes comparable occupational data through ISCO-08 and the EU's Joint Research Centre has their own AI exposure methodology — so the data is there to build it.
Cool site and Andrej is the man. But the BLS data...
> Taxi Drivers, Shuttle Drivers, and Chauffeurs
> Overall employment of taxi drivers, shuttle drivers, and chauffeurs is projected to grow 9 percent from 2024 to 2034, much faster than the average for all occupations.
Does the LLM understand or consider "rent seeking"? Lots of high-paying jobs and entire industries seem to be propped up by those same people who already have the power.
the fun thing about being an AI researcher is seeing that all of the "we think AI won't affect these jobs" metrics are wrong
deep down you all know something is just going to randomly get released one week in the near future that makes you go "well pack it up boys", or you just haven't been paying attention
to clarify - just like the site says - I don't think those jobs are going away, maybe entry level will have the same issues as some industries are encountering, but ideas of relative immunity are completely wrong
I'd like to see this but - not sure if it is already - adjusted by total pay. so # employed * average salary.
A -4.0% hit to cashiers may have less of an impact than -4.0% to lawyers or another category that is propping up the middle of the economy with spending.
It's the other way around. Cashiers spend their 4 percent, whereas the lawyers probably save it. Though of course the median salaries of the two categories mean a 4 percent change is different in absolute dollars.
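The pay-weighted adjustment the parent asks for is just employment times median pay times the projected change. A quick sketch; the employment and wage figures here are illustrative placeholders, not actual BLS numbers:

```python
# Sketch of pay-weighting a projected employment change.
# name: (employed, median_pay, projected_change) -- placeholder figures.
occupations = {
    "cashiers": (3_300_000, 29_000, -0.04),
    "lawyers": (860_000, 146_000, -0.04),
}

def wage_bill_impact(employed: int, median_pay: int, change: float) -> float:
    """Dollar change in total wages implied by the projected change."""
    return employed * median_pay * change

for name, (n, pay, chg) in occupations.items():
    print(f"{name}: {wage_bill_impact(n, pay, chg):,.0f}")
```

Even with these made-up numbers, the same -4.0% can hit the lawyer wage bill harder than the (much larger) cashier headcount, which is the point of weighting by total pay.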
> You can be an expert in one field, and have no idea what you're doing in another.
And for whatever reason a lot of people in startup/tech seem to have a huge Dunning-Kruger effect blind spot where they believe knowing a lot about one thing makes them an expert in everything.
This used to just be funny, but when it started to intersect with politics it began to actively contribute to destroying society. It isn't funny anymore.
(I don't think Karpathy's job data here is destroying society, this is a more generalized observation).
It is wild that y'all are hating on a website that visualizes data.
That's table stakes - standard practice for software engineers for decades.
This is the equivalent of telling a designer that they can't create infographics on anything but principled design subjects, or else they're out of line. Any research or data they might use isn't relevant because they're not experts? lol?
THIS:
> And for whatever reason a lot of people in startup/tech seem to have a huge Dunning-Kruger effect blind spot where they believe knowing a lot about one thing makes them an expert in everything.
It's especially(!) common among people who made an exit and are now "wealthy" - sure, they can afford to have an opinion on everything, but very often they are just talking bullshit, thinking: "hey, I made it in field X, so why not try field Y".
Especially the "MBA crowd" is famous for this: for whatever reason they think they are more intelligent than an engineer who filed a patent, for example (while most of the MBA bobos would fail just at acquiring all the documents required for it).
Another example: if you once wrote a book and it got traction, even if you are not a proven expert you will be invited to television shows etc. (and MORE than the people who are real experts with a proven track record)
The VIEW could be AI slop, but underlying CONTENT has some meaning.
There is definitely impact on Software engineering jobs at the moment, interns/juniors are struggling to find jobs, companies are squeezing every bit of dev slack time to produce more stuff with AI.
> The VIEW could be AI slop, but underlying CONTENT has some meaning. There is definitely impact on Software engineering jobs at the moment, interns/juniors are struggling to find jobs
Is that notion supported by this content? The BLS Outlook for most software engineering jobs is mostly in the "much faster than average" growth range.
I'm not saying that your assessments are wrong. But you were talking about how valuable this content is, and I don't understand how the insight you claimed to get from the visualization ("There is definitely impact on Software engineering jobs at the moment, interns/juniors are struggling to find jobs") could at all be discernible from the visualization.
BLS outlook is comically bad. For example, BLS had pharmacists' outlook as amazing all throughout the 2010s, while /r/pharmacy and sdnforums had a constant stream of posts complaining about declining pay and quality of life at work, all while the pharmacy business' profit margins and number of employers declined.
What would be useful is tracking the change in minimum pay per hour from legitimate job listings, now that there are quite a few states that require posting pay ranges on job listings.
This (tech) career has proven to be so disappointing, and it's all the stuff around the actual work. I love working on computers.
Started my career in the decade of offshoring and didn't think we'd have anything close to an "AI" taking our jobs before we potentially unionized or had a government that would protect its labor force from being replaced by literal robots.
2020-2022 felt like the US tech ship was finally growing into something really great. All gone now.
When I worked in devops I always worried that my job was automating away other engineers - it definitely had a "when will this come for me" feeling, because it really was. Now the dev and the ops are both getting automated away.
This is my first time looking at HN in practically a year. Tech is just so uninteresting to me now. Nobody is hiring SDE/SWE/SREs except for the problem makers, like Anthropic, Meta, etc. Anthropic has pages and pages of $300k-$600k roles open right now. But do you go help the rest of your colleagues lose their jobs?
I guess let's talk about kubernetes or something...
> You are an expert analyst evaluating how exposed different occupations are to AI. You will be given a detailed description of an occupation from the Bureau of Labor Statistics.
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
The sad part isn't that this is low-effort AI slop, but that intelligent people and policy makers are going to see it and probably make important decisions impacting themselves and others based on these numbers.
This is 99.44% slop! You are completely correct. The "exposure" is based entirely on vibes and does not correspond to observable reality. Down here in the real world the very first sector that is being disrupted is manual farm labor. They are out here with machine vision and quadcopters picking fruit. But according to the prompt that produces the treemap, manual labor has an exposure rank of zero.
Treemaps are used to show hierarchical data. But here he doesn't even bother to show more than one level of hierarchy. If I want to find, e.g., Police, it's near impossible, since I have to scan with my eyes (when, again, it would be trivial to add another rectangle for Law Enforcement or the like).
In addition, little work is done to separate the classes. He has probation officers in the same node as teachers, completely separate from law enforcement.
It's kinda cool to see a whole lot of otherwise intelligent people who are so dogmatically and ideologically opposed to anything AI that they're going to willfully dismiss anything that AI produces regardless of utility.
It's not great for them, but it's a definite advantage for people who are already in the mindset of distinguishing and discriminating information and sources on merit, instead of running an "AI bad" rubric as part of their filter.
AI has already won. It's taking over.
It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term. Jevons paradox isn't relevant to cognitive surplus - you need a very different model to capture what's going to happen.
It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
> AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term
Maybe it was linked from a comment somewhere on HN but just today I saw a post saying “Microwaves are the future of all food: if you don’t think so, you better get out of the kitchen”
Microwaves have already won. There will be a microwave in every home over the next few years.
Re: kitchen appliance analogies, I stand by my "AI is a dishwasher" analogy.
It's annoying that the dishes still have some pooled water in them when the cycle finishes; it doesn't always get everything perfectly clean; I have to know not to put the knives or the wooden stuff or anything fancy in it. But in spite of all of that, I use it every day, it's a huge productivity boost, and I'd hate to be without it.
And other people choose to wash dishes by hand and they're fine with it and not significantly less productive. The use of a dishwasher wasn't forced on everyone.
It is significantly less productive to hand wash dishes. But that’s fine to do manually if you wish for something that takes up maybe half an hour of your own time every several days. It’s not fine if washing dishes is your job. No company is going to hire an artisanal dish hand washer that refuses to use a dishwasher.
I was taught to give them a quick rinse but let the dishwasher make them sparkly clean. This avoids clogging the dishwasher's pipes with excessive food waste. Certainly any piece of food you could pick up between your fingers must be scraped in the bin before going in the dishwasher (or before hand washing).
> And other people choose to wash dishes by hand and they're fine with it and not significantly less productive. The use of a dishwasher wasn't forced on everyone.
That's completely, demonstrably false. Our dishwasher broke and we couldn't replace it for a month for various reasons – it was a complete nightmare. Without a dishwasher:
- You need to have a space to store dirty dishes
- You must wash them right away, unless you want the smell of rotting food that attracts all sorts of nasties, from insects to rodents
- You need to have a big enough kitchen sink to wash comfortably
- You need to have a steady supply of hot water in the kitchen
- You need to have a supply of latex gloves, unless you want your hands to look like they're 50 years old
- You need to have a drying rack
- It takes a shitton of time compared to loading dishwasher, starting it and forgetting about it
- You need to clean up everything after you're done
I can tell you that I didn't observe a single hand-wash-only holdout.
Perhaps such holdouts existed at a point, but a restaurant can only flatter the ego of their performatively-unproductive seniors for so long. Competition exists.
> Hand-washing dishes also, from what I understand, uses more energy and water than the dishwasher does.
Correct, more energy, detergent, and water. Dishwashers are more efficient than what you can do by hand because they effectively manage their water usage.
A modern dishwasher will use 3 to 4 gallons on a run. By comparison, my kitchen sink holds about 10 gallons of water on each side. When I wash by hand, I'll fill one side with soapy water and rinse each dish individually. Easily more than 10 gallons of water get used in the whole process.
Dishwashers are so efficient because they rinse everything off the dishes with about a gallon of water, drain it, then use detergent in a second run which gets off the tougher food stains - another gallon of water. Then they rinse with another gallon.
Dishwashers maximize getting food particulates into dirty water in a way that you can't really sanely do by hand.
Ten gallons to hand wash is crazy. I have and use a dishwasher but when I hand-wash I use maybe two gallons of straight hot water. I wash everything, give it a minimal rinse with the sprayer and then hand dry to remove any remaining soap suds or water.
If I hand wash, I wash as I go. It takes maybe 5 minutes to wash up dishes from breakfast or lunch, maybe a little more for a big dinner, maybe not.
Dishwashers let you accumulate dirty dishes for a day or two which is the real advantage in water savings. But I've noticed a lot of people pre-wash by hand and then load the dishwasher. I don't understand that, if I'm going to "pre-wash" anything I'll just wash it completely and put it away.
5 minutes of most sinks running is 10 gallons of water. (Most kitchen sink faucets run at about 2 gallons per minute.)
> Dishwashers let you accumulate dirty dishes for a day or two which is the real advantage in water savings.
I agree. If you aren't filling the dishwasher then you are probably wasting water. However, a full dishwasher is going to be a real water/energy saver. Especially if you aren't washing the dishes before putting them in the dishwasher. (I know a decent number of people do that. It's a hard habit to break).
Who runs the water constantly? I don't. I put a stopper in the drain, get some hot water in the sink, then turn the water off. Wash everything, give it all a quick rinse, then dry.
> A modern dishwasher will use 3 to 4 gallons on a run. By comparison, my kitchen sink holds about 10 gallons of water on each side. When I wash by hand, I'll fill one side with soapy water and rinse each dish individually. Easily more than 10 gallons of water get used in the whole process.
I'm pro-dishwasher, but you could use much less water handwashing.
If I don't have a dishwasher, my normal method is to stopper one side of my sink, squirt some dish soap on the first few dishes, and run just enough water to wet the dishes. Then I scrub some dishes, run the water (into the stoppered sink) just to rinse them as I transfer to the dish rack, then turn off the water and repeat. The dirtiest dishes that have the most food stuck on get done last so they get the most time soaking in the soapy rinse water from the rest of the dishes. I can do a full dishwasher load with one side of my sink maybe 1/4 full of water.
Time how long you run the sink while washing and rinsing. If you run it for more than 1.5 to 2 minutes, you've used more water than the dishwasher would have.
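That break-even claim is simple arithmetic on the figures cited upthread (~2 gallons/minute faucet flow, 3-4 gallons per dishwasher cycle - rough numbers, not measurements):

```python
# Rough break-even: minutes of faucet running vs. one dishwasher cycle.
# Figures are the approximate ones cited in this thread.
faucet_gpm = 2.0          # typical kitchen faucet, gallons per minute
dishwasher_gallons = 4.0  # upper end of a modern cycle

breakeven_minutes = dishwasher_gallons / faucet_gpm
print(breakeven_minutes)  # prints 2.0 -- run the tap longer and you've lost
```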
I'm collecting all the water in the sink, so I can measure the volume directly. 10 cm of water in my sink is about 13 litres. My dishwasher is specced for 16.5-29.7 litres on the "Energy Saver" cycle that I normally use.
(The "normal" cycle is specced for 11.0-27.7 litres but uses more electricity, which is more expensive than water.)
This is in fact true (in the US at least), but part of why it is true is that people don't wash dishes the way they used to (with multiple bins of soapy + rinse water) and instead just run a bunch of hot water.
Modern high-efficiency dishwashers probably beat the most efficient humans now, but that's relatively recent and not a huge margin (and may not get the same results).
I use the time I spend to hand-wash my dishes as a time to pause and to let my mind wander.
Having the hands in water is soothing.
And its a pleasant feeling, where cleaning is part of the food workflow : I cook, I eat, I clean (the kitchen, the dishes, my teeth).
I hate home dishwashers: you have to play Tetris after each meal to fill them, trying not to get your hands/arms dirty, then you have to let it do the work, and now you have to spend a few minutes to get the dishes out and store them where they should be, even though most of them are not linked to a meal you just had.
Maybe worse, you could unload the dishwasher at a time completely unrelated to food, so that breaks the link.
On the other hand, having worked in restaurants, industrial dishwashers are awesome.
From my experience, restaurants hand-wash some stuff (anything that needs scrubbing such as cookware) and use dishwashers for light-soil service items (plates, glasses, cutlery). But these aren't dishwashers like you have at home. They run very hot water and complete a wash/rinse in just minutes.
This is a great analogy, because just like AI, microwaves are good for quick fixes, tasks where you don't really care about the quality and would rather minimise the effort.
I think the analogy is a bit inaccurate here when people are talking about automation.
Microwaves do one thing, but they do it reliably. Microwaves didn't affect the culinary industry because cooking is far more than just heating food, and many tasks are very difficult to automate. LLMs are more general-purpose - the average Joe is now relying on them as a source of truth, advice and mental work across the board. However, LLMs can't be guaranteed to always be reliable, it's all probabilistic. The threat of automation here is in taking away a lot of the less important or less complex work. Low impact + high precision (microwave) vs. high impact + low precision (AI)
A better analogy might be computers, self-driving cars, or humanoid robots, since unlike microwaves, they can actually improve. Meanwhile microwaves were more or less the same since their invention.
I know it's not the point of the comment but it's a bit of a flawed analogy. Microwaves have won to a large extent, such that people without them are a bit of an oddity, and cooking with an oven is more of a special occasion thing than the default cooking method that it was before.
> cooking with an oven is more of a special occasion thing than the default cooking method that it was before.
This is an incredible self-report. If you consider microwaved meals to be your default method of cooking and not something primarily for reheating leftovers or defrosting frozen meat, I sincerely hope you've gotten your cholesterol and blood pressure checked recently. That is not normal.
Not to mention the amount of plastic they're adding to their body and the amount of trash they're creating. I know cooking for one can be arduous, but meal prep is a thing.
I haven't used my oven since buying a counter top air fryer (and a sous vide) a couple years ago. I can't think of a single reason why anyone needs a full size oven on a daily basis unless you're cooking for a large family.
Owning a counter top air fryer requires you to have enough counter space for one. I have been in kitchens where there is an oven built into the stove but counter space is at a premium.
I’d also say that while I like my air fryer oven, I would prefer to do some of the bigger things like a whole bird in the oven. It’s cheaper to buy a whole bird for meal prep.
I’m from northern Europe. I might use the micro to heat up leftovers or a cup of water for tea or whatever in a pinch, but in this household (and at all my friends’), the stove and the oven cook the food. I know literally no-one who could say they cook most meals in the micro.
I didn’t have a microwave oven before we bought a house. It took up too much space to justify, for such a relatively rarely-used appliance.
Same. Microwave is mainly used for defrosting or warming up leftovers. Maybe baking a potato in a rush, it works and it's faster but it's not as good as oven-baked.
Most houses still have ovens. Microwaves are pretty widespread as well. But, their main job is to warm up food which was cooked in an oven (either locally or at a centralized oven in a food manufacturing factory). Microwave and ovens are mostly complementary tools.
Although, the analogy seems sort of useless, in that the food preparation ecosystem is really not any less complex than the program creation ecosystem, so it doesn’t offer any simplification.
When I had neither I found it convenient to buy a small oven - the size of a microwave. It performs both functions. It doesn't reheat things as quickly as a microwave.
I've lived without a microwave for a long time and it's only a little bit inconvenient because things take longer to reheat.
Seems like a lot of people are dunking on this comment with anecdata.
Thankfully there is real data if we want to know how microwaves are used. Survey below says they are used a bit more than ovens, but half as much as cooktops/stoves. Varies by cohort and meal.
Ovens are a special occasion thing in my house because our oven is huge and I can usually do the same thing in the air fryer, which is just a small convection oven.
> and cooking with an oven is more of a special occasion thing than the default cooking method that it was before.
That really only makes sense for households with a toaster oven: single adults, childless couples, and retired people. A toaster oven makes a lot more sense for small meals, in part because it can heat up much faster than a full oven.
Otherwise, a daily family meal isn't a special occasion.
There’s a bit of irony here. A lot of commercial kitchens already rely heavily on microwaves and rapid heating equipment. In many restaurants the microwave is a very important tool in the workflow rather than something unusual. Do your friends not eat out much?
Sort of, although there's important nuance here. One would be surprised how often microwaves get used in proper commercial kitchens, as in places making their own food & not reheating stuff from a central commissary. But it's not being used in the way one likely pictures when they hear this. An example is that microwaves are great for par-cooking vegetables, especially potatoes.
> [...] and cooking with an oven is more of a special occasion thing than the default cooking method that it was before.
Not true in my household, in my parent's, in my in-laws, or any of my closest friends'. And none of us are cooks, so it's not a niche thing.
I'm sure in a lot of households the microwave oven is the primary form of cooking, but it's important to look outside the bubble before reporting trends.
It’s a great analogy because it is something that is everywhere, that everyone does use from time to time, but the idea that it magically displaces everything forever (with no downsides) is naively optimistic
(The original phrase was not just made up, it was sourced from actual news articles and marketing about microwave ovens, that’s why it feels relevant to a hype cycle like this)
You also see this kind of naive optimism if you go look at illustrations from the early 1900s. People believed everything would eventually be a machine: that a machine would feed you, wake you up in the morning, physically move everything within your home etc. And yeah those things are possible to do, but in reality they aren’t practical and we do not actually use machines to do everything because it has costs
So, you know how people talk about AIs as dumb pattern matchers?
So, you know how looking at one pattern and then just saying "this one will be like that one?" without considering the similarities and differences is similar to what people complain about AIs doing?
Consider: Unlike my Microwave, Claude can work on Claude. Unlike my Microwave, Claude gets better at more things. Unlike my microwave, we do not know what causes Claude to work so well. My Microwave cannot improve the process that makes my microwave.
Also, um.
I'm not sure if you noticed?
But machines are everywhere.
I'm typing on one while another one (a microwave, in fact!) heats my breakfast, while another one washes my clothes, while another one vacuums my floor, while another one purifies the air in my room, while another one heats the air in my room, while another one monitors my doors and windows for unauthorized entry and another one keeps my food cool and another one pumps the Radon gas out of my basement and another one scoops my cat's poop.
> I'm typing on one while another one (a microwave, in fact!) heats my breakfast, while another one washes my clothes, while another one vacuums my floor, while another one purifies the air in my room, while another one heats the air in my room, while another one monitors my doors and windows for unauthorized entry and another one keeps my food cool and another one pumps the Radon gas out of my basement and another one scoops my cat's poop.
You’re kind of missing the point a bit. Yes, machines are everywhere but the details are very different.
The machines don’t magically do that stuff for you. You have to buy them, plug them in, turn them on and off. Lots of people don’t have any at all. They can’t do most things unsupervised. There are still lots and lots of tasks for which a machine exists, that people will still do entirely manually
There is a naivety to these predictions that is chipped away by the mundane details of having to exist in the real world. Cost, effort etc
AI being bad isn't in conflict with AI winning or taking over. I think all of those things are true. I think what we currently call social media is bad. And it's won. No conflict there either.
> AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term. Jevons paradox isn't relevant to cognitive surplus - you need a very different model to capture what's going to happen.
No, AI has not "already" won. And phrasing it as you do, "It's taking over. It might be a year or two, or five, or ten" is an admission of that.
People may indeed not pause, but there's never any guarantee that the next step of progress is possible; whatever we reach may be all we can do, and we'll only find out when we get there. Or it might go hyperbolic and give us everything.
I'm not certain, but I suspect Jevons paradox is probably the wrong thing to bring up here, that's about cheaper stuff revealing more latent demand, and sure, that's possible and it may reveal a latent demand for everyone to build their own 1:1 scale model of the USS Enterprise (any of them) as a personal home, but we may also find that AI ends the economic incentives for consumerism which in turn remove a big driver to constantly have more stuff and demand goes down to something closer to a home being a living yurt made out of genetically modified photovoltaic vines that also give us unlimited free food.
(I mean, if we're talking about the AI future, why not push it?)
What I do think is worth bringing up is comparative advantage: again, this is just an "I think", I'm absolutely not certain here, but if AI can supply all demand at unlimited volumes*, I think the assumptions behind comparative advantage break.
> It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
Yes, and I think they've also not even managed to figure out the internet yet.
* and AI may well be able to, even if all models collectively "only" reach the equivalent of a fully-rounded human of IQ 115; and yes I know IQ tests are dodgy, but we all know what they approximate, by "fully rounded" I mean that thing their steel-man form tries to approach, not test passing itself which would have the AI already beat that IQ score despite struggling with handling plates in a dishwasher.
I don’t doubt the intelligence of the OP, though I question their wisdom and I doubt they know how to surf. They are more or less correct in their assessment of the current state of things and where things are heading, but this would entail a significant existential risk. Having a natural aversion to our own destruction is probably a sensible approach going forward.
again, grateful for the better words :) it's funny, I'm pretty charismatic in my community spaces IRL, but I constantly displease the HN hivemind
i think i need more patience -- i seem to fall into a certain tone due to my low expectations, and it's likely a self-fulfilling process which i am complicit in
Ah, the classic, forever-untestable "it's just around the corner" hypothesis.
I've lived through multiple "it's gonna be over in 12-18 months" arguments since November 2022. It's a truism for any technology to say that it's going to get better over time. But if you're convinced that "AI has already won", why not make a specific prediction? What jobs are going to be obsolete by when?
Just because it was wrong once doesn't mean it's always wrong. And was it really that wrong? The internet is great, but would it be the worst thing in the world if we didn't live our lives around it?
There are a lot of things that people were saying were fads that ended up being fads. There are also a lot of things that people were saying were fads that weren't. Nobody knows. Anyone who confidently says "AI is inevitable" or "AI is just a fad" is full of shit. They don't have a crystal ball, and they don't know what the future holds.
>It's kinda cool to see a whole lot of otherwise intelligent people who are so dogmatically and ideologically opposed to anything AI that they're going to willfully dismiss anything that AI produces regardless of utility.
You'd probably put me into that bucket, although I'd disagree. I'm not at all against using AI to do something like: type up a high level summary of a product featureset for an executive that doesn't require deep technical accuracy.
What I AM against is: "summarize these million datapoints into an output I can consume".
Why? Because the number of times I've already witnessed in the last year: someone using AI to build out their QBR deck or financial forecast, only to find out the AI completely hallucinated the numbers - makes my brain break. If I can't trust it to build an accurate graph of hard numbers without literally double checking all of its work, why would I bother in the first place?
In the same way, if you tell me you've got this amazing dataset that AI has built for you, my first thought is: I trust that about as much as the Iraqi Information Minister, because I've seen first hand the garbage output from supposedly the best AI platforms in the world.
*And to be clear: I absolutely think businesses across the board are replacing people with AI, and they can do so. And I also think it'll take 18+ months for someone to start asking questions only for them to figure out they've been directing the future of their company on garbage numbers that don't reflect reality.
I'm wondering if you're confusing "AI" with "LLMs" here.
I think LLMs are the equivalent of someone with a PhD in English literature and a few other things, and can be very intelligent and literate without being particularly good with numbers.
On the other hand you have plenty of machine learning models that are absolute beasts at everything number-related. I'm assuming you wouldn't put George RR Martin in charge of building your datasets.
Asking an LLM to analyze data directly doesn’t work. But they’re great at writing scripts to analyze (and visualize) data. Anthropic just figured this out last week and gave Claude a mode that does that for you.
This. I only ask LLMs to summarize non-critical stuff, i.e. just give me a general summary of all the work done over the past week.
If I were in need of hard analytics you can be damn sure I'd have it build a tool with a solid suite of tests following a rigorous process to ensure the outputs are sound. That's the difference between engineering and vibing.
Yes, you have to calibrate the effort to the task, you can't just blindly vibecode it. But if you treat it like a new college hire who still remembers their stats course, rather than a senior analyst who will just come back with the right answer, you can do some pretty high-level stuff that's trustworthy. It's so fast that it's no problem to double/triple check everything and even do it with multiple methods.
He is talking about the same thing as you, no? As you point out, the more AI exposure (red), the more likely to have higher wages (green). Which suggests that those who are embracing AI are those who are thriving the most. Same as what he suggested.
Whether people are adopting AI or not, everybody doing the same kind of job gets the same number for exposure to AI.
You can claim that AI is creating a Jevons paradox situation and making companies hire like crazy the people it nominally replaces. But then you would have to point to an instance of that happening, because it's clearly not there either.
I think a lot of the pushback comes down to your attitude. The way you're talking about AI is like how the crypto bros talked about bitcoin. Just being very insistent on your point of view is a red flag. Either you can present new data to convince people, or your insistence will just look like it's emotional rather than rational.
I use AI every day as part of my work, it's very unclear to me where it's going and we have no idea if we're on an exponent or S-curve. Now, normally people talk with conviction because they have more data. But one of the breakthroughs of crypto was this social convention of just have very strong opinions based on nothing. A lot of that culture has come over to AI.
Your comment typifies this, it's all about I need to get on board, AI has already won, you've got an advantage over me because you realise this.
Go back and look at the actual article you're commenting on. Did the AI analysis of job exposure provide anything of value? I'm not totally convinced it did, and you didn't even think about it. What critical thinking did you do about the data that came out of this dashboard?
Well, what do you mean by "works"? The guys on Twitter screaming "Have fun being poor" were by and large just trying to scam you. Like, I could phone up your grandmother and convince her she's got a virus and she needs to transfer her life savings to me before the hackers get it; that could make me rich, but does that "work"? I don't know what crypto actually worked for beyond creating a target-rich environment for scammers, a neat way to buy drugs, and a good way for criminals and rogue nations to launder money.
Ah, HN's favorite strawman: the "dogmatically and ideologically opposed to anything AI" person who, in my experience, largely doesn't exist.
However I was completely unimpressed with this tool when I saw it this weekend for two reasons:
The first is directly related to how this is built:
> These are rough LLM estimates, not rigorous predictions.
This visualization is neat (well except for reason number two), but it's pretty much just AI slop repackaged. There's no substance behind any of these predictions. Now I'm perfectly open to the critique that normal BLS predictions are also potentially slop, but I don't see how this is particularly valuable.
And the second: like 8% of the male population, I'm colorblind, so I can't read this chart.
For the record, I do agentic coding pretty much everyday, have shipped AI products, done work in AI research, etc.
Ironically, it's comments like yours that keep me the most skeptical. The fact that an attack on a strawman is the top comment really makes me feel like there is some sort of true mania here that I might even be a bit caught up in.
Uh huh.. but the data in Andrej's visualizer is showing software development growth outlook is at 15% (much faster than average)
Over the past year (where Opus has supposedly changed the game), we're seeing ~10% more job postings for software developers compared to this time last year [1,2]
A huge amount of our work is not easily verifiable, therefore it's extremely hard to actually train an LLM to be better at it. It doesn't magically get better across the board.
AI HAS WON. SURF OR DROWN. YOU DONT KNOW WHATS COMING!!!?!?!
Stop with this doomer drivel. It's sick. It's not based in reality and all it does is stress innocent people out for no reason.
AI is great for searching, I'll give you that. And that itself is a big deal. In software development, there is also real value provided by AI if you use it for code reviews. But I am not sure how much it would be worth if you have to retrain a model with new information just to give better search results and code reviews.
Maybe that will be subsidized by all the people like you who want everything to be done by AI, for the rest of us to use it as a better search tool and use it for quick reviews..who knows!
> AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term.
I think AI is not going anywhere.
I also don't think the future will play out as you envision. AI is a very poor replacement for humans.
And I say this as a misanthrope who doesn't have a particular beef against AI.
What doesn’t make sense to me about the AI Inevitabilism Embrace Or Die trope is how there’s going to be a sudden trap door which will eliminate all the naysayers and which can be avoided by Embrace. Because that doesn’t cohere well with how autonomous AI is or will be.
I could understand if all the naysayers doing old fashioned stuff like work all of a sudden have no more work to do. But the AI Embracers will have what, in comparison? Five years of experience manipulating large language models that are smarter than them by a thousand fold?
> definite advantage for people who are already in the mindset of distinguishing and discriminating information and sources on merit
This cuts both ways...
> there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term
What work do you think AI is going to replace? There are whole categories of people who are going to drown in the hubris of "AI being able to do the job" when it can't.
The moment one stops pretending that it's going to be AI, that we're getting AGI, and views it as another tool, the perspective changes. Strip away the hype and there is a LOT there... The walls of the garden are gonna get ripped down (agents force the web open, and create security issues). They end lots of dark patterns; you can't make your crappy service hard to cancel, because an agent is more persistent than that. One-size-fits-all software is going to face a reckoning (how many things are jammed into Salesforce sideways... that don't have to be). These things are existential threats to how our industry is TODAY, and no one seems to be talking about the impact to existing business models when the overhead of building software gets cut in half (and how it leads to more software, not less).
It is free for you to say this, because if you're wrong, there will be no consequences. Words are cheap. No different than various CEOs saying "AI will replace these workers" and now having to hire back those they laid off. Klarna, Salesforce, etc. Will be a great comment to reference in the future to capture the exuberance of the times.
> Some companies that announced large headcount reductions because of AI have since revised their talent strategies or have faced public criticism. Klarna, for example, the Swedish fintech that offers “buy now, pay later” e-commerce loans, reduced its human workforce by 40% between December 2022 and December 2024 as it invested in AI. (The company used a hiring freeze and natural attrition, not layoffs to achieve this cut.) But in 2025 the company’s CEO told Bloomberg that Klarna was reinvesting in human support, explaining that prioritizing lower costs had also led to “lower quality.” A spokesman told HBR that the company has hired about 20 people to deal with customer service cases the AI assistant can’t handle, and that the use of AI “changes the profile of the human agents you need in the customer support role.” The language-learning company Duolingo announced that AI would be used to replace many human contractors, and it faced considerable criticism on social media.
> For one, AI typically performs specific tasks and not entire jobs. As an example, Nobel laureate Geoffrey Hinton stated in 2016 that it was “completely obvious” that AI would outperform human radiologists within five years. A decade later, there is no evidence that a single radiologist has lost a job to AI—in part because radiologists perform many tasks other than reading scan images. Indeed, there is a substantial shortage of them.
* Companies are "AI washing" layoffs, blaming artificial intelligence for workforce reductions they would have made anyway, according to OpenAI CEO Sam Altman.
* A Resume.org survey found that 59% of hiring managers say they emphasize AI's role in layoffs because it "is viewed more favorably by stakeholders than saying layoffs or hiring freezes are driven by financial constraints".
* The stated reason for the layoff matters more than the fact of the layoff, and framing cuts as proactive restructuring around AI can result in a valuation boost, even if the technology doesn't actually work.
> The AI premium isn’t even reliable. By late 2025, Goldman Sachs group Inc. found that investors were actually punishing AI-attributed layoffs, with shares falling an average of 2%. The analysts concluded that investors simply didn’t believe the companies. But Block’s surge shows the incentive hasn’t vanished. It’s just a lottery instead of a sure thing. And executives keep buying tickets.
> The broader data confirms the gap between narrative and reality. A National Bureau of Economic Research study published in February surveyed thousands of C-suite executives across the US, UK, Germany and Australia. Almost 90% said AI had zero impact on employment over the past three years. Challenger, Gray & Christmas tracked 1.2 million layoffs in 2025, and AI was cited in fewer than 55,000 of them. That’s 4.5%. Plain old “market and economic conditions” accounted for four times as many.
So! Sophisticated capital market participants don't believe this; why do people here?
I'm very confused how you can put up such an obvious strawman, say all these wildly unsubstantiated things, and yet still get engagement. Who are you even talking to?
It's been several years and nothing has changed except the AI grift is crumbling as we get out of the post-covid slump.
This makes sense given both automation and the US's role in the global economy, but it runs somewhat contrary to standard ideas of class and inequality.
https://www.bls.gov/ooh/management/top-executives.htm
Apparently "top executive" median pay is $105,350 per year: https://www.bls.gov/ooh/management/top-executives.htm
Now just think of the comp levels in sectors like government, education, etc.
If you click the link it mentions "general and operations managers". They're tossing a lot of different roles into the category.
It's the combination of tech and big or fast growing companies.
People who operate in FAANG or Silicon Valley bubbles (or who spend too much time on Blind) can lose track of what salaries look like in the rest of the world.
I often share Buffer's open salary page because their compensation is actually pretty normal from all of the data I've seen and hiring I've done: https://buffer.com/salaries
Every time it gets posted there are comments from people aghast that the software engineers "only" make $200K and in disbelief that the CEO's salary is "only" $300K.
Can you elaborate?
Chief Executives is actually a specific sub-category of it and is, obviously, much smaller.
I think AI outcomes distribute to the contexts where it is used, and produce a change in how we work and what work we take on. Competition takes care of taking those surpluses and investing them in new structure, which becomes load-bearing and which we can't do without anymore.
In the end it looks like we are treading water, just like it was when computers got 1M times faster in a couple of decades, but we felt very little improvement in earnings or reduction in work.
Surplus becomes structure and the changed structure is something you can't function without. Like the cell and mitochondrion, after they merged they can't be apart, can't pay their costs individually anymore. Surplus is absorbed into the baseline cost.
The 1% pockets, this is where the vast majority of the extra productivity computers/internet/automation brought goes to for the last 50 years: https://www.epi.org/productivity-pay-gap/
1) The salaries of corporate employees
2) Shareholders and capital owners
Regarding number 2: "Shareholders" would include anyone who owns any stock at all, including a lot of middle class people with a simple S&P 500 ETF in their portfolio.
And the increase in productivity allowed more people to become capital owners, AKA entrepreneurs. The explosion in software entrepreneurs, for example.
Yes, but shares are not at all uniformly distributed. Tim Cook owns 3.28 million shares of AAPL. For comparison, the 50 million Vanguard customers have to divide 1.3 billion shares amongst them, averaging about 26 shares of AAPL each.
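The arithmetic in that comparison holds up; a quick sanity check, using the figures exactly as quoted above:

```python
# Figures as quoted in the comment above (not independently verified here).
tim_cook_shares = 3.28e6        # Tim Cook's AAPL holding
vanguard_customers = 50e6       # Vanguard customer count
vanguard_aapl_shares = 1.3e9    # AAPL shares held across Vanguard funds

per_customer = vanguard_aapl_shares / vanguard_customers
print(f"Average AAPL shares per Vanguard customer: {per_customer:.0f}")
print(f"Tim Cook's stake equals that of roughly "
      f"{tim_cook_shares / per_customer:,.0f} average customers combined.")
```

So one executive's holding matches the average stake of over a hundred thousand retail index-fund holders, which is the concentration point being made.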
> And the increase in productivity allowed more people to become capital owners, AKA entrepreneurs. The explosion in software entrepreneurs, for example.
The majority of those end up getting bought by larger software companies.
Overall capital ownership is increasingly concentrated among a small number of elites.
Because no matter what fairy tales you want to believe in, your $20 "invested" in Palantir won't make you a "shareholder" lmao
Lots of middle class people have graduated into upper-middle class: https://www.aei.org/research-products/report/the-middle-clas...
Wealth inequality is still a problem. But it's not just the people at the very top benefitting.
https://images.seattletimes.com/wp-content/uploads/2017/12/9...
https://www.peoplespolicyproject.org/wp-content/uploads/2020...
https://datawrapper.dwcdn.net/CvQar/full.png
https://static.guim.co.uk/ni/1415721490539/Wealth_line-chart...
Upper-middle class is people making ~$200k/year.
A lot of people have moved from middle class to upper middle class over the last decade. Both those categories are outside the 1%.
The upper limit and the median of the household income quintiles as of 2022 are:[0]
I think getting into the weeds on whether $80k or $100k or $120k/yr is middle class sort of misses the point, but at least to my eyes it is hard to argue you're middle class if you're making more than about $150k at the most. Even the GP, which I directionally agree with, says "upper-middle class is people making ~$200k/yr", but you're deep into the top quintile by that point, probably top 10%. I don't know what percentile I consider "upper middle", but it's definitely lower than top 10%.
[0] https://taxpolicycenter.org/statistics/household-income-quin...
For a business, the question is whether you can make more money by doing more ambitious things.
The natural state of a capitalist system is the monopoly.
But given that the stock market hasn't panicked, this must mean at least one of these premises is false:
1. Economic activity is relatively flat.
2. AI makes us a million billion zillion times more productive than we used to be.
3. The stock market is rooted in reality.
This was already obvious, the more important question is what are we (collectively, society & our governments) going to do about it?
We (should have) already known most of our jobs were bullshit jobs, especially white collar jobs. The difference is now we might have something coming that will eliminate the bullshit jobs.
But society will always need bullshit jobs or the whole system collapses. Not everyone can go dig ditches, so what do we do?
Agriculture is a good example of that: http://www.johnhearfield.com/History/Breadt.htm
I think this is a very important point. The hedonic treadmill means real gains are discounted. The novelty information cycle is like an Osborne Effect for improvements, like the semi-annual Popular Mechanics flying-car covers, where there is an enticing future perpetually nearly here and at the same time disappointingly never materialized.
Does the work you do provide more or less value to the company than your salary? Where does the difference go? If your killer feature closes a $5M deal, who gets that money?
We live as capitalist serfs. Someone else gets all the value you create, and you should be grateful for the peanuts they toss back to you.
This time the jobs most in the crosshairs of AI are the paper-pushing jobs that constitute the overhead of modern society. Instead of $1 widgets from China replacing $2 domestic widgets, it's gonna be $1 AI services replacing $2 services that require a real human.
This is hard to reason about because people tend to consume these kinds of services in big multi-hundred or multi-thousand-dollar increments. In practice what it means is that when you have to engage an accountant or engineer, or have something planned out in accordance with some standard, it will be substantially cheaper because of the reduced professional-labor component.
And of course, as usual, the string-pulling and investor class will get fabulously wealthy along the way.
(don't forget to "allow pasting" in [chrome] console first)
https://www.vischeck.com/run.html
It twiddles colors in a physiologically-aware manner to improve legibility for colorblind observers:
https://github.com/wadelab/VischeckTinyeyes/blob/main/websit...
A lot more inconvenient for others to have to pick colors that satisfy all potential sight issues, which is primarily why I think it should be an OS solution rather than an individual creator's responsibility. It's not that I don't care about those with the sight issue, it's purely about who is responsible for creating a reasonable solution. And honestly, there's no way every creator is going to study accessibility and so it's just a never ending uphill battle. If you had a tool in your system already that could help, why wouldn't you use it?
Co-authored-by: Claude <claude@anthropic.com>
BLS forward looking guidance means nothing when technology revolutionizes the nature of work.
No one can predict everything perfectly. This is just the guidance based on the data that was reported. AI is advancing faster than anyone can imagine and no one knows the impact - good or bad.
Putting aside the slop facade placed atop the data... why would we trust the data?
Yay!
>Computer Programmers: -6%
Oh no
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
Computer Programmers median pay according to BLS: $98,670 per year
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
Software developers typically do the following:
- Analyze users’ needs and then design and develop software to meet those needs
- Recommend software upgrades for customers’ existing programs and systems
- Design each piece of an application or system and plan how the pieces will work together
- Create a variety of models and diagrams showing programmers the software code needed for an application
- Ensure that a program continues to function normally through software maintenance and testing
- Document every aspect of an application or system as a reference for future maintenance and upgrades
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
Computer programmers typically do the following:
- Write programs in a variety of computer languages, such as C++ and Java
- Update and expand existing programs
- Test programs for errors and fix the faulty lines of computer code
- Create, modify, and test code or scripts in software that simplifies development
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
A programmer is like a translator; somebody else came up with what to do and you're doing the mechanical work of converting words into C++.
A developer comes up with what to do.
Hence programmer is the lower-paid position.
There's no functional difference between a 'software developer' and a 'programmer'. They're just synonyms that sometimes pay differently.
Computer Programmers (1997) https://web.archive.org/web/19971111101442/http://www.bls.go...
Which then transitioned in the '00s to Computer Programmer (2004)
https://web.archive.org/web/20041103085206/http://www.bls.go...
and Computer Software Engineers (2004) https://web.archive.org/web/20041110033114/http://www.bls.go...
Pre-dot-com boom it was lumped together with a small call-out to "application" vs "system". With the dot-com boom, the more senior role of "computer software engineer" was described while the pejoratively described "code monkey" was the "computer programmer". That distinction between the two may not exist today. However, it takes a long time for those things to change.
It feels like the intent was that "Programmers" were the ones doing the routine / lower skill tasks while the Developers were the ones that did the specification and architecture.
Those got juggled around and largely people getting listed as "Computer Programmer" is going down as the company relists them as Software Developer.
This is also part of the confusion of "Web Developer" which is also in there.
It reflects what the government thought management titles and roles were some years ago.
---
Edit: From days of old: https://web.archive.org/web/20110616142157/https://www.bls.g...
https://web.archive.org/web/20110531043521/http://www.bls.go... https://web.archive.org/web/20110925005933/http://www.bls.go... Note that the specifying part of it isn't done by the programmers but by the other roles... And for completeness:
https://web.archive.org/web/20130624010204/http://www.bls.go...
Reason for hope
They're saying that programmers will be declining. While Developers, and crucially, Testers and QA people will be increasing. That testers and QA become more important in the future sounds plausible to me in a future hypothetical world of ubiquitous AI.
All of that doesn't necessarily imply that the Developer class of employees will grow at the same rate as the Tester and QA classes of employees.
Congress/president should pause H1B visas or hike up fee to 200-500K so that only truly exceptional talent are allowed in. Right now it's just give away to corporations that are laying off people by tens of thousands.
1) how many of these people leave the country in this analysis.
2) OPTs likely will get h1b/l1s/leave the country and are being counted distinctly.
3) not all h1b/l1/OPTs are for tech. Majority for sure, but there's a conversion factor.
Especially in the current situation, where green cards are much harder to obtain and many OPTs don't find a job, I expect (1) to be much larger than in the past.
as a more general observation, this line of reasoning does fit lump of labor fallacies: https://en.wikipedia.org/wiki/Lump_of_labour_fallacy
So healthcare industries turn to H1Bs to hire specialty positions in underserved / rural areas. The alternative is to shut these facilities down, which has other negative aspects to communities.
It turns out that they are, but (if I do not misread the situation) there is a regulatory bottleneck:
>The United States is grappling with a physician shortage, but the solution does not lie in simply opening more medical schools. As a physician-scientist and former founding dean of a medical school, I argue that the true bottleneck is not the number of medical school graduates but the insufficient number of residency training positions. Since the Balanced Budget Act of 1997, which froze the number of Medicare-funded residency slots, the United States has seen a steady increase in medical graduates, yet the availability of residency spots has stagnated. This mismatch between undergraduate medical education (UME) expansion and the lack of corresponding growth in graduate medical education (GME) is the key issue.
https://pmc.ncbi.nlm.nih.gov/articles/PMC12256077/
As this has been the arrangement since 1997, by now a graduated American child of an immigrant H1B specialist trained in a foreign country may be unable to secure a 'residency training position' and therefore unable to practice medicine in his or her own country? It sounds absurd.
https://apnews.com/article/teacher-jobs-h1b-j1-visa-online-s...
That's at 125% above the poverty level.
That was the way that it started... the H-1A ( https://en.wikipedia.org/wiki/H-1A_visa ) was for nurses and H-1B was for other specialty occupations.
Nurses transitioned to the H-1C visa (which expired in 2009 https://www.uscis.gov/archive/h-1c-registered-nurse-working-... )
So, split out technology careers from H-1B so that they can be regulated with less impact on the other careers that are currently under the H-1B.
The other part would be to properly fund DOL so that they have the resources to inspect H-1B-dependent employers ( https://www.dol.gov/agencies/whd/fact-sheets/62c-h1b-depende... https://en.wikipedia.org/wiki/H-1B-dependent_employer ) more carefully and prosecute visa fraud in a more timely manner (note that this also gets to other parts that got struck down with Chevron deference so instead of DOL being able to do things administratively it requires going through the courts).
And yes, I do believe that upping the filing fees for H-1B-dependent employers would be a good thing... and auditing them to make sure that they have a butt in seat position for their employees and aren't hiring to try to make a deeper bench of poorly qualified individuals doing routine tasks that do not require a specialty technology degree.
The current (rather hamfisted) approach to trying to cut back on immigration has knock-on effects that are impacting rural and remote parts of America to a much greater degree than urban areas.
https://kansasreflector.com/2025/10/18/how-new-foreign-worke...
https://www.alaskasnewssource.com/2026/03/14/sen-murkowski-i...
Since the fee went up to $100k, I’m not aware of any companies still sponsoring hires who need a new H1B
We will see how much the $100k fee affects things during this H1B lottery round in few weeks.
> Only about 70 employers have paid a $100,000 Trump fee on H-1B workers from outside the US since it was imposed through a September White House proclamation, a government attorney said Thursday.
A 100k one-time fee is nothing for big employers. That's 25k/year for 4 years, and if you realize that H1B's can't easily leave their job it's obviously worth it.
Compare hiring an H1B that is stuck at their job, to an American who can leave at any time. You can pay the H1B a lower wage to compensate for the fee you paid to get them into the role. 25k/year for 4 years is worth it for not only the reduced churn that comes with training a new person, but also you don't have to pay any of the incentives that come with getting a new employee into the role like sign-on bonuses, wage bumps, benefits etc.
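The amortization arithmetic in that argument is simple enough to make explicit. A minimal sketch, where all figures are the comment's assumptions rather than official numbers:

```python
# Back-of-envelope: amortize a one-time $100k visa fee over a typical stay.
# Inputs are illustrative assumptions from the discussion above.
fee = 100_000            # one-time H-1B fee (USD)
years = 4                # assumed length of stay on the visa
per_year = fee / years   # effective annual cost of the fee

# If reduced churn and negotiating leverage are worth more than this per
# year to the employer, the fee alone does not deter a large company.
print(per_year)  # 25000.0
```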
The cleaner approach is the immigrant has to pay that value in visa expenses, taxes, or something else; while the company should have to pay market rate for the position.
Pulling up my alma mater... https://www.openthebooks.com/wisconsin-state-employees/?Year...
The various roles that you'll find for software developers: Sr Is Specialist, Is Tech Srv Cons/Adm, Sr Inform Proc Conslt, Sr Systems Programmer
And you can pull up the pay scale at https://hr.wisc.edu/standard-job-descriptions/?job_group=Inf...
$80k/y isn't "we're paying H1-B half of what the going rate is" but rather "the state legislature has set this pay scale and we're paying everyone that amount" ... And many times, H-1B visas aren't eligible to work in those roles.
There's absolutely no reason government couldn't pay competitive rates for software engineers. They do it for doctors and administrators of state-owned medical centers. Not to mention football coaches
https://openpayrolls.com/justin-wilcox-146812860
Trying to make state government competitive with Big Tech salaries (especially in states that aren't California) would not go over well with voters.
While private sector deals with layoffs and uncertainty, the public sector has things like "budget not good this year? Two weeks unpaid vacation for everyone" - https://docs.legis.wisconsin.gov/code/executive_orders/2003_... ... 401k matching? How about a fully funded pension instead. https://reason.org/commentary/the-wisconsin-retirement-syste...
Football coaches are revenue generating for universities... software developers at universities not so much. Doctors are licensed professionals that have a decade of schooling... software developers frequently reject licensure and celebrate their lack of a formal education.
It's supposedly a program for importing the best and brightest talent that doesn't exist in the US, but somehow those best and brightest people get paid LESS than their American counterparts? It was never about the best and brightest; it was always about bringing in cheap labor that can't leave.
Sadly I don't think we'll ever fix it either, right leaning industrialists support it because they benefit from cheap labor, and the left leaning politicians get to continue importing people who overwhelmingly vote for them. As usual the loser in the equation is the middle class American worker.
I don't think having an H1B helps you accelerate your citizenship application in anyway, and for many countries the wait for legal citizenship is decades long.
Just look at the data for how people vote by demographic group (race).
Nonwhite groups overwhelmingly vote blue, H1B's are overwhelmingly nonwhite. This is not controversial.
https://www.pewresearch.org/politics/2025/06/26/voting-patte...
https://www.reddit.com/r/neoliberal/comments/aoodm8/how_the_...
This can be sped up quite a bit if they marry a US citizen, but it will still take several years. Now their children would be citizens, but that's another 18 years before they can vote. Politicians aren't known for playing the long game...
Except this is literally false. Every single study I’ve seen that claims this has no real evidence - just speculation without knowing the details of the jobs or the people being hired, based on their own self-serving false comparisons to make dubious claims that similar jobs are paid differently.
Since you said “across the board”, do you think Google or Amazon pay a software engineer at the starting level differently based on immigration status? No, they don’t. Literally every manager at big tech could tell you this confidently.
There is plenty of data to back this up.
>A total of 60% of all H-1B jobs are assigned wage levels that are well below the local median wage.
https://www.epi.org/press/a-majority-of-migrant-workers-empl...
As for your claims about Apple - I am guessing you aren’t a manager and don’t know about how their pay scale works. I’m not doubting your claims about the quality of some workers - although I bet you’ll find plenty of non immigrant people not doing work as well. But I know the claim on pay is wrong, once you adjust for performance ratings and levels.
I think in general we have to question what work one does - not in a negative way - I think it's healthy to do so. Standard economic models and thinking are pretty dated and don't really reflect reality as the world of work evolves.
H1Bs are not cheap labor. They’re almost always pricier than the alternative to the company. This is a myth that is ultimately rooted in racism more than facts. Most of the top H1B filers - big tech companies in particular - pay literally identically for the same job. They have fixed pay structures internally, in part because if you don’t, you could face discrimination lawsuits - but mostly to just not lose the competition for talent.
But the cost to the company isn’t the cost of the pay anyways. It’s also the cost in lost time of the H1B process, the fees you pay as part of the process, the costs of law firms you have to hire, the cost of time delays, the risk of the immigration process not working out. Those work out to a lot more value than 25K/year.
An H1B is also not stuck in their job - you can transfer H1Bs.
I find this argument extremely funny, because when immigrants are taking the white-collar jobs, you all turn anti-immigrant and tighten the visa rules, but when blue-collar and low-level jobs are taken by people here illegally, you turn a blind eye with "no one is illegal on stolen land" logic.
I 100% agree that the H1B has been extremely abused by folks from specific countries running body-shop tech consultancies, but the solution is not to hike the fees to $200k-500k.
The $100k fee by the Trump admin is already showing effects in the job market. Most companies are not readily sponsoring H1B visas anymore; getting a big tech job as an intl student is already tough, and only exceptional ones are getting such jobs.
Meanwhile, the people in tech who oppose immigration often do bring up the same argument you do - that it's bad to allow immigrants to compete with blue collar American citizen labor even if this competition would make some things that these white-collar tech workers buy cheaper - or ground their opposition to immigration in negative effects of immigrants on American society that aren't directly related to competition for blue-collar jobs (generally, that the presence of large numbers of immigrants has bad cultural or political consequences for the US as a whole).
The political fight over immigration among white-collar tech workers I think has more to do with battling moral claims, or different visions of what the US should look like culturally and politically, than it does over purely-materialist job competition concerns that they are hypocrites about when the job competition is happening to blue-collar workers.
There's lies, damned lies, and then: there's statistics.
You have to weigh the growth in jobs against how many new people there are to take them, the locations they're in, and, somewhat weirdly, other jobs.
Plenty of people feel so dejected at the current state of things that they leave computer work entirely, creating "openings" where there isn't actually any growth.
Like all things you try to understand: a single datapoint, taken alone, is like trying to calculate the heat of the sun by looking through a telescope at Jupiter. It gives you a far-out, tiny facet of data that only makes sense when coalesced with a hundred others.
My friends and I who have a bachelor's degree in CS make more money than my friends who have or are working towards master's degrees in CS, because the former are working in the private sector and the latter are in academia making peanuts.
Edit: Another possible reason that Masters degrees were less common in the past, so the Bachelors pay statistics skew towards people with more work experience in their higher earning years, whereas the Masters pay statistics skew towards younger people with less work experience.
In the AI maximalist world where humans are obsolete and cannot contribute to the economy in any meaningful way, there is actually no reason for public education to exist beyond being a free day care for non-rich people. Why learn algebra/calculus at all if the AIs can do it? Why should the US invest billions of dollars into public education instead of data centers?
I hope the US and AI leaders are still "speciesist" in that they put humans first. I hope AI will cure all illnesses, unlock space travel, and lead to a flourishing of humanity, not just a flourishing of datacenters. It's also possible that AI just cleaves societies in half and we are all worse off for it.
On second thought, client service folks might do extremely well here!
What you mention here is the exact reason my earlier relationship went bust, because I didn't have any of these; then the children arrived :-X
Stand in front with a gun while mobs come to burn down the data center that took their jobs.
(I think I'm half joking).
1: https://www.businessinsider.com/robot-dogs-quadruped-data-ce...
I doubt it'd be old hit-and-run, more like small scale Ukraine with drones filled with explosives.
Apple, a very successful company, makes 300B/y revenue? (ish)
~10% is all you need to be Apple.
And, it can work by taking all of 10% of the jobs and collecting the whole salary (the AI employee -- dubious proposition),
or by taking 10% of everyone's salary and automating part of everyone's job (the AI "tool" -- much more plausible).
If "part" being automated is >10%, we all win in the long run, every company gets productivity growth without cost growth, etc etc.
If you add in data center costs, and multiple competing AI companies, and then expand the TAM to all white collar work worldwide, you can make everyone successful beyond their wildest dreams with a "20% of work for 20% of the cost" model. Again, how you distribute that 20% remains to be seen (20% new unemployment, or a new 0% unemployment with "tools").
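The back-of-envelope above can be sketched directly. A minimal sketch, where the white-collar payroll total is an illustrative assumption, not a sourced figure:

```python
# Rough TAM sanity check: what share of white-collar payroll would an AI
# vendor need to capture to reach Apple-scale revenue?
apple_revenue = 300e9          # ~USD/year, per the comment above
white_collar_payroll = 3e12    # assumed addressable payroll (USD/yr)

share_needed = apple_revenue / white_collar_payroll
print(f"{share_needed:.0%}")   # 10%
```

The point of the exercise: even a modest cut of a very large payroll base is enough to build an Apple-sized business, which is why the "AI tool taking a slice of everyone's salary" model is more plausible than full job replacement.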
I formalized my thoughts here: https://jodavaho.io/posts/ai-jobpocolypse.html
It's also understated, because the real value of AI is not in replacing work, but making new products possible either because it's finally cheap enough to make them, or because -- AI.
Potable water is far more important than AI or iPads ever will be, but the world's most valuable water company only does about 5B/year in revenue: https://en.wikipedia.org/wiki/American_Water_Works
In the same way that Amazon gets rich taking ElasticSearch for free and charging for hosting, Amazon will take free models and charge to host them. The companies building frontier models have massive R&D costs and no moat.
Given the state of AI (LLMs), they still need a very skilled human driver to operate.
So no, little or none of the AI productivity gains will go to workers, barring significant changes in public policy like universal basic income and the massive tax increases necessary to implement it.
Frequently seen as a big fun number in pitch decks. "The TAM for our new Coca-Cola killer is $1.6T: all humans who imbibe liquids on a regular basis. You simply MUST invest."
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
Are LLMs good at scoring? In my experience, using an LLM to score things usually produces arbitrary results. I'm surprised to see Karpathy employ it.
Now I'm not sure if this is actually an LLM-only thing, because I think people probably do something similar when you ask them to put a number on things without providing a concrete grading rubric...
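One common mitigation is to anchor the score with an explicit rubric instead of a bare "rate 0-10" prompt. A minimal sketch, where `ask_llm` is a hypothetical stand-in for whatever completion API you use and the rubric text is an illustrative assumption:

```python
# Anchor an LLM score with an explicit rubric so the 0-10 scale has
# concrete meaning. `ask_llm` is a hypothetical callable (prompt -> str).
RUBRIC = """Rate the occupation's AI exposure from 0 to 10:
0-2: physical, in-person work with little routine text/data handling
3-5: mixed; some tasks involve documents, code, or standardized decisions
6-8: mostly screen-based work on text, code, or structured data
9-10: output is text/code that current models already produce well
Answer with a single integer."""

def score_occupation(ask_llm, occupation: str) -> int:
    reply = ask_llm(f"{RUBRIC}\n\nOccupation: {occupation}")
    return int(reply.strip())

# With a fake model we can at least exercise the parsing:
assert score_occupation(lambda prompt: " 7 ", "Paralegal") == 7
```

This doesn't make the scores objective, but it makes them reproducible and auditable, which is the same reason human graders get rubrics.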
Whats the outlook like?
Thank you!
Some argue more apps means more plumbers and problem solvers, but we'll see.
Speculation: I think infra roles including on-prem are growing again, based on things I've seen.
If you turn on the color filters in accessibility settings in macOS you can see what the contrast could look like to a colorblind person.
The general trick is you can rely on differences in color lightness, patterns, text and icons, but not differences in color hue. The page should be usable in grayscale.
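The grayscale rule can be checked mechanically with the WCAG relative-luminance formula. A minimal sketch, not a full contrast checker; the example colors are made up:

```python
# Relative luminance per the WCAG definition: linearize each sRGB channel,
# then take the weighted sum. Colors that differ only in hue will have
# nearly equal luminance and vanish for colorblind viewers / in grayscale.
def channel(c: int) -> float:
    c = c / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple) -> float:
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# A palette is grayscale-safe only if its colors differ in lightness,
# not just hue; print the luminance gap to see how distinguishable two
# colors remain once hue information is gone.
red, green = (200, 40, 40), (40, 160, 40)
print(abs(luminance(red) - luminance(green)))
```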
Needs
- [utility] add filter by keyword / substring match; e.g. the majority of visualized reports are unlabeled, requiring hovering with a mouse pointer
- [improve discovery] add sort by demographic / population impact; e.g. the largest block is 7M ('Hand laborers and movers') yet is sorted to the bottom-left by default
I'd like to use this on my website and also see if I can create variations for some of the major EU markets.
> Taxi Drivers, Shuttle Drivers, and Chauffeurs
> Overall employment of taxi drivers, shuttle drivers, and chauffeurs is projected to grow 9 percent from 2024 to 2034, much faster than the average for all occupations.
...word?
https://apnews.com/article/trump-jobs-firing-f00e9bf96d01105...
deep down you all know something is just going to randomly get released one week in the near future that makes you go "well, pack it up boys" - or you just haven't been paying attention
to clarify - just like the site says - I don't think those jobs are going away, maybe entry level will have the same issues as some industries are encountering, but ideas of relative immunity are completely wrong
A -4.0% hit to cashiers may have less of an impact than -4.0% to lawyers or another category that is propping up the middle of the economy with spending.
All the "research" on the site comes from a single LLM prompt.
And for whatever reason a lot of people in startup/tech seem to have a huge Dunning-Kruger effect blind spot where they believe knowing a lot about one thing makes them an expert in everything.
This used to just be funny, but when it started to intersect with politics it began to actively contribute to destroying society. It isn't funny anymore.
(I don't think Karpathy's job data here is destroying society, this is a more generalized observation).
This is the equivalent of telling a designer they can't create infographics on anything but principled design subjects, or else they're out of line. Any research or data they might use isn't relevant because they're not experts? lol?
It is a website that visualizes the output of an LLM prompt and passes it off as data. Big difference between the two.
It's especially(!) common for people who made an exit and are now "wealthy": sure, they can afford to have an opinion on everything, but very often they are just talking bullshit, thinking: "Hey, I made it in field X, so why not try field Y?"
Especially the "MBA crowd" is famous for this: for whatever reason they think they are more intelligent than, e.g., an engineer who filed a patent (while most of the MBA bobos would fail just at acquiring all the documents required for it).
Other example: if you wrote a book once and it got traction, even if you are not a proven expert, you will be invited to television shows etc. (and MORE than the people who are real experts with a proven track record).
There is definitely impact on Software engineering jobs at the moment, interns/juniors are struggling to find jobs, companies are squeezing every bit of dev slack time to produce more stuff with AI.
Is that notion supported by this content? The BLS Outlook for most software engineering jobs is mostly in the "much faster than average" growth range.
* Yes, software engineering jobs can grow - by increasing demand for custom software thanks to what coding agents unlock
* AI can still impact them - by turning software engineers into LLM code approvers
What would be useful is tracking the change in minimum pay per hour from legitimate job listings, now that there are quite a few states that require posting pay ranges on job listings.
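A minimal sketch of that tracking idea, using made-up listing data and only the standard library (the pay figures and months are invented for illustration):

```python
# Track the low end of posted hourly pay ranges over time, taking the
# median per month to dampen outlier listings.
from statistics import median

listings = [  # (month, low end of posted hourly range, USD)
    ("2024-01", 52.0), ("2024-01", 48.5), ("2024-02", 47.0),
    ("2024-02", 45.5), ("2024-03", 44.0),
]

by_month: dict[str, list[float]] = {}
for month, low in listings:
    by_month.setdefault(month, []).append(low)

trend = {m: median(v) for m, v in sorted(by_month.items())}
print(trend)  # month -> median of the posted minimums
```

Since several states now mandate posted ranges, the low end of the range is the least gameable number a listing contains, which is what makes it a useful signal.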
Started my career in the decade of offshoring and didn't think we'd have anything close to an "AI" taking our jobs before we potentially unionized or had a government that would protect its labor force from being replaced by literal robots.
2020-2022 felt like the usa tech ship was finally growing into something really great. All gone now.
When I worked in devops I always worried that my job was automating away other engineers, it definitely had a "when will this come for me" feeling, because it really was, now the dev and ops are both getting automated away.
This is my first time looking at HN in practically a year. Tech is just so uninteresting to me now. Nobody is hiring SDE/SWE/SREs except for the problem makers, like Anthropic, Meta, etc. Anthropic has pages and pages of $300k-$600k roles open right now. But do you go help the rest of your colleagues lose their jobs?
I guess lets talk about kubernetes or something...
I guess that was to be expected...
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
The sad part isn't that this is low-effort AI slop, but that intelligent people and policy makers are going to see it and probably make important decisions impacting themselves and others based on these numbers.
https://news.ycombinator.com/newsguidelines.html
In addition, little work is done to separate the classes. It has probation officers in the same node as teachers, completely separate from law enforcement.
Here's some much better examples:
- https://www.washingtonpost.com/nation/2022/05/04/abortion-nu...
- https://flowingdata.com/2015/04/02/how-we-spend-our-money-a-...
It's not great for them, but it's a definite advantage for people who are already in the mindset of distinguishing and discriminating information and sources on merit, instead of running an "AI bad" rubric as part of their filter.
AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term. Jevons paradox isn't relevant to cognitive surplus - you need a very different model to capture what's going to happen.
It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
Maybe it was linked from a comment somewhere on HN but just today I saw a post saying “Microwaves are the future of all food: if you don’t think so, you better get out of the kitchen”
Microwaves have already won. There will be a microwave in every home over the next few years.
It’s time to start microwave cooking or drown
It's annoying that the dishes still have some pooled water in them when the cycle finishes; it doesn't always get everything perfectly clean; I have to know not to put the knives or the wooden stuff or anything fancy in it. But in spite of all of that, I use it every day, it's a huge productivity boost, and I'd hate to be without it.
It is significantly less productive to do both, and yet…
That's completely, demonstrably false. Our dishwasher broke and we couldn't replace it for a month for various reasons – it was a complete nightmare. Without a dishwasher:
- You need to have a space to store dirty dishes
- You must wash them right away, unless you want smell of rotten food that attracts all sorts of nasty from insects to rodents
- You need to have a big enough kitchen sink to wash comfortably
- You need to have a steady supply of hot water in the kitchen
- You need to have a supply of latex gloves, unless you want your hands to look like they're 50 years old
- You need to have a drying rack
- It takes a shitton of time compared to loading dishwasher, starting it and forgetting about it
- You need to clean up everything after you're done
I can tell you that I didn't observe a single hand-wash-only holdout.
Perhaps such holdouts existed at a point, but a restaurant can only flatter the ego of their performatively-unproductive seniors for so long. Competition exists.
Hand-washing dishes also, from what I understand, uses more energy and water than the dishwasher does.
Correct, more energy, detergent, and water. Dishwashers are more efficient than what you can do by hand because they effectively manage their water usage.
A modern dishwasher will use 3 to 4 gallons on a run. By comparison, my kitchen sink holds about 10 gallons of water on each side. When I wash by hand, I'll fill one side with soapy water and rinse each dish individually. Easily more than 10 gallons of water get used in the whole process.
Dishwashers are so efficient because they rinse everything off the dishes with about a gallon of water, drain it, then run detergent in a second pass that gets off the tougher food stains (another gallon), and then rinse with a final gallon.
Dishwashers maximize getting food particulates into dirty water in a way that you can't really sanely do by hand.
If I hand wash, I wash as I go. It takes maybe 5 minutes to wash up dishes from breakfast or lunch, maybe a little more for a big dinner, maybe not.
Dishwashers let you accumulate dirty dishes for a day or two which is the real advantage in water savings. But I've noticed a lot of people pre-wash by hand and then load the dishwasher. I don't understand that, if I'm going to "pre-wash" anything I'll just wash it completely and put it away.
5 minutes of most sinks running is 10 gallons of water. (Most kitchen sinks are 2 gallons per minute).
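The flow-rate arithmetic above, made explicit with the thread's typical figures (assumptions, not measurements):

```python
# Compare a 5-minute hand wash at a typical faucet flow rate against a
# full dishwasher cycle. Figures come from the comments above.
sink_flow_gpm = 2.0        # gallons per minute, common US kitchen faucet
minutes_running = 5
hand_wash_gal = sink_flow_gpm * minutes_running

dishwasher_gal = 4.0       # modern machine, full cycle
print(hand_wash_gal, dishwasher_gal)  # 10.0 4.0
```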
> Dishwashers let you accumulate dirty dishes for a day or two which is the real advantage in water savings.
I agree. If you aren't filling the dishwasher then you are probably wasting water. However, a full dishwasher is going to be a real water/energy saver. Especially if you aren't washing the dishes before putting them in the dishwasher. (I know a decent number of people do that. It's a hard habit to break).
My wife and her family :D. Water conservation mentality is a battle.
I'm pro-dishwasher, but you could use much less water handwashing.
If I don't have a dishwasher, my normal method is to stopper one side of my sink, squirt some dish soap on the first few dishes, and run just enough water to wet the dishes. Then I scrub some dishes, run the water (into the stoppered sink) just to rinse them as I transfer to the dish rack, then turn off the water and repeat. The dirtiest dishes that have the most food stuck on get done last so they get the most time soaking in the soapy rinse water from the rest of the dishes. I can do a full dishwasher load with one side of my sink maybe 1/4 full of water.
(The "normal" cycle is specced for 11.0-27.7 litres but uses more electricity, which is more expensive than water.)
Modern high-efficiency dishwashers probably beat the most efficient humans now, but that's relatively recent and not a huge margin (and may not get the same results).
I use the time I spend to hand-wash my dishes as a time to pause and to let my mind wander. Having the hands in water is soothing.
And it's a pleasant feeling, where cleaning is part of the food workflow: I cook, I eat, I clean (the kitchen, the dishes, my teeth).
I hate home dishwashers: you have to play Tetris after each meal to fill them, trying not to get your hands and arms dirty, then you have to let it do the work, and then you have to spend a few minutes getting the dishes out and storing them where they belong, even though most of them aren't linked to a meal you just had. Maybe worse, you could unload the dishwasher at a time completely unrelated to food, which breaks the link.
On the other hand, having worked in restaurants, industrial dishwashers are awesome.
Fridge OTOH, not so much.
Microwaves do one thing, but they do it reliably. Microwaves didn't affect the culinary industry because cooking is far more than just heating food, and many tasks are very difficult to automate. LLMs are more general-purpose: the average Joe now relies on them as a source of truth, advice, and mental work across the board. However, LLMs can't be guaranteed to always be reliable; it's all probabilistic. The threat of automation here is in taking away a lot of the less important or less complex work. Low impact + high precision (microwave) vs. high impact + low precision (AI).
LLMs require a lot more effort.
This is an incredible self-report. If you consider microwaved meals to be your default method of cooking and not something primarily for reheating leftovers or defrosting frozen meat, I sincerely hope you've gotten your cholesterol and blood pressure checked recently. That is not normal.
This is nuts! I use an oven every day, dude - so it's a special occasion, is it?
The default method for cooking is using an oven or using a stove. Microwaving is for heating up left-overs for the most part.
One of the dangers of people who are too close to programming is that they think of life as binary.
I’d also say that while I like my air fryer oven, I would prefer to do some of the bigger things like a whole bird in the oven. It’s cheaper to buy a whole bird for meal prep.
Or you're batch cooking
I’m from northern Europe. I might use the micro to heat up leftovers or a cup of water for tea or whatever in a pinch, but in this household (and at all my friends’), the stove and the oven cook the food. I know literally no-one who could say they cook most meals in the micro.
I didn’t have a microwave oven before we bought a house. It took up too much space to justify, for such a relatively rarely-used appliance.
I think OP is just an outlier.
Although, the analogy seems sort of useless, in that the food preparation ecosystem is really not any less complex than the program creation ecosystem, so it doesn’t offer any simplification.
I've lived without a microwave for a long time and it's only a little bit inconvenient because things take longer to reheat.
Thankfully there is real data if we want to know how microwaves are used. Survey below says they are used a bit more than ovens, but half as much as cooktops/stoves. Varies by cohort and meal.
Source: https://indoor.lbl.gov/publications/residential-cooking-beha...
Ovens are a special occasion thing in my house because our oven is huge and I can usually do the same thing in the air fryer, which is just a small convection oven.
That really only makes sense for households with a toaster oven, single adults, childless couples, and retired people. A toaster oven makes a lot more sense for small meals, in part because it can heat up much faster than a full oven.
Otherwise, a daily family meal isn't a special occasion.
The food has been cooked in industrial ovens in the factory.
Not true in my household, in my parents', my in-laws', or any of my closest friends'. And none of us are cooks, so it's not a niche thing.
I'm sure in a lot of households the microwave oven is the primary form of cooking, but it's important to look outside the bubble before reporting trends.
You think "there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term" is wrong?
(The original phrase was not just made up, it was sourced from actual news articles and marketing about microwave ovens, that’s why it feels relevant to a hype cycle like this)
You also see this kind of naive optimism if you go look at illustrations from the early 1900s. People believed everything would eventually be a machine: that a machine would feed you, wake you up in the morning, physically move everything within your home etc. And yeah those things are possible to do, but in reality they aren’t practical and we do not actually use machines to do everything because it has costs
So, you know how looking at one pattern and just saying "this one will be like that one" without considering the similarities and differences is similar to what people complain about AIs doing?
Consider: Unlike my Microwave, Claude can work on Claude. Unlike my Microwave, Claude gets better at more things. Unlike my microwave, we do not know what causes Claude to work so well. My Microwave cannot improve the process that makes my microwave.
Also, um.
I'm not sure if you noticed?
But machines are everywhere.
I'm typing on one while another one (a microwave, in fact!) heats my breakfast, while another one washes my clothes, while another one vacuums my floor, while another one purifies the air in my room, while another one heats the air in my room, while another one monitors my doors and windows for unauthorized entry and another one keeps my food cool and another one pumps the Radon gas out of my basement and another one scoops my cat's poop.
You’re kind of missing the point a bit. Yes, machines are everywhere but the details are very different.
The machines don’t magically do that stuff for you. You have to buy them, plug them in, turn them on and off. Lots of people don’t have any at all. They can’t do most things unsupervised. There are still lots and lots of tasks for which a machine exists, that people will still do entirely manually
There is a naivety to these predictions that is chipped away by the mundane details of having to exist in the real world. Cost, effort etc
No, AI has not "already" won. And phrasing it as you do, "It's taking over. It might be a year or two, or five, or ten" is an admission of that.
People may indeed not pause, but there's never any guarantee that the next step of progress is possible; whatever we reach may be all we can do, and we'll only find out when we get there. Or it might go hyperbolic and give us everything.
I'm not certain, but I suspect Jevons paradox is probably the wrong thing to bring up here, that's about cheaper stuff revealing more latent demand, and sure, that's possible and it may reveal a latent demand for everyone to build their own 1:1 scale model of the USS Enterprise (any of them) as a personal home, but we may also find that AI ends the economic incentives for consumerism which in turn remove a big driver to constantly have more stuff and demand goes down to something closer to a home being a living yurt made out of genetically modified photovoltaic vines that also give us unlimited free food.
(I mean, if we're talking about the AI future, why not push it?)
What I do think is worth bringing up is comparative advantage: again, this is just an "I think", I'm absolutely not certain here, but if AI can supply all demand at unlimited volumes*, I think the assumptions behind comparative advantage break.
> It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
Yes, and I think they've also not even managed to figure out the internet yet.
* and AI may well be able to, even if all models collectively "only" reach the equivalent of a fully-rounded human of IQ 115; and yes I know IQ tests are dodgy, but we all know what they approximate, by "fully rounded" I mean that thing their steel-man form tries to approach, not test passing itself which would have the AI already beat that IQ score despite struggling with handling plates in a dishwasher.
OP comment is not clever
i think i need more patience -- i seem to fall into a certain tone due to my low expectations, and it's likely a self-fulfilling process which i am complicit in
Ah, the classic, forever-untestable "it's just around the corner" hypothesis.
I've lived through multiple "it's gonna be over in 12-18 months" arguments since November 2022. It's a truism for any technology to say that it's going to get better over time. But if you're convinced that "AI has already won", why not make a specific prediction? What jobs are going to be obsolete by when?
Jevons paradox was never relevant to cognitive surplus. That isn't what it's about.
Cognitive surplus only strengthens Jevons paradox. Humans are a competitive advantage for businesses in a world dominated by human needs.
1. Brick and mortar is dead.
2. The internet will die.
3. What is the business model? (this one still seems to exist to this day to some extent, lol)
Reality fell between 1 and 2.
https://en.wikipedia.org/wiki/Eternal_September
Just because it was wrong once doesn't mean it's never wrong. And was it really that wrong? The internet is great, but would it be the worst thing in the world if we didn't live our lives around it?
Published AI generated code is a mild negative signal for quality, but certainly not a fatal one.
Published AI generated English writing is worthless and should be automatically ignored.
You'd probably put me into that bucket, although I'd disagree. I'm not at all against using AI to do something like: type up a high level summary of a product featureset for an executive that doesn't require deep technical accuracy.
What I AM against is: "summarize these million datapoints into an output I can consume".
Why? Because the number of times I've witnessed it in the last year alone - someone using AI to build out their QBR deck or financial forecast, only to find out the AI completely hallucinated the numbers - makes my brain break. If I can't trust it to build an accurate graph of hard numbers without literally double checking all of its work, why would I bother in the first place?
In the same way, if you tell me you've got this amazing dataset that AI has built for you, my first thought is: I trust that about as much as the Iraqi Information Minister, because I've seen first hand the garbage output from supposedly the best AI platforms in the world.
*And to be clear: I absolutely think businesses across the board are replacing people with AI, and they can do so. And I also think it'll take 18+ months for someone to start asking questions only for them to figure out they've been directing the future of their company on garbage numbers that don't reflect reality.
I think LLMs are the equivalent of someone with a PhD in English literature and a few other things, and can be very intelligent and literate without being particularly good with numbers.
On the other hand, you have plenty of machine learning models that are absolute beasts at everything number-related. I'm assuming you wouldn't put George RR Martin in charge of building your datasets.
If I were in need of hard analytics you can be damn sure I'd have it build a tool with a solid suite of tests following a rigorous process to ensure the outputs are sound. That's the difference between engineering and vibing.
Could you elaborate on this? Is it just a claim, or is there some consensus out there based on something that it doesn't/shouldn't apply?
So... What exactly are you talking about?
Whether people are adopting AI or not, everybody doing the same kind of job gets the same number for exposure to AI.
You can claim that AI is creating a Jevons paradox situation and making companies hire like crazy the people it nominally replaces. But then you would have to point to an instance of that happening, because it's clearly not there either.
a. "Has already won"
b. "Might be a year or two, or five, or ten"
I use AI every day as part of my work, and it's very unclear to me where it's going; we have no idea if we're on an exponential or an S-curve. Now, normally people talk with conviction because they have more data. But one of the breakthroughs of crypto was this social convention of just having very strong opinions based on nothing. A lot of that culture has come over to AI.
Your comment typifies this, it's all about I need to get on board, AI has already won, you've got an advantage over me because you realise this.
Go back, look at the actual article you're commenting on. Did the AI analysis of job exposure provide anything of value? I'm not totally convinced it did, and you didn't even think about it. What critical thinking did you do about the data that came out of this dashboard?
However I was completely unimpressed with this tool when I saw it this weekend for two reasons:
The first is directly related to how this is built:
> These are rough LLM estimates, not rigorous predictions.
This visualization is neat (well except for reason number two), but it's pretty much just AI slop repackaged. There's no substance behind any of these predictions. Now I'm perfectly open to the critique that normal BLS predictions are also potentially slop, but I don't see how this is particularly valuable.
And the second: like 8% of the male population, I'm colorblind, so I can't read this chart.
For the record, I do agentic coding pretty much everyday, have shipped AI products, done work in AI research, etc.
Ironically, it's comments like yours that keep me the most skeptical. The fact that an attack on a strawman is the top comment really makes me feel like there is some sort of true mania here that I might even be a bit caught up in.
Over the past year (where Opus has supposedly changed the game), we're seeing ~10% more job postings for software developers compared to this time last year [1,2]
A huge amount of our work is not easily verifiable, therefore it's extremely hard to actually train an LLM to be better at it. It doesn't magically get better across the board.
AI HAS WON. SURF OR DROWN. YOU DONT KNOW WHATS COMING!!!?!?!
Stop with this doomer drivel. It's sick. It's not based in reality and all it does is stress innocent people out for no reason.
1: https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE
2: https://trueup.io/job-trend
AI is great for searching, I'll give you that. And that itself is a big deal. In software development, there is also real value provided by AI if you use it for code reviews. But I am not sure how much it would be worth if you have to retrain a model with new information just to give better search results and code reviews.
Maybe that will be subsidized by all the people like you who want everything to be done by AI, for the rest of us to use it as a better search tool and use it for quick reviews..who knows!
I think AI is not going anywhere.
I also don't think the future will play out as you envision. AI is a very poor replacement for humans.
And I say this as a misanthrope who doesn't have a particular beef against AI.
brainbroken by chatbots lmao
I could understand if all the naysayers doing old fashioned stuff like work all of a sudden have no more work to do. But the AI Embracers will have what, in comparison? Five years of experience manipulating large language models that are smarter than them by a thousand fold?
Man.. I suggest you touch some grass. You are living in a bubble.
This cuts both ways...
> there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term
What work do you think AI is going to replace? There are whole categories of people who are going to drown in the hubris of "AI being able to do the job" when it can't.
The moment one stops pretending that it's going to be AI, that we're getting AGI, and views it as another tool, the perspective changes. Strip away the hype and there is a LOT there... The walls of the garden are gonna get ripped down (agents force the web open, and create security issues). They end lots of dark patterns: you can't make your crappy service hard to cancel, because an agent is more persistent than that. One-size-fits-all software is going to face a reckoning (how many things are jammed into Salesforce sideways... that don't have to be). These things are existential threats to how our industry is TODAY, and no one seems to be talking about the impact to existing business models when the overhead of building software gets cut in half (and how it leads to more software, not less).
Companies Are Laying Off Workers Because of AI’s Potential - Not Its Performance - https://news.ycombinator.com/item?id=47401368 - March 2026
> Some companies that announced large headcount reductions because of AI have since revised their talent strategies or have faced public criticism. Klarna, for example, the Swedish fintech that offers “buy now, pay later” e-commerce loans, reduced its human workforce by 40% between December 2022 and December 2024 as it invested in AI. (The company used a hiring freeze and natural attrition, not layoffs to achieve this cut.) But in 2025 the company’s CEO told Bloomberg that Klarna was reinvesting in human support, explaining that prioritizing lower costs had also led to “lower quality.” A spokesman told HBR that the company has hired about 20 people to deal with customer service cases the AI assistant can’t handle, and that the use of AI “changes the profile of the human agents you need in the customer support role.” The language-learning company Duolingo announced that AI would be used to replace many human contractors, and it faced considerable criticism on social media.
> For one, AI typically performs specific tasks and not entire jobs. As an example, Nobel laureate Geoffrey Hinton stated in 2016 that it was “completely obvious” that AI would outperform human radiologists within five years. A decade later, there is no evidence that a single radiologist has lost a job to AI—in part because radiologists perform many tasks other than reading scan images. Indeed, there is a substantial shortage of them.
The 'AI-Washing' of Job Cuts Is Corrosive and Confusing - https://news.ycombinator.com/item?id=47401499 - March 2026
* Companies are "AI washing" layoffs, blaming artificial intelligence for workforce reductions they would have made anyway, according to OpenAI CEO Sam Altman.
* A Resume.org survey found that 59% of hiring managers say they emphasize AI's role in layoffs because it "is viewed more favorably by stakeholders than saying layoffs or hiring freezes are driven by financial constraints".
* The stated reason for the layoff matters more than the fact of the layoff, and framing cuts as proactive restructuring around AI can result in a valuation boost, even if the technology doesn't actually work.
> The AI premium isn’t even reliable. By late 2025, Goldman Sachs group Inc. found that investors were actually punishing AI-attributed layoffs, with shares falling an average of 2%. The analysts concluded that investors simply didn’t believe the companies. But Block’s surge shows the incentive hasn’t vanished. It’s just a lottery instead of a sure thing. And executives keep buying tickets.
> The broader data confirms the gap between narrative and reality. A National Bureau of Economic Research study published in February surveyed thousands of C-suite executives across the US, UK, Germany and Australia. Almost 90% said AI had zero impact on employment over the past three years. Challenger, Gray & Christmas tracked 1.2 million layoffs in 2025, and AI was cited in fewer than 55,000 of them. That’s 4.5%. Plain old “market and economic conditions” accounted for four times as many.
So! Sophisticated capital market participants don't believe this; why do people here?
AI is making CEOs delusional [video] - https://www.youtube.com/watch?v=Q6nem-F8AG8
https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
It's been several years and nothing has changed except the AI grift is crumbling as we get out of the post-covid slump.