Agree with OP. This reminds me of fast food in the 90s. Executives rationalized selling poison as "if I don't, someone else will" and they were right until they weren't.
Society develops antibodies to harmful technology, but it happens generationally. We're already starting to view TikTok the way we view McDonald's.
But don't throw the baby out with the bath water. Most food innovation is net positive but fast food took it too far. Similarly, most software is net positive, but some apps take it too far.
Perhaps a good indicator of which companies history will view negatively are the ones where there's a high concentration of executives rationalizing their behavior as "it's inevitable."
Agree and disagree. It is also possible to take a step back and look at the very large picture and see that these things actually are somewhat inevitable. We do exist in a system where "if I don't do it first, someone else will, and then they will have an advantage" is very real and very powerful. It shapes our world immensely. So, while I understand what the OP is saying, in some ways it's like looking at a river of water and complaining that the water particles are moving in a direction that the levees pushed them. The levees are actually the bigger problem.
I think a more accurate and more useful framing is:
Game theory is inevitable.
Because game theory is just math, the study of how independent actors react to incentives.
The specific examples called out here may or may not be inevitable. It's true that the future is unknowable, but it's also true that the future is made up of 8B+ independent actors and that they're going to react to incentives. It's also true that you, personally, are just one of those 8B+ people and your influence on the remaining 7.999999999B people, most of whom don't know you exist, is fairly limited.
If you think carefully about those incentives, you actually do have a number of significant leverage points with which to change the future. Many of those incentives are crafted out of information and trust, people's beliefs about what their own lives are going to look like in the future if they take certain actions, and if you can shape those beliefs and that information flow, you alter the incentives. But you need to think very carefully, on the level of individual humans and how they'll respond to changes, to get the outcomes you want.
The statement "Game theory is inevitable. Because game theory is just math, the study of how independent actors react to incentives." implies that the "actors" are humans. But that's not what game theory assumes.
Game theory just provides a mathematical framework to analyze outcomes of decisions when parts of the system have different goals. Game theory does not claim to predict human behavior (humans make mistakes, are driven by emotion and often have goals outside the "game" in question). Thus game theory is NOT inevitable.
Yes, game theory is not a predictive model but an explanatory/general one. Additionally not everything is a game, as in statistics, not everything has a probability curve. They can be applied speculatively to great effect, but they are ultimately abstract models.
1) Identify coordination failures that lock us into bad equilibria, e.g. it's impossible to defect from the online ads model without losing access to a valuable social graph
2) Look for leverage that rewrites the payoffs for a coalition rather than for one individual: right-to-repair laws, open protocols, interoperable standards, fiduciary duty, reputation systems, etc.
3) Accept that heroic non-participation is not enough. You must engineer a new Schelling point[1] that makes a better alternative the obvious move for a self-interested majority
TLDR, think in terms of the algebra of incentives, not in terms of squeaky wheelism and moral exhortation
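Points 2 and 3 can be sketched as a toy coordination game. This is a minimal illustration, not a model of any real market: the payoff numbers, the "ads"/"open" labels, and the intervention are all made up.

```python
# Toy 2-player coordination game: a coalition-level change to the payoffs
# moves the focal point. All numbers are illustrative.

def best_response(options, other_choice, payoff):
    # What a self-interested player picks, given the other's choice.
    return max(options, key=lambda a: payoff[(a, other_choice)])

options = ["ads", "open"]

# Status quo: coordinating on the ads platform pays more, so (ads, ads)
# is an equilibrium nobody defects from alone.
payoff = {
    ("ads", "ads"): 3, ("ads", "open"): 1,
    ("open", "ads"): 0, ("open", "open"): 2,
}
assert best_response(options, "ads", payoff) == "ads"

# An intervention (interoperability mandate, open protocol, etc.) rewrites
# the payoff for the whole coalition at once, not for one defector.
payoff[("open", "open")] = 4
assert best_response(options, "open", payoff) == "open"
# Both coordination points are still equilibria, but (open, open) is now
# payoff-dominant: the new obvious move, i.e. the engineered Schelling point.
assert payoff[("open", "open")] > payoff[("ads", "ads")]
```

The point of the sketch: heroic unilateral defection (playing "open" while everyone else plays "ads") still pays 0; only changing the payoff structure itself makes the better option the self-interested one.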
Perhaps you don't intend this, but I read you as implying that game theory's inevitability leads to the inevitability of many of the things the author claims aren't inevitable.
To me, this inevitability only is guaranteed if we assume a framing of non-cooperative game theory with idealized self-interested actors. I think cooperative game theory[1] better models the dynamics of the real world. More important than thinking on the level of individual humans is thinking about the coalitions that have a common interest to resist abusive technology.
> I think cooperative game theory[1] better models the dynamics of the real world.
If cooperative coalitions to resist undesirable abusive technology models the real world better, why is the world getting more ads? (E.g. One of the author's bullet points was, "Ads are not inevitable.")
Currently in the real world...
- Ad frequency goes up: more ad interruptions in TV shows, native ads embedded in podcasts, sponsor segments in YouTube vids, etc
- Ad space goes up: ads on refrigerator screens, gas pump touch screens, car infotainment systems, smart TVs, Google Search results, ChatGPT UI, computer-generated virtual ads in sports broadcasts overlaid on courts and stadiums, etc
What is the cooperative coalition that makes "ads not inevitable"?
I'll try and tackle this one. I think the world is getting more ads because Silicon Valley and its Anxiety Economy are putting a thumb on the scale.
For the entirety of the 2010's we had SaaS startups invading every space of software, for a healthy mix of better and worse, and all of them (and a number even today) are running the exact same playbook, boiled down to broad terms: burn investor money to build a massive network-effected platform, and then monetize via attention (some combo of ads, user data, audience reach/targeting). The problem is thus: despite all these firms collecting all this data (and tanking their public trust by both abusing it and leaking it constantly) for years and years, we really still only have ads. We have specifically targeted ads, down to downright abusive metrics if you're inclined and lack a soul or sense of ethics, but they are and remain ads. And each time we get a better targeted ad, the ones that are less targeted go down in value. And on and on it has gone.
Now, don't misunderstand, a bunch of these platforms are still perfectly fine business-wise because they simply show an inexpressible, unimaginable number of ads, and even if they earn shit on each one, if you earn a shit amount of money a trillion times, you'll have billions of dollars. However it has meant that the Internet has calcified into those monolith platforms that can operate that way (Facebook, Instagram, Google, the usuals) and everyone else either gets bought by them or they die. There's no middle-ground.
All of that to say: yes, on balance, we have more ads. However the advertising industry in itself has never been in worse shape. It's now dominated by those massive tech companies to an insane degree. Billboards and other such ads, which were once commonplace, are now solely the domain of ambulance-chasing lawyers and car dealerships. TV ads are no better, production value has tanked, they look cheaper and shittier than ever, and the products are solely geared to the boomers because they're the only ones still watching broadcast TV. Hell, many are straight up shitty VHS replays of ads I saw in the fucking 90's, it's wild. We're now seeing AI video and audio dominate there too.
And going back to tech, the platforms stuff more ads into their products than ever and yet, they're less effective than ever. A lot of younger folks I know don't even bother with an ad-blocker, not because they like ads, but simply because they've been scrolling past them since they were shitting in diapers. It's just the background wallpaper of the Internet to them, and that sounds (and is) dystopian, but the problem is nobody notices the background wallpaper, which means despite the saturation, ads get less attention than ever before. And worse still, the folks who don't block cost those ad companies the impressions and resources it takes to serve ads that are being ignored.
So, to bring this back around: the coalition that makes ads "inevitable" isn’t consumers or creators, it's investors and platforms locked into the same anxiety‑economy business model. Cooperative resistance exists (ad‑blockers, subscription models, cultural fatigue), but it’s dwarfed by the sheer scale of capital propping up attention‑monetization. That’s why we see more ads even as they get less effective.
I'll just take the very first example on the list, Internet-enabled beds.
Absolutely a cooperative game - nobody was forced to build them, nobody was forced to finance them, nobody was forced to buy them. These were all willing choices, all going in the same direction. (Same goes for many of the other examples)
Game theory is not inevitable, neither is math. Both are attempts to understand the world around us and predict what is likely to happen next given a certain context.
Weather predictions are just math, for example, and they are always wrong to some degree.
Because the models aren't sophisticated enough (yet). There's no voodoo here.
I'm always surprised how many 'logical' tech people shy away from simple determinism, given how obvious a deterministic universe becomes the more time you spend in computer science, and seem to insist there's some sort of metaphysical influence out there somewhere we'll never understand. There's not.
Math is almost the definition of inevitability. Logic doubly so.
Once there's a sophisticated enough human model to decipher our myriad of idiosyncrasies, we will all be relentlessly manipulated, because it is human nature to manipulate others. That future is absolutely inevitable.
Might as well fall into the abyss with open arms and a smile.
But the world is not deterministic, inherently so. We know it's probabilistic at least at small enough scales. Most hidden variable theories have been disproven, and to the best of our current understanding the laws of the physical universe are probabilistic in nature (i.e. the Standard Model). So while we can probably come up with a very good probabilistic model of things that can happen, there is no perfect prediction, or rather, there cannot be.
> Because the models aren't sophisticated enough (yet). There's no voodoo here.
Idk if that's true.
Navier–Stokes may yet be proven Turing-undecidable, and fluid dynamics are in any case chaotic enough that we can never completely forecast them, no matter how good our measurements are.
Inside the model, the Navier–Stokes equations have at least one positive Lyapunov exponent. No quantum computer can out-run an exponential once the exponent is positive.
And even if we could measure every molecule with infinitesimal resolution, the atmosphere is an open system injecting randomness faster than we can assimilate it. Probability densities shred into fractal filaments (the butterfly effect), making pointwise prediction meaningless beyond the Lyapunov horizon.
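The "positive Lyapunov exponent" claim is easy to demonstrate on the simplest chaotic system. This is only an illustration of the general mechanism, not of Navier–Stokes itself: the logistic map at r = 4 is a standard chaotic toy model, and the 1e-12 perturbation is an arbitrary stand-in for measurement error.

```python
# The logistic map x -> 4x(1-x) is chaotic: two initial conditions that
# differ by 1e-12 diverge roughly exponentially until the gap is as large
# as the values themselves, so precision only delays the forecast horizon.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12   # same state up to a tiny "measurement error"
max_gap = 0.0
for step in range(100):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# Within a few dozen steps the 1e-12 gap has blown up to order one;
# each extra digit of precision buys only a handful of extra steps.
assert max_gap > 0.1
```

Since the separation roughly doubles each step, halving the initial error buys exactly one more predictable step: that is the "can't out-run an exponential" point in concrete form.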
There is strong reason to expect evolution to have favored control systems that are complex and changing, for this very reason: so they can't easily be gamed (and eaten).
I think it's hubris to believe that you can formulate the correct game-theoretic model to make significant statements about what is and is not inevitable.
Game theory is only as good as the model you are using.
Now couple the fact that most people are terrible at modeling with the fact that they tend to ignore implicit constraints… the result is something resembling less a science and more a religion.
The concept of Game Theory is inevitable because it's studying an existing phenomenon. Whether or not the researchers of Game Theory correctly model that is irrelevant to whether the phenomenon exists or not.
The models such as Prisoner's Dilemma are not inevitable though. Just because you have two people doesn't mean they're in a dilemma.
---
To rephrase this, Technology is inevitable. A specific instance of it (ex. Generative AI) is not.
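The point that "two people doesn't mean a dilemma" is just a statement about payoff orderings, and can be made concrete. The numbers below are illustrative, not from any source:

```python
# The "dilemma" in the Prisoner's Dilemma is a property of the payoff
# ordering, not of there being two players. Change the numbers and the
# same two actors face no dilemma at all.

def is_dilemma(payoffs):
    # Classic PD ordering for one player: temptation > reward >
    # punishment > sucker (T > R > P > S), which makes defection dominant
    # even though mutual cooperation would pay both players more.
    T, R, P, S = payoffs
    return T > R > P > S

assert is_dilemma((5, 3, 1, 0))      # textbook PD numbers
assert not is_dilemma((2, 3, 1, 0))  # temptation weakened: no dilemma
```

Which ordering actually holds in a given real-world interaction is an empirical question about the situation, not something game theory itself dictates.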
> if you can shape those beliefs and that information flow, you alter the incentives
Selective information dissemination, persuasion, and even disinformation are for sure the easiest ways to change the behaviors of actors in the system. However, the most effective and durable way to "spread those lies" is for them to be true!
If you can build a technology which makes the real facts about those incentives different than what it was before, then that information will eventually spread itself.
For me, the canonical example is the story of the electric car:
All kinds of persuasive messaging, emotional appeals, moral arguments, and so on have been employed to convince people that it's better for the environment to drive an electric car than a polluting, noisy, smelly, gas-guzzling internal-combustion SUV. Through the 90s and early 2000s, a small number of early adopters and environmentalists picked up niche products and hybrids for the reasons that were persuasive to them, another slice of society deleted their catalytic converters and "rolled coal" in their diesels for their own reasons, and the average consumer kept driving an ICE vehicle somewhere in the middle of the status quo.
Then lithium battery technology and solid-state inverter technology arrived in the 2010s and the Tesla Model S was just a better car - cheaper to drive, more torque, more responsive, quieter, simpler, lower maintenance - than anything the internal combustion engine legacy manufacturers could build. For the subset of people who can charge in their garage at home with cheap electricity, the shape of the game had changed, and it's been just a matter of time (admittedly a slow process, with a lot of resistance from various interests) before EVs were simply the better option.
Similarly, with modern semiconductor technology, solar and wind energy no longer require desperate pleas from the limited political capital of environmental efforts, it's like hydro - they're just superior to fossil fuel power plants in a lot of regions now. There are other negative changes caused by technology, too, aided by the fact that capitalist corporations will seek out profitable (not necessarily morally desirable) projects - in particular, LLMs are reshaping the world just because the technology exists.
Once you pull a new set of rules and incentives out of Pandora's box, game theory results in inevitable societal change.
In a world ruled by game theory alone, marketing is pointless. Everyone already makes the most rational choice and has all the information, so why appeal to their emotions, build brand awareness, or even tell them about your products? Yet companies spend a lot of money on marketing, and game theory tells us that they wouldn't do that without reason.
Game theory makes a lot of simplifying assumptions. In the real world most decisions are made under constraints, and you typically lack a lot of information and can't dedicate enough resources to each question to find the optimal choice given the information you have. Game theory is incredibly useful, especially when talking about big, carefully thought out decisions, but it's far from a perfect description of reality
> Game theory makes a lot of simplifying assumptions.
It does because it's trying to get across the point that although the world seems impossibly complex it's not. Of course it is in fact _almost_ impossibly complex.
This doesn't mean that it's useless for more complex situations; it only means that to increase its accuracy you have to deepen the model.
Game theory is a model that's sometimes accurate. Game theorists often forget that humans are bags of thinking meat, and that our thinking is accomplished by goopy electrochemical processes
Brains can and do make straight-up mistakes all the time. Like "there was a transmission error"-type mistakes. They can't be modeled or predicted, and so humans can never truly be rational actors.
Humans also make irrational decisions all the time based on gut feeling and instinct. Sometimes with reasons that a brain backfills, sometimes not.
People can and do act against their own self-interest all the time, and not for "oh, but they actually thought X" reasons. Brains make unexplainable mistakes. Have you ever walked into a room and forgotten what you went in there to do? That state isn't modelable with game theory, and it generalizes to every aspect of human behavior.
These models are at best an attempt to use our tools of reason and observation to predict nature, and you can point to thousands of examples, from market crashes to election outcomes, of how they can be flawed and fail to predict.
I do partly disagree, because game theory is based on an economic and (as also mentioned) reductionist view of the human, namely homo oeconomicus. That model rests on the bold assumptions of a few men in history who asserted that we all act with pure egoism and zero altruism, an assertion that is nowadays heavily critiqued and can be challenged.
It is beyond question that game theory is highly useful and simplifies things enough that we can mathematically model interactions between agents, but only under those underlying assumptions. And these assumptions need not be true; in fact, there are studies on how models like homo oeconomicus have become self-fulfilling, by making people think in the ways the model prescribes and adjust to it, rather than the model approximating us as it ideally should. Hence, I don't think you can plainly frame this reality as a product of game theory.
It feels like the only aspect of game theory at work here is opportunity cost. For example, why shouldn't you make AI porn generation software? There are moral reasons against it, but usually, most put them aside because someone else is going to get the bag first. The items on that exhaustive list the author enumerated are all in some way byproducts of the break-things-move-fast-say-sorry-later philosophy. You need ID for the websites because you did not give a shit and wanted to get the porn out there first and foremost. Now you need IDs.
You need to track everyone and everything on the internet because you did not want to cap your wealth at a reasonable price for the service. You are willing to live with accumulated sins because "its not as bad as murder". The world we have today has way more to do with these things than anything else. We do not operate as a collective, and naturally, we don't get good outcomes for the collective.
One person has more impact than you think. Many times it's one person speaking what's on the mind of many, and that speaking out can give the many people sitting on the fence the courage to do what needs to be done. The Andor TV series really taught me that. I'm working on a presentation on surveillance capitalism that I plan to show to my community. It's going to be an interesting future. Some will side with the Empire and others will side with the Rebellion.
The greatest challenge facing humanity is building a culture where we are liberated to cooperate toward the greatest goals without fear of another selfish individual or group taking advantage to our detriment.
Yes, the mathematicians will tell you it's "inevitable" that people will cheat and "enshittify". But if you take statistical samplings of the universe from an outsider's perspective, you would think it would be impossible for life to exist. Our whole existence is built on disregard for the inevitable.
Reducing humanity to a bunch of game-theory optimizing automatons will be a sure-fire way to fail The Great Filter, as nobody can possibly understand and mathematically articulate the larger games at stake that we haven't even discovered.
> Game theory is inevitable.
> Because game theory is just math, the study of how independent actors react to incentives.
That's not how mathematics works. "It's just math, therefore it's a true theory of everything" is silly.
We cannot forget that mathematics is all about models, models which, by definition, do not account for even remotely close to all the information involved in predicting what will actually occur in reality. Game Theory is a theory about a particular class of mathematical structures. You cannot reduce all of existence to just this class of structures, and if you think you can, you'd better be ready to write a thesis on it.
Couple that with the inherent unpredictability of human beings, and I'm sorry but your Laplacean dreams will be crushed.
The idea that "it's math so it's inevitable" is a fallacy. Even if you are a hardcore mathematical Platonist you should still recognize that mathematics is a kind of incomplete picture of the real, not its essence.
In fact, the various incompleteness theorems illustrate directly, in mathematics' own terms, that the idea that a mathematical perspective or any logical system could perfectly account for all of reality is doomed from the start.
Game theory applied to the world is a useful simplification; reality is messy. In reality:
* Actors have access to limited computation
* The "rules" of the universe are unknowable and changing
* Available sets of actions are unknowable
* Information is unknowable, continuous, incomplete, and changes based on the frame of reference
* Even the concept of an "Actor" is a leaky abstraction
There's a field of study called Agent-based Computational Economics which explores how systems of actors behaving according to sets of assumptions behave. In this field you can see a lot of behaviour that more closely resembles real world phenomena, but of course if those models are highly predictive they have a tendency to be kept secret and monetized.
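In the agent-based spirit, here is a minimal sketch (illustrative only, not any published ACE model): boundedly rational agents who imitate better-off peers instead of solving for an equilibrium. The payoffs, population size, and imitation rule are all assumptions chosen for clarity.

```python
import random

# Agents repeatedly choose "C" or "D", then imitate a randomly sampled
# peer who scored higher last round, with rare random experimentation.
# The aggregate outcome emerges from local, myopic updates.

random.seed(0)
N, ROUNDS = 100, 200
agents = ["C"] * 70 + ["D"] * 30  # start with a cooperative majority

def score(choice, population):
    # Assumed payoff with a network effect: "C" pays in proportion to
    # how many others also chose "C"; "D" pays a flat 1.0 regardless.
    frac_c = population.count("C") / len(population)
    return 2.0 * frac_c if choice == "C" else 1.0

for _ in range(ROUNDS):
    scores = [score(a, agents) for a in agents]
    for i in range(N):
        j = random.randrange(N)
        if scores[j] > scores[i]:
            agents[i] = agents[j]                  # imitate better-off peer
        if random.random() < 0.01:
            agents[i] = random.choice(["C", "D"])  # rare experimentation
```

Because cooperating only pays above a tipping point (half the population here), the same rules started below that threshold would tip the other way: path dependence and lock-in, which closed-form equilibrium analysis tends to flatten out.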
So for practical purposes, "game theory is inevitable" is only a narrowly useful heuristic. It's certainly not a heuristic that supports technological determinism.
I mean, in an ideal system we would have political agency greater than the sum of individuals, which would put pressure on/curtail the rise of abusive actors taking advantage of power and informational asymmetry to try and gain more power (wealth) and influence (wealth) in order to gain more wealth.
What I'm reading here, then, is that those 7.999999999B others are braindead morons.
OP is 100% correct. Either you accept that the vast majority are mindless automatons (not hard to get on board with that, honestly, but it still seems an overestimate), or there's some kind of structural imbalance, an asymmetry that's actively harmful and not the passive outcome of 8B independent actors.
I do disagree that some of these were not inevitable. Let me deconstruct a couple:
> Tiktok is not inevitable.
TikTok the app and company: not inevitable. Short-form video as the medium, and an algorithm that samples the entire catalog (vs just followers): inevitable. Short-form video follows the gradual escalation of the most engaging content formats, with a legacy stretching from short-form text on Twitter to short-form photos on Instagram and Snapchat. Global content discovery is the natural next experiment after the extended follow graph.
> NFTs were not inevitable.
Perhaps Bitcoin as proof-of-work productization was not inevitable (for a while), but once we got there, a lot of things were very much inevitable. Explosion of alternatives like with Litecoin, explosion of expressive features, reaching Turing-completeness with Ethereum, "tokens" once we got to Turing-completeness, and then "unique tokens" aka NFTs (but also colored coins in Bitcoin parlance before that). The cultural influence was less inevitable, massive scam and hype was also not inevitable... but to be fair, likely.
I could deconstruct more, but the broader point is: coordination is hard. All these can be done by anyone: anyone could have invented Ethereum-like system; anyone could have built a non-fungible standard over that. Inevitability comes from the lack of coordination: when anyone can push whatever future they want, a LOT of things become inevitable.
The author doesn't mean that the technologies weren't inevitable in the absolute sense. They mean that it was not inevitable that anyone should use those technologies. It's not inevitable that they will use Tiktok, and it is not inevitable for anyone; I've never used Tiktok, so the author is right in that regard.
If you want to disavow short-form video as a medium altogether, something I'm strongly considering, then you can. It does mean you have to make sacrifices: for example, Youtube doesn't let you disable their short-form video feature, so it is inevitable for people who choose not to drop Youtube. That is still a choice though, so it is not truly inevitable.
The larger point is that there are always people pushing some sort of future, sketching it as inevitable. But the reality is that there always remains a choice, even if that choice means you have to make sacrifices.
The author is annoyed at people throwing in the towel and declaring AI inevitable, when the author apparently still sees a path to not tolerating AI. Unfortunately the author doesn't really constructively show that path, so the whole article is basically a luddite complaint.
Re Tiktok, what is definitely not inevitable is the monetization of human attention. It's only a matter of policy. Without it the incentives to make Tiktok would have been greatly reduced, if even economically possible at all.
> what is definitely not inevitable is the monetization of human attention. It's only a matter of policy. Without it the incentives to make Tiktok would have been greatly reduced, if even economically possible at all.
This is not a new thing. TV monetizes human attention. Tiktok is just an evolution of TV. And Tiktok comes from China, which has a very different society. If short-form algo-slop video can thrive in both liberal democracies and a heavily censored society like China, then it's probably somewhat inevitable.
This appears to be overthinking it: sure it's inevitable that when zero trust systems are shown to be practicable, they will be explored. But, like a million other ideas that nobody needed to spend time on, selling NFTs should've been relegated to obscurity far earlier than what actually happened.
> Perhaps Bitcoin as proof-of-work productization was not inevitable (for a while), but once we got there, a lot of things were very much inevitable. Explosion of alternatives like with Litecoin, explosion of expressive features, reaching Turing-completeness with Ethereum, "tokens" once we got to Turing-completeness, and then "unique tokens" aka NFTs (but also colored coins in Bitcoin parlance before that). The cultural influence was less inevitable, massive scam and hype was also not inevitable... but to be fair, likely.
The only way I can get to the "crypto is inevitable" take relies on the scams and fraud as the fundamental drivers. These things don't have any utility otherwise and no reason to exist outside of those.
Scams and fraud are such potent drivers that perhaps it was inevitable, but one could imagine a more competent regulatory regime that nipped this stuff in the bud.
nb: avoiding financial regulations and money laundering are forms of fraud
> The only way I can get to the "crypto is inevitable" take relies on the scams and fraud as the fundamental drivers.
The idea of a cheap, universal, anonymous digital currency itself is old (e.g. eCash and Neuromancer in the '80s, Snow Crash and Cryptonomicon in the '90s).
It was inevitable that someone would try implementing it once the internet was widespread - especially as long as most banks are rent-seeking actors exploiting those relying on currency exchanges, as long as many national currencies are directly tied to failing political and economic systems, and as long as the un-banking and financial persecution of undesirables remains a threat.
Doing it so extremely decentralized, with the whole proof-of-work shtick tacked on top, was not inevitable and arguably not a good way to do it; nor was the cancer that has grown on top of it all...
I think you could say it's inevitable because of the size of both the good AND bad opportunities. Agree with you and the original point of the article that there COULD be a better way. We are reaping tons of bad outcomes across social media, crypto, and AI, due to poor leadership (from every side really).
Imagine new coordination technology X. We can remove any specific tech reference to remove prior biases. Say it is a neutral technology that could enable new types of positive coordination as well as negative.
3 camps exist.
A: The grifters. They see the opportunity to exploit and individually gain.
B: The haters. They see the grifters and denigrate the technology entirely. Leaving no nuance or possibility for understanding the positive potential.
C: The believers. They see the grift and the positive opportunity. They try and steer the technology towards the positive and away from the negative.
The basic formula for where the technology ends up is -2(A)-(B) +C. It's a bit of a broad strokes brush but you can probably guess where to bin our current political parties into these negative categories. We need leadership which can identify and understand the positive outcomes and push us towards those directions. I see very little strength anywhere from the tech leaders to politicians to the social media mob to get us there. For that, we all suffer.
> These things don't have any utility otherwise and no reason to exist outside of those.
Lol. Permissionless payments certainly have utility. Making it harder for governments to freeze/seize your assets has utility. Buying stuff the government disallows, often illegitimately, has value. Currency that can't be inflated has value.
And outside of pure utility, they have tons of ideological reasons to exist beyond scams and fraud. Your inability to imagine those, or your dismissal of them, is telling as to your close-mindedness.
It was all inevitable, by definition, as we live in a deterministic universe (shock, I know!)
But further, the human condition has been developing for tens of thousands of years, and efforts to exploit the human condition for a couple of thousand (at least), and yet we expect that a technology around for a fraction of that time would escape all of the inevitable 'abuses' of it?
What we need to focus on is mitigation, not lament that people do what people do.
The point is that regulation could have made Bitcoin and NFTs never cause the harm they have inflicted and will inflict, but the political will is not there.
> Short form video as the medium, and algorithm that samples entire catalog (vs just followers) were inevitable.
I doubt that. There is a reason the videos are getting longer again.
So people could have ignored the short form from the beginning. And wasn't the matching algorithm the real killer feature that amazed people, not the length of the videos?
I've got a hypothesis that the reason short-form video like TikTok became dominant is the decline in reading instruction (e.g. use of whole-word instruction over phonics) that started in 1998-2000. The timing largely lines up: the rise of video content started around 2013, just as these kids were entering their teenage years. Media has significant economies of scale and network effects (i.e. it is much more profitable to target the lowest common denominator than any niche group), and so if you get a large number of teenagers who have difficulty with reading, media will adjust to provide them content that they can consume effortlessly.
Anecdotally, I hear lots of people talking about the short attention span of Zoomers and Gen Alpha (which they define as 2012+; I'd actually shift the generation boundary to 2017+ for the reasons I'm about to mention). I don't see that with my kid's 2nd-grade classmates: many of them walk around with their nose in a book and will finish whole novels. They're the first class after phonics was reintroduced in the 2023-2024 kindergarten year; every single kid knew how to read by the end of kindergarten. Basic fluency in skills like reading and math matters.
I recognize this is very anecdotal (your observation and mine), but my Gen Alpha daughter, now approaching her teenage years, always has her head in a book. She also has a very short attention span.
That’s ridiculously US-centric. TikTok is a global phenomenon initiated by a Chinese company. Nothing would be different in the grand scale if there were zero American TikTok users.
Even if that's true, that sub-minute videos are not the apex content, that only goes to prove inevitability. Every idea will be tested and measured; the best-performing ones will survive. There can't be any coordination or consensus like "we shouldn't have that" - the only signal is, "is this still the most performant medium + algorithm mix?"
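That selection dynamic can be sketched as a toy replicator simulation. All the format names and engagement numbers below are invented purely for illustration; the only point is that whichever mix measures best keeps absorbing feed share, with no coordination required:

```python
import random

random.seed(0)

# Hypothetical mean engagement (say, watch-through rate) per format.
# These numbers are made up for illustration only.
formats = {"long_form": 0.40, "short_form": 0.55, "live": 0.35}

# Each format starts with an equal share of feed slots.
share = {name: 1 / len(formats) for name in formats}

for _ in range(50):
    # Each round, measure noisy engagement for each format.
    measured = {n: mean + random.gauss(0, 0.05) for n, mean in formats.items()}
    # Reallocate feed share proportionally to measured engagement
    # (replicator-style update; shares always re-normalize to 1).
    total = sum(share[n] * measured[n] for n in formats)
    share = {n: share[n] * measured[n] / total for n in formats}

best = max(share, key=share.get)
```

Under these assumptions the best-measuring format ends up with essentially the whole feed, even though no one ever decided "we should have short-form video"; the only signal anyone acted on was "is this still the most performant mix?"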
I feel that the argument here hinges on “performant”.
The regulatory, cultural, social, even educational factors surrounding these ideas are what could have made these not inevitable. But changes weren’t made, as there was no power strong enough to enact something meaningful.
I understand artists and others talking about AI in a negative sense: either they don't really get it, or it's against their self-interest, which leads them to subconsciously find bad arguments that support that interest.
However, tech people who think AI is bad, or not inevitable, are really hard for me to understand. It's almost like Bill Gates saying “we are not interested in the internet.” This is pretty much being against the internet, industrialization, the printing press, or mobile phones. The idea that AI is anything less than paradigm-shifting, even revolutionary, is weird to me. I can only conclude that being against it is either self-interest or an inability to grasp it.
So if I produce something (art, a product, a game, a book) and it's good, useful to you, fun to you, beautiful to you, and you cannot really tell whether it's AI, does it matter? How does it matter? Is it because they “stole” all the art in the world? Yet when a person is “influenced” by people, ideas, and art in a less efficient way, we applaud it, because what's the alternative: reinvent the wheel forever?
Big tech senior software engineer working on a major AI product
I totally agree with the message in the original post.
Yes AI is going to be everywhere, and it's going to create amazing value and serious challenges, but it's essential to make it optional.
This is not only for the sake of users' freedom. This is essential for companies creating products.
This is minority report until it is not.
AI has many modes of failure, exploitability, and unpredictability. Some are known and many are not. We have fixes for some and band-aids for others, but many are not even known yet.
It is essential to make AI optional, to have a "dumb" alternative to everything delegated to a gen AI.
These options should be given to users, but also, and maybe even more importantly, be baked into the product as an actively maintained and tested plan-b.
The general trend of cost cutting will not be aligned with this. Many products will remove, intentionally or not, their non-AI paths, and when the AI fails (not if), they will regret that decision.
This is not a criticism of AI or of the shift toward it; it's a warning for anyone who does not take seriously the fundamental unpredictability of generative AI.
Apologies, but I'm copy/pasting a previous reply of mine to a similar sentiment:
Art is an expression of human emotion. When I hear music, I am part of that artist's journey and struggles. The emotion in their songs comes from their first break-up, an argument they had with someone they loved. I can understand that on a profound, shared level.
Way back, my friends and I played a lot of StarCraft. We only played cooperatively against the AI. Until one day a friend and I decided to play against each other. I can't put into words how intense that was. When we were done (we played in different rooms of the house), we got together and laughed. We both knew what the other had gone through. We both said, "man, that was intense!"
I don't get that feeling from an amalgamation of all human thoughts/emotions/actions.
One death is a tragedy. A million deaths is a statistic.
Yet humans are the ones setting an AI to make art (of some kind). Is it therefore not art because, even though a human initiated the process, the machine completed it?
If you argue that, then what about kinetic sculptures, what about pendulum painting, etc? The artist sets them in motion but the rest of the actions are carried out by something nonhuman.
And even in a fully autonomous sense; who are we to define art as being artefacts of human emotion? How typically human (tribalism). What's to say that an alien species doesn't exist, somewhere...out there. If that species produces something akin to art, but they never evolved the chemical reactions that we call emotions...I suppose it's not art by your definition?
And what if that alien species is not carbon-based? Is it therefore much of a stretch to call what an eventual AGI produces art?
My definition of art is a superposition: everything and nothing is art at the same time, because art is in the eye of the beholder. When I look up at the night sky, that's art, but no human emotion produced it.
With a kinetic sculpture, someone went through the effort of designing it to do that. With AI art, sure, you ask it to do something, but a human isn't involved in the creative process in any capacity beyond that.
Sure, but in the case of AI the relationship resembles that of a patron to an art director. We generally don't assign artistry to the person hiring an art director to create artistic output, even if it requires heavy prompting and back-and-forth. I am not bold enough to try to squeeze something as large and fundamental as art into a definition, though I suppose that art does carry something about the craft of using the medium.
At any rate, though there is some aversion to AI art for art's sake, the real aversion is that it squeezes one of the last viable options for people to become 'working artists' and funnels that extremely hard-earned profit into the hands of the conglomerates with enough compute to train generative models. Is making a living through your art something we would like to value and maintain as a society? I'd say so.
So are photos that are edited via Photoshop not art? Are they not art if they were taken on a digital camera? What about electronic music?
You could argue all these things are not art because they used technology, just like AI music or images... no? Where does the spectrum of "true art" begin and end?
I think your view makes sense. On the other hand, Flash revolutionized animation online by allowing artists to express their ideas without having to exhaustively render every single frame, thanks to algorithmic tweening. And yeah, the resulting quality was lower than what Disney or Dreamworks could do. But the ten thousand flowers that bloomed because a wall came down for people with ideas but not time utterly redefined huge swaths of the cultural zeitgeist in a few short years.
I strongly suspect automatic content synthesis will have similar effect as people get their legs under how to use it, because I strongly suspect there are even more people out there with more ideas than time.
I hear the complaints about AI being "weird" or "gross" now and I think about the complaints about Newgrounds content back in the day.
I'm paying infrastructure costs for our little art community while chatbots crawl our servers, ignore robots.txt, and mine our users' work so they can make copies of it. Being told that I just don't get it because this is such a paradigm shift is pretty great.
It matters because the amount of influence something has on you is directly attributable to the amount of human effort put into it. When that effort is removed, so too is the influence. Influence does not exist independently of effort.
All the people yapping about LLM keep fundamentally not grasping that concept. They think that output exists in a pure functional vacuum.
I don't know if I'm misinterpreting the word "influence", but low-effort internet memes have a lot more cultural impact than a lot of high-effort art. Also there's botnets, which influence political voting behaviour.
> low-effort internet memes have a lot more cultural impact than a lot of high-effort art.
Memes only have impact in aggregate, due to emergent properties in a McLuhanian sense. An individual meme has little to no impact compared to (some) works of art.
Well, the LLMs were trained on data that required human effort to write; it's not just random noise. So the result they can give is, indirectly and probabilistically, regurgitated human effort.
Yes, it matters to me because art is something deeply human, and I don't want to consume art made by a machine.
It doesn't matter if it's fun and beautiful, it's just that I don't want to. It's like other things in life I try to avoid, like buying sneakers made by children, or sign-up to anything Meta-owned.
That's pretty much what they said about photographs at first. I don't think you'll find a lot of people who argue that there's no art in photography now.
Asking a machine to draw a picture and then making no changes? It's still art. There was a human designing the original input. There was human intention.
And that's before they continue to use the AI tools to modify the art to better match their intention and vision.
The problem is that LLMs are just parrots who swoop into your house and steal everything, then claim it as theirs. That's not art, that's thievery and regurgitation. To resign oneself that this behavior is ok and inevitable is sad and cowardly.
To conflate LLMs with a printing press or the internet is dishonest; yes, it's a tool, but one which degrades society in its use.
To me, it matters because most serious art requires time and effort to study, ponder, and analyze.
The more stuff that exists in the world that superficially looks like art but is actually meaningless slop, the more likely it is that your time and effort is wasted on such empty nonsense.
If I look at a piece of art that was made by a human who earned money for making that art, then it means an actual real human out there was able to put food on their table.
If I look at a piece of "art" produced by a generative AI that was trained on billions of works from people in the previous paragraph, then I have wasted some electricity even further enriching a billionaire and encouraging a world where people don't have the time to make art.
Yes, but that electricity consumption benefits an actual person.
I'm so surprised that I often find myself having to explain this to AI boosters but people have more value than computers.
If you throw a computer in a trash compactor, that's a trivial amount of e-waste. If you throw a living person in a trash compactor, that's a moral tragedy.
The people who build, maintain, and own the datacenters. The people who work at and own the companies that make the hardware in the datacenters. The people who work to build new power plants to power the data centers. The truck drivers that transport all the supplies to build the data centers and power plants.
One thing I've noticed - artists view their own job as more valuable, more sacred, more important than virtually any other person's job.
They canonize themselves, and then act all shocked and offended when the rest of the world doesn't share their belief.
Obviously the existence of AI is valuable enough to pay the cost of offsetting a few artists' jobs, it's not even a question to us, but to artists it's shocking and offensive.
I think we're all just sick of having everything upended and forced on us by tech companies. This is true even if it is inevitable. It occurred to me lately that modern tech and the modern internet has sort of turned into something which is evil in the way that advertising is evil. (this is aside from the fact of course that the internet is riddled with ads)
Modern tech is 100% about trying to coerce you: you need to buy X, you need to be outraged by X, you must change X in your life or else fall behind.
I really don't want any of this, I'm sick of it. Even if it's inevitable I have no positive feelings about the development, and no positive feelings about anyone or any company pushing it. I don't just mean AI. I mean any of this dumb trash that is constantly being pushed on everyone.
Well you don't, and no tech company can force you to.
> you must change X in your life or else fall behind
This is not forced on you by tech companies, but by the rest of society adopting that tech because they want to. Things change as technology advances. Your feeling of entitlement that you should not have to make any change that you don't want to is ridiculous.
Well, it's not really about AI then, is it; it's about millennia of human evolution and the intrinsically human behaviours we've evolved.
Like greed. And apathy. Those are just some of the things that have enabled billionaires and trillionaires. Is it ever gonna change? Well it hasn't for millions of years, so no. As long as we remain human we'll always be assholes to each other.
> I can only say being against this is either it’s self-interest or not able to grasp it.
So we're just waving away the carbon cost, centralization of power, privacy fallout, fraud amplification, and the erosion of trust in information? These are enormous society-level effects (and there are many more to list).
Dismissing AI criticism as simply ignorance says more about your own.
And you are dismissing all the benefits of AI (e.g. drug research, and in general all the research and development).
Yes, there are dangers associated with AI, but as a society we can deal with them, just as we've managed to deal with microbiology research, nuclear power, and guns.
As someone else put it succinctly, there's art and then there's content. AI generated stuff is content.
And not to be too dismissive of copywriters, but old Buzzfeed style listicles are content as well. Stuff that people get paid pennies per word for, stuff that a huge amount of people will bid on on a gig job site like Fiverr or what have you is content, stuff that people churn out by rote is content.
Creative writing on the other hand is not content. I won't call my shitposting on HN art, but it's not content either because I put (some) thought into it and am typing it out with my real hands. And I don't have someone telling me what I should write. Or paying me for it, for that matter.
Meanwhile, AI doesn't do anything on its own. It can be made to simulate doing stuff on its own (by running continuously / unlimited, or by feeding it a regular stream of prompts), but it won't suddenly go "I'm going to shitpost on HN today" unless told to.
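That point can be made concrete with a toy sketch (the function names here are invented for illustration): the appearance of autonomy comes entirely from an outer loop that a human wrote and started, not from the model itself.

```python
# A stubbed "model": a pure function from prompt to text.
# It never runs unless something calls it.
def generate(prompt: str) -> str:
    return f"response to: {prompt}"

def agent_loop(prompts):
    """Simulated 'autonomy': a human-written scheduler feeding the
    model a regular stream of prompts. Remove the loop and the model
    does nothing at all."""
    transcript = []
    for prompt in prompts:  # the human-supplied schedule
        transcript.append(generate(prompt))
    return transcript

out = agent_loop(["check HN", "draft a comment"])
```

The model here is inert between calls; "I'm going to shitpost on HN today" only ever originates from whoever wrote and launched the loop.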
…and the sting is that the majority of people employed in creative fields are hired to produce content, not art. AI makes this blatantly clear with no fallbacks to ease the mind.
This post rhymes with a great quote from Joseph Weizenbaum:
"The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!"
Perhaps we need more collective action & coordination?
I don’t see how we could politically undermine these systems, but we could all do more to contribute to open source workarounds.
We could contribute more to smart tv/e-reader/phone & tablet jailbreak ecosystems. We could contribute more to the fediverse projects. We could all contribute more to make Linux more user friendly.
I admire volunteer work, but I don't think we should focus too hard on paths forward that summarize to "the volunteers need to work harder". If we like what they're doing, we should find ways to make it more likely to happen.
For instance, we could forbid taxpayer money from being spent on proprietary software and on hardware that is insufficiently respectful of its user, and we could require that 50% of the money not spent on the now forbidden software now be spent on sponsorships of open source contributors whose work is likely to improve the quality of whatever open alternatives are relevant.
Getting Microsoft and Google out of education would be huge re: denormalizing the acceptance of EULAs and letting strangers host things you rely on.
I do think AI involvement in programming is inevitable; but at this time a lot of the resistance is because AI programming currently is not the best tool for many jobs.
To better the analogy: I have a wood stove in my living room, and when it's exceptionally cold, I enjoy using it. I don't "enjoy" stacking wood in the fall, but I'm a lazy nerd, so I appreciate the exercise. That being said, my house has central heating via a modern heat pump, and I won't go back to using wood as my primary heat source. Burning wood is purely for pleasure, and an insurance policy in case of a power outage or malfunction.
What does this have to do with AI programming? I like to think that early central heating systems were unreliable, and often it was just easier to light a fire. But, it hasn't been like that in most of our lifetimes. I suspect that within a decade, AI programming will be "good enough" for most of what we do, and programming without it will be like burning wood: Something we do for pleasure, and something that we need to do for the occasional cases where AI doesn't work.
For you it's "purely for pleasure," for me it's for money, health and fire protection. I heat my home with my wood stove to bypass about $1,500/year in propane costs, to get exercise (and pleasure) out of cutting and splitting the wood, and to reduce the fuel load around my home. If those reasons went away I'd stop.
That's a good metaphor for the rapid growth of AI. It is driven by real needs from multiple directions. For it to become evitable would take coercion, or the removal of multiple genuine motivators. People who think we can just say no must be getting a lot less value from it than I do day to day.
You may be saving money but wood smoke is very much harmful to your lungs and heart according to the American Lung and American Heart Associations + the EPA. There's a good reason why we've adopted modern heating technologies. They may have other problems but particulate pollution is not one of them.
> For people with underlying heart disease, a 2017 study in the journal Environmental Research linked increased particulate air pollution from wood smoke and other sources to inflammation and clotting, which can predict heart attacks and other heart problems.
> A 2013 study in the journal Particle and Fibre Toxicology found exposure to wood smoke causes the arteries to become stiffer, which raises the risk of dangerous cardiac events. For pregnant women, a 2019 study in Environmental Research connected wood smoke exposure to a higher risk of hypertensive disorders of pregnancy, which include preeclampsia and gestational high blood pressure.
I acknowledge that risk. But I think it is outweighed by the savings, exercise and reduced fire danger. And I shouldn't discount the value to me of living in light clothing in winter when I burn wood, but heavily dressed to save money when burning propane. To stop me you'd have to compel me.
This is not a small thing for me. By burning wood instead of gas I gain a full week of groceries per month all year!
I acknowledge the risk of AI too, including human extinction. Weighing that, I still use it heavily. To stop me you'd have to compel me.
Cow A: "That building smells like blood and steel. I don't think we come back out of there"
Cow B: "Maybe. But the corn is right there and I’m hungry. To stop me, you'd have to compel me"
Past safety is not a perfect predictor of future safety.
I'm burning dead wood in a very high wildfire area. It is going to burn. The county takes a small percentage away ... to burn in huge pits. It really isn't possible that much if any of this wood will just slowly decay. All I'm doing is diverting a couple of cords a year to heat my home. There is additional risk to me, but I'm probably deferring the risk to others by epsilon by clearing a scintilla.
Probably the risk involved in cutting down trees is more than breathing in wood smoke. I'm no better at predicting which way a tree will fall than which horse will win.
The industrial revolution was pushed down the throats of a lot of people, who were sufficiently upset by the developments that they invented communism, universal suffrage*, modern policing*, health and safety laws, trade unions, recognisably modern state pensions*, the motor car (because otherwise we'd be knee-deep in horse manure), zoning laws, passports, and industrial-scale sewage pumping.
I do wonder who the AI era's version of Marx will be, and what their version of the Communist Manifesto will say. IIRC, the previous times this has been said on HN, someone pointed out Ted Kaczynski's manifesto.
* Policing and some pensions and democracy did exist in various fashions before the industrial revolution, but few today would recognise their earlier forms as good enough to deserve those names today.
I’m all for a good argument that appears to challenge the notion of technological determinism.
> Every choice is both a political statement and a tradeoff based on the energy we can spend on the consequences of that choice.
Frequently I’ve been opposed to this sort of sentiment. Maybe it’s me, the author’s argument, or a combination of both, but I’m beginning to better understand how this idea works. I think that the problem is that there are too many political statements to compare your own against these days and many of them are made implicit except among the most vocal and ostensibly informed.
I think this is a variant of "every action is normative of itself". Using AI states that use of AI is normal and acceptable. In the same way that for any X doing X states that X is normal and acceptable - even if accompanied by a counterstatement that this is an exception and should not set a precedent.
I really don't like the "everything is political" sentiment. Sure, lots of things are or can be, but whenever I see this idea, it usually comes from people who have a very specific mindset that's leaning further in one direction on a political spectrum and is pushing their ideology.
To clarify, I don't think pushing an ideology you believe in by posting a blog post is a bad thing. That's your right! I just think I have to read posts that feel like they have a very strong message with more caution. Maybe they have a strong message because they have a very good point - that's very possible! But often times, I see people using this as a way to say "if you're not with me, you're against me".
My problem here is that this idea that "everything is political" leaves no room for a middle ground. Is my choice to write some boilerplate code using gen AI truly political? Is it political because of power usage and ongoing investment in gen AI?
All that to say, maybe I'm totally wrong, I don't know. I'm open to an argument against mine, because there's a very good chance I'm missing the point.
Your introductory paragraph comes across very much like "people who want to change the status quo are political and people who want to maintain it are not"; which is clearly nonsense. "how things are is how they should be" is as much of an ideology, just a less conspicuous one given the existing norms.
>Is my choice to write some boiler plate code using gen AI truly political?
I am much closer to agreeing with your take here, but as you recognise, there are lots of political aspects to your actions, even if they are not conscious. Not intentionally being political doesn't mean you are not making political choices; there are many more that your AI choice touches upon; privacy issues, wealth distribution, centralisation, etc etc. Of course these choices become limited by practicalities but they still exist.
I don't think you're wrong so much as you've tread into some muddy semantic water. What did the OP mean by 'inevitable', 'political', or 'everything'? A lot hangs on the meaning. A lot of words could be written defending one interpretation over another, and the chance of changing anyone's mind on the topic seems slim.
Very good point. At that point, though, I think it becomes hard to read the post and take anything specific from it. Not all writing has to be specific, but now I'm just a bit confused as to what the author was actually saying.
But you do make a good point that those words are all potentially very loaded.
> Sure, lots of things are or can be, but whenever I see this idea, it usually comes from people who have a very specific mindset that's leaning further in one direction on a political spectrum and is pushing their ideology.
This is also my core reservation against the idea.
I think that the belief only holds weight in a society that is rife with opposing interpretations about how it ought to be managed. The claim itself feels like an attempt to force someone toward the interests of the one issuing it.
> Is my choice to write some boilerplate code using gen AI truly political? Is it political because of power usage and ongoing investment in gen AI?
Apparently yes it is. This is all determined by your impressions on generative AI and its environmental and economic impact. The problem is that most blog posts are signaling toward a predefined in-group either through familiarity with the author or by a preconceived belief about the subject where it’s assumed that you should already know and agree with the author about these issues. And if you don’t you’re against them.
For example: I don't agree that everything is inevitable. But as I read the blog post in question, I surmised that it's an argument that human beings are not at the absolute mercy of technological progress. And I can agree with that much. So this influences how I interpret the claim “nothing is inevitable”, in addition to the title of the post and the rest of the article (and all of this is additionally informed by everything I'm trying to express to you surrounding this very paragraph).
I think this speaks to the present problem of how “politics” is conflated with one's worldview, culture, etc., instead of being something distinct from, though not necessarily separable from, those things.
Politics ought to point toward a more comprehensive way of seeing the world, but this isn't the case for most people today, and I suspect that many people who claim to have comprehensive convictions are only 'virtue signaling'.
A person with comprehensive convictions about the world and how humans ought to function in it can better delineate the differences and necessary overlap between politics and other concepts that run downstream from their beliefs. But what do people actually believe in these days? That they can summarize in a sentence or two and that can objectively/authoritatively delineate an “in-group” from an “out-group” and that informs all of their cultural, political, environmental and economic considerations, and so on...
Online discourse is being cleaved into two sides vying for digital capital over hot air. The worst position you can take is a critical one that satisfies neither opponent.
You should keep reading all blog posts with a critical eye toward the appeals embedded within the medium. Or don’t read them at all. Or read them less than you read material that affords you with a greater context than the emotional state that the author was in when they wrote the post before they go back to releasing software communiques.
>Garbage companies using refurbished plane engines to power their data centers is not inevitable
Was wondering what the beef with this was until I realized author meant "companies that are garbage" and not "landfill operators using gas turbines to make power". The latter is something you probably would want.
There are many more. Aeroderivative gas turbines are not exactly new, and they have shorter lead times than regular gas turbines right now, so anyone who can get their hands on them has been buying them.
Individual specific things are not inevitable. But many generic concepts are because of various market, social and other forces.
There's such a thing as "multiple invention", precisely because of this. Because we all live in the same world, we have similar needs and we have similar tools available. So different people in different places keep trying to solve the same problems, build the same grounding for future inventions. Many people want to do stuff at night, so many people push at the problem of lighting. Edison's particular light bulb wasn't inevitable, but electric lighting was inevitable in some form.
So with regard to generative AI: many people have worked in this field for a long time. I played with fractals and texture generators as a kid. Many people want it, for many reasons. Artwork is expensive. Artwork is sometimes too big. Or too fixed; maybe we want variation. There are many reasons to push at the problem, and it's not coordinated. I had a period way back where I fiddled with generating assets for Second Life, because I found it personally interesting. And I'm sure I was not the only one by any means.
That's what I understand by "inevitable", that without any central planning or coordination many roads are being built to the same destination and eventually one will get there. If not one then one of the others.
>Your computer changing where things are on every update is not inevitable.
This, a million times. I honestly hate interacting with all software and 90% of the internet now. I don't care about your "U""X" front-end garbage. I much prefer text-based sites like this one.
As my family's computer guy, my dad complains to me about this. And there's no satisfactory answer I can give him. My mom told me last year she is "done learning new technology" which seems like a fair goal but maybe not a choice one can make.
You ever see those "dementia simulator" videos where the camera spins around and suddenly all the grocery store aisles are different? That's what it must be like to be less tech literate.
It's been driving me nuts for at least a decade. I can't remember which MacOS update it was, but when they reorganized the settings to better align with iOS, it absolutely infuriated me. Nothing will hit my thunder button like taking my skills and knowledge away. I thought I might swear off Mac forever. I've been avoiding upgrading from 13 now. In the past couple of updates, the settings for displays is completely different for no reason. That's a dialog that one doesn't use very often, except for example, when giving a presentation. It's pretty jarring to plug in on stage in front of dozens or even hundreds of people and suddenly you have to figure out a completely unfamiliar and unintuitive way of setting up mirroring.
I blame GUIs. They disempower users and put them at mercy of UX "experts" who just rearrange the deck chairs when they get bored and then tell themselves how important they are.
The MacOs settings redesign really bothered me too. Maybe it's the 20+ years of muscle memory, or maybe the new one really is that bad, but I find myself clicking around excessively and eventually giving up and using search. I'm with you here.
I personally agree with everything you say, and am equally frustrated with (years later) not being able to find MacOS settings quickly - though part of that's due to searching within settings being terrible. Screen mirroring is the worst offender for me, too.
However, I support ~80 non-technical users for whom that update was a huge benefit. They're familiar with iOS on their phones, so the new interface is (whaddya know) intuitive for them. (I get fewer support calls, so it's of indirect benefit to me, too.) I try to let go of my frustration by reminding myself that learning new technology is (literally) part of my job description, but it's not theirs.
That doesn't excuse all the "moving the deck chairs" changes - Tahoe re-design: why? - but I think Apple's philosophy of ignoring power users like us and aligning the settings interfaces was broadly correct.
Funny story: when my family first got a Windows computer (3.1, so... 1992 or '93?) my first reaction was "this sucks. Why can't I just tell the computer what to do anymore?" But, obviously, GUIs are the only way the vast majority will ever be able to interact with a device - and, you know, there are lots of tasks for which a visual interface is objectively better. I'd appreciate better CLI access to MacOS settings: a one-liner that mirrors to the most recently-connected display would save me so much fumbling. Maybe that's AppleScript-able? If I can figure it out I'll share here.
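On the one-liner idea, here's a hypothetical sketch using the third-party `displayplacer` tool (`brew install displayplacer`), which can set mirroring from the command line. The `Persistent screen id` parsing and the `id:MAIN+MIRROR` mirroring syntax are my assumptions from its README, so check them against your own `displayplacer list` output before trusting this:

```shell
# Build a mirroring command from `displayplacer list`-style output:
# mirror the second listed persistent screen onto the first.
mirror_cmd() {
  # reads the listing on stdin, prints the displayplacer invocation
  ids=$(awk -F': ' '/^Persistent screen id/ {print $2}')
  main=$(printf '%s\n' "$ids" | sed -n 1p)
  mirror=$(printf '%s\n' "$ids" | sed -n 2p)
  printf 'displayplacer "id:%s+%s"\n' "$main" "$mirror"
}

# Demo with fake ids (on a real Mac you'd pipe in `displayplacer list`):
printf 'Persistent screen id: AAA-111\nPersistent screen id: BBB-222\n' | mirror_cmd
```

Once you trust the printed command, something like `displayplacer list | mirror_cmd | sh` would do the actual mirroring.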
... Did you just complain about modern technology taking power away from users only to post an AI generated song about it? You know, the thing taking away power from musicians and filling up all modern digital music libraries with garbage?
There's some cognitive dissonance on display there that I'm actually finding it hard to wrap my head around.
> Did you just complain...only to post an AI generated song about it?
Yeah, I absolutely did. Only I wrote the lyrics and AI augmented my skills by giving it a voice. I actually put significant effort into that one; I spent a couple hours tweaking it and increasing its cohesion and punchiness, iterating with ideas and feedback from various tools.
I used the computer like a bicycle for my mind, the way it was intended.
It didn't augment your skills, it replaced skills you lack. If I generate art using DALL-E or Stable Diffusion and then edit it in Krita/Photoshop/etc., that doesn't cover up the fact that I was unable to draw/paint/photograph the initial concept. It didn't augment my skills, it replaced them. If you generate "music" like that, it's not augmenting the poetry you wish to use as lyrics - which may or may not be good in its own right - it's replacing your ability to make music with it.
Computers are meant to be tools that expand our capabilities. You didn't do that; you replaced them. You didn't ride a bike, you called an Uber, because you never learned to drive or couldn't be bothered this time.
AI can augment skills by allowing for creative expression - be it with AI stem separation, neural-network-based distortion effects, etc. But the difference is those are tools to be used together with other tools to craft a thing. A tool can be fully automated - but then, if it is, you are no longer an artist. No more than someone who knows how to operate a CNC machine but can't design the parts.
This is hard for some people to understand, especially those with an engineering or programming background, but there is a point to philosophy here: there is innate, valuable knowledge in how a thing was produced. If I find a stone arrowhead buried under the dirt on land I know was once used for hunting by Native Americans, that arrowhead has intrinsic value to me because of its origin - because I know it wasn't made as a replica, and because I found it. There is a sliding scale, shades of gray here. An arrowhead I had verified was actually old but which I did not find is still more valuable than one I know is a replica. Similarly, you can, I agree, slowly un-taint an AI work with enough input, but not fully. If a digital artist painted something by hand and then had Stable Diffusion inpaint a small region as part of their process, that still bothers many people - it adds a taint of the tool, because the artist did not take the time to do what the tool did and mentally weigh each pixel and each line.
By using Suno, you're firmly in the "This was generated for me" side of that line for most people, certainly most musicians. That isn't riding a bike. That's not stretching your muscles or feeling the burn of the creative process. It's throwing a hundred dice, leaving the 6's up, and throwing again until they're all 6's. Sure, you have input, but I hardly see it as impressive. You're just a reverse centaur: https://doctorow.medium.com/https-pluralistic-net-2025-09-11...
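The dice metaphor holds up numerically; here's a quick illustrative simulation (mine, not the commenter's - the function name and parameters are made up for the example):

```python
import random

def rounds_until_all_sixes(n_dice=100, seed=42):
    """Keep the 6s, reroll the rest: count rounds until every die shows 6."""
    rng = random.Random(seed)
    remaining = n_dice  # dice not yet showing a 6
    rounds = 0
    while remaining > 0:
        rounds += 1
        # each rerolled die comes up 6 with probability 1/6 and is set aside
        remaining = sum(1 for _ in range(remaining) if rng.randint(1, 6) != 6)
    return rounds

# Roughly 1/6 of the remaining dice lock in every round, so all 100 dice
# end up as 6s after only a few dozen rounds on average - persistence, not skill.
print(rounds_until_all_sixes())
```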
And for the record, I could write a multi-page rant about how Suno is not actually what I want; its shitty UI (which will no doubt change soon) and crappy reinvention of the DAW are absolutely underpowered for tweaking and composing songs how I want. We should instead be integrating these new music-creation models into professional tools, and making the AI tools less of a push-button one-stop shop - giving better control rather than just meekly pawing in the direction of what you want with prompts.
"Ads are not inevitable" is a pretty bold statement that really damages the argument. Mixing fundamental things like that in with Juicero prevents a good-faith discussion.
Ads are one of the oldest and most fundamental parts of a modern society.
Mixing obviously dumb things in with fundamental ones doesn't improve the point.
The idea that the future is architected by our choices (or lack of it) is the crux of one of the opportunity spaces at the Advanced Research and Invention Agency (ARIA) in the UK.
(full disclosure: I'm working with the programme director on helping define the funding programme, so if you're working on related problems, by all means share your thoughts on the site or by reaching out!)
An interesting perspective on this is why Facebook kept the Sun Microsystems sign at their Palo Alto HQ.
Nothing in tech is inevitable, nothing in tech is irreplaceable, nothing in tech is permanent.
Shout out to the Juicero example, because so many people out there are showing that AI, too, can be "just squeeze the bag with your hands".
Mobile device updates are the worst for aging parents. These devices are getting more complex to use, not easier. You shouldn't have to upend your life once a year because UX design choices make you lose track of what you think is important, how to find it, or how to disable/enable features you don't want or used to have.
At the highest level, this becomes a question of whether we live in a predetermined universe or not. Historians do debate the Great Man vs Great Forces narrative of human development, but even if many historical events were "close calls" or "almost didn't happens," it doesn't mean that the counterfactual would be better. Discrete things like the Juicero might not have happened, but ridiculous "smart internet-connected products" that raised lots of VC money during the ZIRP era feel inevitable to me.
Do we really think LLMs and the generative AI craze would not have occurred if Sam Altman had chosen to stay at Y Combinator or otherwise got hit by a bus? People clearly like to interact with a seemingly smart digital agent, as demonstrated as early as ELIZA in 1966 and SmarterChild in 2001.
My POV is that human beings have innate biases and preferences that tend to manifest in what we invent and adopt. I don't personally believe in a supernatural God but many people around the world do. Alcoholic beverages have been independently discovered in numerous cultures across the world over the centuries.
I think the best we can do is usually try to act according to our own values and nudge it in a direction we believe is best (both things OP is doing so this is not a dunk on them, just my take on their thoughts here).
I'd still say, this is the future, like it or not. See how much capital has been poured into the cauldron?
None of the items is technically inevitable, but the world runs on capital, and capital alone. Tech advances are just a byproduct of capital snooping around trying to increase itself.
You are mistaken. The future is defined by the common man on the street. Those are the same people who use WhatsApp, Facebook, and Instagram heavily and regularly. They will soon become the biggest drivers of AI adoption.
The techies are a drop in the ocean. You may build a new tech or device, but adoption is driven by the crowd, who just drift along without a pinch of resistance.
Because the average media-incompetent Joe is easy to influence. Just spread some tinfoil-hat theories, or fearmonger publicly, and they have nothing else to talk about. They wouldn't even understand how bad big tech is if you forced them to research it for a month. Humans are way more stupid than we believe (just look at all the people who'd love to cuddle a wild bear they meet in Yellowstone). That we managed to get on top of the food chain is a miracle.
I agree with the core point.
I'd add that modern marketing isn't really aiming for heterogeneity, but for eventually producing behaviorally homogeneous user clusters. Many product and tech choices today are about shaping user behavior so users become more predictable and therefore more monetizable.
During this transition phase, the system tolerates (or creates) heterogeneity, but at the cost of complexity, friction, and inefficiency, which are mostly pushed onto users.
In that sense, this is less about "the future" and more about engineering markets through data, with trade-offs that are rarely made explicit.
The author says: "Not being in control of course makes people endlessy frustrated, but at the same time trying to wrestle control from the parasites is an uphill battle that they expect to lose, with more frustration as a result."
While this reaction is understandable, it is difficult to feel sympathy when so few people are willing to invest the time and effort required to actually understand how these systems work and how they might be used defensively. Mastery, even partial, is one of the few genuine avenues toward agency. Choosing not to pursue it effectively guarantees dependence.
Ironically, pointing this out often invites accusations of being a Luddite or worse.
It’s a trope now that loads of internet users will complain about Mac OS & Windows, while digging in their heels against switching to Linux. Just take your medicine people.
The fact is, most of the systems people use in their day-to-day that behave the way described simply require no mastery whatsoever. If your product, service, or device is locked behind learning a new skill, any skill, that will inherently limit the possible size of the audience. Far more than most realize. We can rail against this reality, but it is unforgiving. The average person who is struggling to put food on the table only has so many hours in the week to spare to stick it to the man by learning a new operating system.
It seems we’re all experiencing a form of sticker shock, from the bill for getting ease-of-use out of software that we demanded for the past few decades.
Fatalism is a part of the language of fascism. Statements like, "it is inevitable," are supposing that we cannot change the future and should submit to what our interlocutor is proposing. It's a rhetorical tool to avoid critique. Someone who says, "programming as a profession is over, AI will inevitably replace developers so learn to use it and get with the program," isn't inviting discussion. But this is not how the future works and TFA is right to point out that these things are rarely ever, "inevitable."
What is inevitable? The heat death of the universe. You probably don't need to worry about it much.
Everything else can change. If someone is proposing that a given technology is, "inevitable," it's a signal that we should think about what that technology does, what it's being used to do to people, and who profits from doing it to them.
Everything is inevitable. If it happened, then it couldn't have happened otherwise. "Your computer sending screenshots to microsoft so they can train AIs on it" was inevitable, because that's what incentives pushed them to do. Vocal opposition and boycotting might become a different kind of incentive, but in most cases it doesn't work. The fact of the matter is that corporations are powerful, shareholders are powerful, the collective mass of indifferent consumers are powerful, while you are powerless.
> The fact of the matter is that... you are powerless.
Trivialities don't add anything to the discussion. The question is "Why?" and then "How do we change that?". Even incomplete or inaccurate attempts at answering would be far more valuable than a demonstration of hand-wringing powerlessness.
This is the inevitability of unfettered capitalism. The pressure is towards generating wealth, with the intended hope that the side effect will be that this produces net 'good' for society. It has worked (to varying degrees) and has enabled the modern world. But it may well be running out of steam.
I do not think that the current philosophical world view will enable a different path. We've had resets or potential resets, COVID being a huge opportunity, but I think neither the public nor the political class had the strength to seize the moment.
We live in a world where we know the price of everything and the value of nothing. It will take dramatic change to put 'value' back where it belongs and relegate price farther down the ladder.
COVID was in many ways the opposite of a reset. It further solidified massive wealth disparity and power concentration. The positive feedback loop doesn’t appear to have a good off-ramp.
The off-ramp to the loop was always a crisis like a recession or war. Covid could have been such an example. If the government hadn't bailed everyone out, the pandemic would have been a mass casualty event for companies and their owners. The financial crisis of 2008 also would have been such an event if not for the bailouts. The natural state of the economy is boom and bust, with wealth concentration increasing during the booms and dropping during the busts. But it turns out the busts are really painful for everyone, and so our government has learned how to mostly keep them from happening. Turns out people prefer the ever-increasing concentration of wealth and power that this results in over the temporary pain of economic depression.
As a tech person the older I get the less tech interests me.
Analog is where I get the fun from: no more smart watch, smart TV, Spotify, connected-home things, automatic coffee machine. No thank you.
Sure, the real people in your life, including most "normies" who only use tech to get stuff done, are using ChatGPT, but that doesn't make it "inevitable".
Everyone who runs a Google search and doesn't read past the Gemini result uses an LLM. That's easily a majority without even getting into other products.
This article has such palpable disdain for the people who consume these products that it makes me wonder why the author even cares what kind of future they inhabit.
> But what is important to me is to keep the perspective of what consitutes a desirable future, and which actions get us closer or further from that.
Desirable to whom? I certainly don't think the status quo is perfect, but I do think dismissing it as purely the product of some faceless cadre of tech oligarchs' desires is arrogant. People do have agency, the author just doesn't like what they have chosen to do with it...
The path we're on is not inevitable. But narratives keep it locked in.
Narratives are funny because they can be completely true and a total lie.
There's now a repeated narrative about how the AI bubble is like the railroads and dotcom and therefore will end the same. Maybe. But that makes it seem inevitable. But those who have that story can't see anything else and might even cause that to happen, collectively.
We can frame things with stories and determine the outcomes by them. If enough people believe that story, it becomes inevitable. There are many ways to look at the same thing and many different types of stories we can tell - each story makes different things inevitable.
So I have a story I'd like to promote:
There were once these big companies that controlled computing. They had it locked down. Then came IBM clones, and suddenly the big monopolies couldn't keep up with the innovation of the larger marketplaces that opened up with open hardware interfaces. And later, when the internet was new and exciting, CompuServe and AOL were so obviously going to control it. But then open protocols and services won, because how could they not? It was inevitable that a locked-down walled garden could not compete with the dynamism that open protocols allowed.
Obviously now, this time is no different. And, in fact, we're at an inflection point that looks a lot like those other times in computing that favored tiny upstarts that made lives better but didn't make monopoly-sized money. The LLMs will create new ways to compete (and have already) that big companies will be slow to follow. The costs of creating software will go down, so companies will have to compete on things that align with users' interests.
Users' agency will have to be restored. And open protocols will again win over closed, for the same reasons they did before. Companies that try to compete with the old, cynical model will rapidly lose customers and will not be able to adapt. The money to be made in software will decline, but users will have software in their interests. The AI megacorps have no moat - Chinese downloadable models are almost as good. People will again control their own data.
I remember there was a guy who regularly posted tech predictions and then every year adjusted and reflected on his predictions. Can anyone help me find it?
I personally really like Apple Vision and the bar it’s pushing. However, using one of these devices long term in a walled garden sounds like a nightmare for privacy and marketing abuse of users.
Inevitability just means that something WILL happen, and many of those items are absolutely inevitable:
AI exists -> vacation photos exist -> it's inevitable that someone was eventually going to use AI to enhance their vacation photos.
As one of those niche power users who runs servers at home to be beholden to fewer tech companies, I still understand that most people would choose Netflix over a free jellyfin server they have to administer.
> Not being in control of course makes people endlessy frustrated
I regret to inform you, OP, that this is not true. It's true for exactly the kind of tech people like us who are already doing this stuff, because it's why we do it. Your assumption that people who don't have simply "given up", as opposed to actively choosing not to spend their time managing their own tech environment, is, I think, biased by your predilection for technology.
I wholeheartedly share OP's dislike of techno-capitalism(derogatory), but OP's list is a mishmash of
1) technologies, which are almost never intrinsically bad, and 2) business choices, which usually are.
An Internet-connected bed isn't intrinsically bad; you could set one up yourself to track your sleep statistics that pushes the data to a server you control.
It's the companies and their choices to foist that technology on people in harmful ways that makes it bad.
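As a sketch of that self-hosted alternative (everything here - the endpoint, the record fields - is made up for illustration, not a real smart-bed API): a minimal local server receives the sleep data, so nothing leaves your network:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

received = []  # records collected by the server you control

class SleepLogHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # accept a JSON sleep record and stash it locally
        length = int(self.headers["Content-Length"])
        received.append(json.loads(self.rfile.read(length)))
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# port 0 = pick any free port; bind to loopback only
server = HTTPServer(("127.0.0.1", 0), SleepLogHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# the "bed" pushes one night's statistics to your own endpoint
record = {"night": "2024-01-01", "hours_asleep": 7.5, "wakeups": 2}
req = Request(
    f"http://127.0.0.1:{server.server_port}/sleep",
    data=json.dumps(record).encode(),
    headers={"Content-Type": "application/json"},
)
urlopen(req)
server.shutdown()

print(received[0]["hours_asleep"])  # → 7.5, and the data never left your network
```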
This is the gripe I have with anti-AI absolutists: you can train AI models on data you own, to benefit your and other communities. And people are!
But companies are misusing the technology in service of the profit motive, at the expense of others whose data they're (sometimes even illegally) ingesting.
Place the blame in the appropriate place. Something something, hammers don't kill people.
I think this person is too optimistic. Everything that will give powerful people money or influence and not get them killed is pretty much near inevitable.
I've been thinking a lot lately, challenging some of my long-held assumptions...
Big tech, the current AI trend, social media websites serving up rage bait and misinformation (not to imply this is all they do, or that they are ALL bad), the current political climate and culture...
In my view, all of these are symptoms, and the cause is the perverse, largely unchallenged neoliberal world the West has been living in for the last 30-40 years (at least).
Profit maximising comes before everything else. (Large) Corporate interests are almost never challenged. The result? Deliberately amoral public policy that serves the rich and powerful.
There are oases in this desert (which is, indeed, not inevitable), thankfully. As the author mentioned, there's FOSS. There's indie-created games/movies. There's everyday goodness between decent people.
A fine piece on what it's really about: the feeling of losing one's joy, and the possible applause for doing a good job.
But the inevitable is not a fact; it's a rigged fake that is, unfortunately, adopted by humans, who flock in such large groups echoing the same sentiments that to those people it seems real and inevitable.
Humans in general are extremely predictable - so predictable that they seem utterly stupid and imbecilic.
I like the “wasn’t inevitable” list. The fact that two US corporations control 99% of phones is another one that feels about as comfortable as a rock in my boot and I hope this too is not inevitable, in the long run.
Imagine if the 80s and 90s had been PC vs Mac but you had to go to IBM for one or more critical pieces of software or software distribution infrastructure. The Cambrian explosion of IBM-PC compatibility didn't happen overnight, of course. I don't think it will be (or ought to be) inevitable that phones remain opaque and locked down forever, but the day when freedom finally comes doesn't really feel like it's just around the corner.
> The Cambrian explosion of IBM-PC compatibility didn't happen overnight of course.
There's a recording of an interview with Bill Gates floating around where he pretty much takes credit for that. He claims (paraphrasing because I listened to it almost 20 years ago) that he suggested a lot of the hardware to IBM because he knew he could repurpose DOS for it.
I also phrased this badly. Cambrian explosions by definition happen in very short time frames (“overnight”), but the conditions required to set off the explosion take a long time to brew and are few and far between.
We’re two decades into the smartphone era and my hope is that we’re still in the DEC / VAX / S370 stage, with the “IBM-PC” stage just around the corner still to come.
Also imagine that basic interactions were mediated by those monopolies: you had to print your bus ticket personally with software only available on your IBM.
I feel like it's time for a new direction in tech. Open source was a lot of fun (because I was young), but then it sort of became the default for a lot of infrastructure. I liked PG's take on startups and some cool ones came out of that era. But now the whole thing is collapsing on itself and "Silicon Valley" is broadly becoming disliked if not reviled by a lot of people, with cartoonish figures like Musk, or Joe Lonsdale calling for public hangings. The sense of wonder and innovation feels gone to me. It's still out there somewhere, though, I think, and it'd be nice to recover some of that. LLM's owned by megacorps aren't really where it's at for me - I want to see small teams in garages making something new and fun.
"This post is to underline that Nothing is inevitable."
Inevitability and being a holdout are conceptually different things, and you can't expect society as a whole to care about or respect your personal space with regard to it.
They listed smartphones becoming a requirement as an example. That's great, have fun with your flip phone, but that isn't for most people.
Just because you don't find something desirable doesn't mean you deserve extra attention or a special space. It also doesn't mean you can call people catering to the wants of the masses "grifters".
As a counterpoint, consider reading Kevin Kelly’s “The Inevitable.” I also avoid most social media and have an aversion to all things “smart,” but these may actually be “inevitable.” In a capitalist society, these forces feel inevitable because there’s no Anubis balancing the scales. Only local decisions, often short-sighted. When demand is being created in a market, people will compete to capture it any way possible. Some of those ideas will last, and others will be a flash in the pan. It’s not clear to me that if you reran the tapes of history again, you’d get a significantly different outcome that didn’t include things like short-form video to exploit attention.
> Most old people in particular (sorry mom) have given up and resigned themselves to drift wherever their computing devices take them, because under the guise of convenience, everything is so hostile that there is no point trying to learn things, and dark patterns are everywhere. Not being in control of course makes people endlessy frustrated, but at the same time trying to wrestle control from the parasites is an uphill battle that they expect to lose, with more frustration as a result.
I'm pretty cynical, but one ray of hope is that AI-assisted coding tools have really brought down the skill requirement for doing some daunting programming tasks. E.g. in my case, I have long avoided doing much web or UI programming because there's just so much to learn and so many deep rabbit holes to go down. But with AI tools I can get off the ground in seconds or minutes, and all that cruddy HTML/JavaScript/CSS with bazillions of APIs that I could go spend time studying and tinkering with has already been digested by the AI. It spits out some crap that does the thing I mostly want. ChatGPT 5+ is pretty good at navigating all the Web APIs, so it was able to generate some WebAudio mini apps to start working with. The code looks like crap, so I hit it with a stick and get it to reorganize the code a little and write some comments, and then I can dive in and do the rest myself. It's a starting point, a prototype. It got me over the activation energy hump, and now I'm not so reluctant to actually try things out.
But like I said, I'm cynical. Right now the AI tools haven't been overly enshittified to the point they only serve their masters. Pretty soon they will be, and in ways we can't yet imagine.
Everything mentioned in the article was inevitable (maybe not in this exact form, but as a principle) for us fans of Jacques Ellul and Uncle Ted. At least the money was quite good for many in the industry, for some of them it still is.
I mean... of course it's not the future. It's the present. I have been seeing AI-generated posters and menus in real life on a daily basis for about half a year. AI upscaling is completely normalized for average users. I don't know about other fields, but for graphic design we are well past this discussion.
Just like TikTok. The author doesn't think TikTok is inevitable, and I fully agree with them! But in our real timeline TikTok exists. So TikTok is, unquestionably, the present. Wide adoption of gen-AI is present.
It's technically not inevitable, sure. What has to happen for this not to become the future though, and what are the odds of that happening?
The rational choice is to act as if this was ensured to be the future. If it ends up not being the case, enough people will have made that mistake that your failure will be minuscule in the grand scheme of things, and if it's not and this is the future, you won't be left behind.
Sure beats sticking your head in the sand and most likely fucking up, or perhaps being right in the end, standing between the flames.
Another post arguing against the profit mechanism and modern commercial capitalism without realizing it. It's arguing against the symptoms. The problem is a system that incentivizes creating markets where a market was not needed and convincing people it will make their lives better. Yes, the problem is cultural, but it's also deeply ingrained in our economic protocol now. You can't scream into the void about specific problems and expect change unless you look at the root causes.
Following this thread takes you into political territory and governmental/regulatory capture, which I believe is the root issue that cannot be solved in late stage capitalism.
We are headed towards (or already in) corporate feudalism and I don't think anything can realistically be done about it. Not sure if this is nihilism or realism but the only real solution I see is on the individual level: make enough money that you don't have to really care about the downsides of the system (upper middle class).
So while I agree with you, I think I just disagree with the little bit you said about "can't expect anything to change without-" and would just say: can't expect anything to change except through the inertia of what is already in place.
I think AR/VR definitely turns off a lot of people, me included. Further it is seen as anti-social and unnecessary to many. I think the lackluster sales prove the “not inevitable” point.
I don't necessarily disagree with the crux of your point. But I suspect the lackluster sales also had something to do with the $3,500 price tag. Meta has sold over 20x as many Oculus units.
For any point in the past, this is the future. The future is not what is "best" by some metric, nor what is right or should be; it is what actually happens. It's like saying that evolution's goal is intelligence or whatever - really, it's just whatever sticks to the wall.
Can we change the direction of how things are going? Yes, but you must understand what the "we" there means, at least in the context of a global change of direction.
Rather than pointlessly lament the destructive power of fire, we should be spending our time learning how to fire-proof our homes.
These are all natural forces, they may be human forces but they are still natural forces. We can't stop them, we can only mitigate them -- and we _can_ mitigate them, but not if we just stick our fingers in our ears and pretend it's not going to happen.
It absolutely is. Whatever you can imagine. All of it.
After reading and watching things by quite a few historians, one of the points that sticks out is: most things in history were not inevitable. Someone had to do them and other people had to help them or at least not oppose them.
A lot of history's turning points were much closer than we think.
It had a huge impact on world history: it indirectly led to German unification, it possibly led to both world wars in the form we know them, it probably impacted colonial wars and, as a result, the territory of many former colonies, and probably also their current populations (by determining where colonists came from and how many of them).
I'm fairly sure there were a few very close battles during the East India Company conquest of India, especially in the period when Robert Clive was in charge.
Another one for Germany: after Wilhelm I died at 90, his liberal son Frederick III died aged only 56, after a reign of just 99 days. So instead Germany had Wilhelm II as the emperor, a conservative who wrecked all of Bismarck's successful foreign policies.
Oh, Japan attacking Pearl Harbor/the US. If the Japanese Army faction had won the internal struggle and had tried to attack the Soviets again in 1941, the USSR would probably have been toast and the US would probably have intervened only slowly and indecisively.
I can't really remember many others right now, but every country and every continent has had moments like these. A lot of them are sheer bad luck but a good chunk are just miscalculation.
History is full of what-ifs, a lot of them with huge implications for the world.
> Oh, Japan attacking Pearl Harbor/the US. If the Japanese Army faction had won the internal struggle and had tried to attack the Soviets again in 1941, the USSR would probably have been toast and the US would probably have intervened only slowly and indecisively.
Where's Japan getting the oil to fight USSR? The deposits are all too far east [1].
Even with the US out of the war we were denying them steel / oil but the US embargo is much less effective without a pacific navy.
Japan didn't really need to win the war directly, it just needed to put enough boots on the ground to topple the USSR by helping Germany. The Soviets couldn't afford to send a few army groups to East Asia, especially not in 1941 or 1942.
There are plenty of "technology" things that have come to pass, most notably weapons, which were developed but which no one is allowed to use to their fullest due to laws and social norms against harming others. These things are technology, and they would allow someone to attain wealth much more efficiently....
The parent's retort is that they are regulated because society sees them as a threat.
Well, therein is the disconnect, society isn't immutable, and can come to those conclusions about other technologies tomorrow if it so chooses...
Maybe I am misunderstanding, but I disagree a whole lot. The whole problem is that it is inevitable. Technology is an enormous organism. It does not care about ethical or moral considerations. It's a tug-of-war over who can use the most technique to succeed -- if you do not use it, you fall behind. Individuals absolutely cannot shape the future of technology. States can attempt to, but as they make use of technology for propaganda and similar purposes, they too depend on it. It is inevitable as long as you keep digging.
Technology does nothing without humans enacting it, and humans do care about its ethical and moral considerations. Or at least some do, and everyone should. Individuals do collectively shape the future of technology.
A bit mean-spirited I think, but not unreasonable. organism: "a system or organization consisting of interdependent parts, compared to a living being". I removed the "living" part of it to allow for a bit more... abstract thinking.
Game theory just provides a mathematical framework to analyze outcomes of decisions when parts of the system have different goals. Game theory does not claim to predict human behavior (humans make mistakes, are driven by emotion and often have goals outside the "game" in question). Thus game theory is NOT inevitable.
1) Identify coordination failures that lock us into bad equilibria, e.g. it's impossible to defect from the online ads model without losing access to a valuable social graph
2) Look for leverage that rewrites the payoffs for a coalition rather than for one individual: right-to-repair laws, open protocols, interoperable standards, fiduciary duty, reputation systems, etc.
3) Accept that heroic non-participation is not enough. You must engineer a new Schelling point[1] that makes a better alternative the obvious move for a self-interested majority
TLDR, think in terms of the algebra of incentives, not in terms of squeaky wheelism and moral exhortation
[1]https://en.wikipedia.org/wiki/Focal_point_(game_theory)
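A hedged sketch of point (3): in a toy network-effect game (all payoff numbers here are invented for illustration, not drawn from any real data), moving off an ad platform alone is a losing move, but a large enough coalition flips the best response for everyone, creating the new Schelling point:

```python
# Toy network-effect coordination game (hypothetical payoffs).
# Each actor chooses "stay" (ad platform) or "move" (open protocol).
# The value of moving grows with the fraction of others who already moved.

def payoff(choice: str, fraction_moved: float) -> float:
    if choice == "stay":
        return 1.0                # baseline value of the incumbent network
    # Moving is worth more intrinsically (no ads) but needs the social graph.
    return 1.5 * fraction_moved   # value scales with who you can still reach

def best_response(fraction_moved: float) -> str:
    move_better = payoff("move", fraction_moved) > payoff("stay", fraction_moved)
    return "move" if move_better else "stay"

# Heroic non-participation: one person moving alone is worse off.
print(best_response(0.01))   # "stay": moving alone abandons the graph
# A coalition pushes past the tipping point (1.5 * f > 1, i.e. f > 2/3):
print(best_response(0.70))   # "move": the alternative is now the obvious move
```

The point of the sketch is that no individual payoff changed; only the expected behavior of others did, which is exactly what an engineered Schelling point does.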
To me, this inevitability only is guaranteed if we assume a framing of non-cooperative game theory with idealized self-interested actors. I think cooperative game theory[1] better models the dynamics of the real world. More important than thinking on the level of individual humans is thinking about the coalitions that have a common interest to resist abusive technology.
[1]: https://en.wikipedia.org/wiki/Cooperative_game_theory
If cooperative coalitions to resist undesirable abusive technology models the real world better, why is the world getting more ads? (E.g. One of the author's bullet points was, "Ads are not inevitable.")
Currently in the real world...
- Ad frequency goes up: more ad interruptions in TV shows, native ads embedded in podcasts, sponsor segments in YouTube vids, etc.
- Ad space goes up: ads on refrigerator screens, gas pump touch screens, car infotainment systems, smart TVs, Google Search results, the ChatGPT UI, computer-generated virtual ads overlaid on courts and stadiums in sports broadcasts, etc.
What is the cooperative coalition that makes "ads not inevitable"?
For the entirety of the 2010's we had SaaS startups invading every space of software, for a healthy mix of better and worse, and all of them (and a number even today) are running the exact same playbook, boiled down to broad terms: burn investor money to build a massive network-effected platform, and then monetize via attention (some combo of ads, user data, audience reach/targeting). The problem is thus: despite all these firms collecting all this data (and tanking their public trust by both abusing it and leaking it constantly) for years and years, we really still only have ads. We have specifically targeted ads, down to downright abusive metrics if you're inclined and lack a soul or sense of ethics, but they are and remain ads. And each time we get a better targeted ad, the ones that are less targeted go down in value. And on and on it has gone.
Now, don't misunderstand, a bunch of these platforms are still perfectly fine business-wise because they simply show an inexpressible, unimaginable number of ads, and even if they earn shit on each one, if you earn a shit amount of money a trillion times, you'll have billions of dollars. However it has meant that the Internet has calcified into those monolith platforms that can operate that way (Facebook, Instagram, Google, the usuals) and everyone else either gets bought by them or they die. There's no middle-ground.
All of that to say: yes, on balance, we have more ads. However, the advertising industry itself has never been in worse shape. It's now dominated by those massive tech companies to an insane degree. Billboards and other such ads, which were once commonplace, are now solely the domain of ambulance-chasing lawyers and car dealerships. TV ads are no better: production value has tanked, they look cheaper and shittier than ever, and the products are solely geared to the boomers because they're the only ones still watching broadcast TV. Hell, many are straight up shitty VHS replays of ads I saw in the fucking 90's, it's wild. We're now seeing AI video and audio dominate there too.
And going back to tech, the platforms stuff more ads into their products than ever and yet, they're less effective than ever. A lot of younger folks I know don't even bother with an ad-blocker, not because they like ads, but simply because they've been scrolling past them since they were shitting in diapers. It's just the background wallpaper of the Internet to them, and that sounds (and is) dystopian, but the problem is nobody notices the background wallpaper, which means despite the saturation, ads get less attention than ever before. And worse still, the folks who don't block cost those ad companies impressions and resources to serve ads that are being ignored.
So, to bring this back around: the coalition that makes ads "inevitable" isn’t consumers or creators, it's investors and platforms locked into the same anxiety‑economy business model. Cooperative resistance exists (ad‑blockers, subscription models, cultural fatigue), but it’s dwarfed by the sheer scale of capital propping up attention‑monetization. That’s why we see more ads even as they get less effective.
Absolutely a cooperative game: nobody was forced to build them, nobody was forced to finance them, nobody was forced to buy them. These were all willing choices, all going in the same direction. (The same goes for many of the other examples.)
It's not a question of the one person who cannot influence the 7.9B. The question is disparity, much like income disparity.
What happens when fewer and fewer people influence greater and greater numbers? Understanding the risk in that is the point.
Weather predictions are just math, for example, and they are always wrong to some degree.
I'm always surprised how many 'logical' tech people shy away from simple determinism, given how obvious a deterministic universe becomes the more time you spend in computer science, and seem to insist there's some sort of metaphysical influence out there somewhere we'll never understand. There's not.
Math is almost the definition of inevitability. Logic doubly so.
Once there's a sophisticated enough human model to decipher our myriad of idiosyncrasies, we will all be relentlessly manipulated, because it is human nature to manipulate others. That future is absolutely inevitable.
Might as well fall into the abyss with open arms and a smile.
Idk if that's true.
Navier–Stokes may yet be proven Turing-undecidable, meaning fluid dynamics are chaotic enough that we can never completely forecast them no matter how good our measurement is.
Inside the model, the Navier–Stokes equations have at least one positive Lyapunov exponent. No quantum computer can out-run an exponential once the exponent is positive.
And even if we could measure every molecule with infinitesimal resolution, the atmosphere is an open system injecting randomness faster than we can assimilate it. Probability densities shred into fractal filaments (the butterfly effect), making pointwise prediction meaningless beyond the Lyapunov horizon.
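The positive-Lyapunov-exponent argument can be seen in miniature with the logistic map at r = 4, a textbook chaotic system (a toy stand-in, of course, not Navier–Stokes itself), whose Lyapunov exponent is ln 2 > 0:

```python
# Logistic map at r=4: two trajectories starting 1e-6 apart diverge
# exponentially (Lyapunov exponent ln 2 > 0), so pointwise prediction
# fails beyond the "Lyapunov horizon" no matter how good the measurement.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

def gap_history(x0: float, eps: float, steps: int) -> list[float]:
    """Track |x - y| for two trajectories started eps apart."""
    x, y, gaps = x0, x0 + eps, []
    for _ in range(steps):
        x, y = logistic(x), logistic(y)
        gaps.append(abs(x - y))
    return gaps

gaps = gap_history(0.4, 1e-6, 50)
print(gaps[4])     # still tiny: early divergence is bounded by ~4**5 * 1e-6
print(max(gaps))   # order 1: the perturbation has saturated the attractor
```

Past the horizon the two runs are effectively uncorrelated, which is why "just measure better" only buys a few more steps of forecast, not a longer exponent.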
Just because we weren't able to discover all of the laws of physics doesn't mean they don't apply to us.
Now couple the fact that most people are terrible at modeling with the fact that they tend to ignore implicit constraints… the result is something resembling not science but religion.
The concept of Game Theory is inevitable because it's studying an existing phenomenon. Whether or not the researchers of Game Theory correctly model that is irrelevant to whether the phenomenon exists or not.
The models such as Prisoner's Dilemma are not inevitable though. Just because you have two people doesn't mean they're in a dilemma.
---
To rephrase this, Technology is inevitable. A specific instance of it (ex. Generative AI) is not.
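To make that concrete, here's a minimal sketch of the Prisoner's Dilemma with the textbook payoffs (sentence-years, negated so higher is better), checking which strategy profiles are Nash equilibria:

```python
# Prisoner's Dilemma payoff matrix. "C" = cooperate (stay silent),
# "D" = defect (betray). Payoffs are negated prison years, so higher
# is better. Only particular payoffs create a "dilemma" at all.
from itertools import product

PAYOFFS = {  # (my move, their move) -> (my payoff, their payoff)
    ("C", "C"): (-1, -1),
    ("C", "D"): (-3,  0),
    ("D", "C"): ( 0, -3),
    ("D", "D"): (-2, -2),
}

def is_nash(mine: str, theirs: str) -> bool:
    """Neither player gains by unilaterally switching."""
    my_pay, their_pay = PAYOFFS[(mine, theirs)]
    best_mine = all(PAYOFFS[(alt, theirs)][0] <= my_pay for alt in "CD")
    best_theirs = all(PAYOFFS[(mine, alt)][1] <= their_pay for alt in "CD")
    return best_mine and best_theirs

print([p for p in product("CD", repeat=2) if is_nash(*p)])  # [('D', 'D')]
```

With these payoffs mutual defection is the unique equilibrium; change the numbers (i.e., the incentives) and the "dilemma" disappears, which is the point above: the specific model, not the math itself, carries the inevitability.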
Selective information dissemination, persuasion, and even disinformation are for sure the easiest ways to change the behavior of actors in the system. However, the most effective and durable way to "spread those lies" is for them to be true!
If you can build a technology which makes the real facts about those incentives different than what it was before, then that information will eventually spread itself.
For me, the canonical example is the story of the electric car:
All kinds of persuasive messaging, emotional appeals, moral arguments, and so on have been employed to convince people that it's better for the environment if they drive an electric car than a polluting, noisy, smelly, internal-combustion gas guzzling SUV. Through the 90s and early 2000s, this saw a small number of early adopters and environmentalists adopting niche products and hybrids for the reasons that were persuasive to them, while another slice of society decided to delete their catalytic converters and "roll coal" in their diesels for their own reasons, while the average consumer was still driving an ICE vehicle somewhere in the middle of the status quo.
Then lithium battery technology and solid-state inverter technology arrived in the 2010s and the Tesla Model S was just a better car - cheaper to drive, more torque, more responsive, quieter, simpler, lower maintenance - than anything the internal combustion engine legacy manufacturers could build. For the subset of people who can charge in their garage at home with cheap electricity, the shape of the game had changed, and it's been just a matter of time (admittedly a slow process, with a lot of resistance from various interests) before EVs were simply the better option.
Similarly, with modern semiconductor technology, solar and wind energy no longer require desperate pleas drawing on the limited political capital of environmental efforts; like hydro, they're simply superior to fossil fuel power plants in a lot of regions now. There are other, negative changes caused by technology too, aided by the fact that capitalist corporations will seek out profitable (not necessarily morally desirable) projects - in particular, LLMs are reshaping the world just because the technology exists.
Once you pull a new set of rules and incentives out of Pandora's box, game theory results in inevitable societal change.
Game theory makes a lot of simplifying assumptions. In the real world most decisions are made under constraints, and you typically lack a lot of information and can't dedicate enough resources to each question to find the optimal choice given the information you have. Game theory is incredibly useful, especially when talking about big, carefully thought out decisions, but it's far from a perfect description of reality.
It does because it's trying to get across the point that although the world seems impossibly complex it's not. Of course it is in fact _almost_ impossibly complex.
This doesn't mean that it's useless for more complex situations; it only means that to increase its accuracy you have to deepen the model.
Brains can and do make straight-up mistakes all the time. Like "there was a transmission error"-type mistakes. They can't be modeled or predicted, and so humans can never truly be rational actors.
Humans also make irrational decisions all the time based on gut feeling and instinct. Sometimes with reasons that a brain backfills, sometimes not.
People can and do act against their own self-interest all the time, and not for "oh, but they actually thought X" reasons. Brains make unexplainable mistakes. Have you ever walked into a room and forgotten what you went in there to do? That state isn't modelable with game theory, and it generalizes to every aspect of human behavior.
> It's also true that you, personally, are just one of those 8B+ people
Unless you communicate and coordinate!
It is not a physics theory that works regardless of our will.
Whether that word is reductionism is an exercise left to Chomsky?
They are at best an attempt to use our tools of reason and observation to predict nature, and you can point to thousands of examples, from market crashes to election outcomes, to observe how they can be flawed and fail to predict.
But don't forget: we discovered the Nash equilibrium, which changed how people behave in many interesting scenarios.
Also, from a purely Game Theoretic standpoint... All kinds of atrocities are justifiable, if it propagates your genes...
It is beyond question that game theory is highly useful and simplifies reality enough that we can mathematically model interactions between agents, but only under our underlying assumptions. And these assumptions need not be true. In fact, there are studies on how models like homo oeconomicus have led to a self-fulfilling reality by making people think in the ways given by the model, adjusting to the model rather than the model approximating us, as it ideally should. Hence, I don't think you can plainly frame this reality as a product of game theory.
You need to track everyone and everything on the internet because you did not want to cap your wealth at a reasonable price for the service. You are willing to live with accumulated sins because "it's not as bad as murder". The world we have today has way more to do with these things than anything else. We do not operate as a collective, and naturally, we don't get good outcomes for the collective.
Yes, the mathematicians will tell you it's "inevitable" that people will cheat and "enshittify". But if you take statistical samplings of the universe from an outsider's perspective, you would think it would be impossible for life to exist. Our whole existence is built on disregard for the inevitable.
Reducing humanity to a bunch of game-theory optimizing automatons will be a sure-fire way to fail The Great Filter, as nobody can possibly understand and mathematically articulate the larger games at stake that we haven't even discovered.
I thought this was the whole point of government IMO, to enact these kinds of guardrails via laws and regulations.
That's not how mathematics works. "it's just math therefore it's a true theory of everything" is silly.
We cannot forget that mathematics is all about models, models which, by definition, do not account for even remotely close to all the information involved in predicting what will actually occur in reality. Game Theory is a theory about a particular class of mathematical structures. You cannot reduce all of existence to just this class of structures, and if you think you can, you'd better be ready to write a thesis on it.
Couple that with the inherent unpredictability of human beings, and I'm sorry but your Laplacean dreams will be crushed.
The idea that "it's math so it's inevitable" is a fallacy. Even if you are a hardcore mathematical Platonist you should still recognize that mathematics is a kind of incomplete picture of the real, not its essence.
In fact, the various incompleteness theorems illustrate directly, in mathematics' own terms, that the idea that a mathematical perspective or any logical system could perfectly account for all of reality is doomed from the start.
* Actors have access to limited computation
* The "rules" of the universe are unknowable and changing
* Available sets of actions are unknowable
* Information is unknowable, continuous, incomplete, and changes based on the frame of reference
* Even the concept of an "Actor" is a leaky abstraction
There's a field of study called Agent-based Computational Economics which explores how systems of actors behaving according to sets of assumptions behave. In this field you can see a lot of behaviour that more closely resembles real world phenomena, but of course if those models are highly predictive they have a tendency to be kept secret and monetized.
So for practical purposes, "game theory is inevitable" is only a narrowly useful heuristic. It's certainly not a heuristic that supports technological determinism.
OP is 100% correct. Either you accept that the vast majority are mindless automatons (not hard to get on board with that, honestly, but it still seems an overestimate), or there's some kind of structural imbalance, an asymmetry that's actively harmful and not the passive outcome of 8B independent actors.
> Tiktok is not inevitable.
TikTok the app and company: not inevitable. Short-form video as a medium, and an algorithm that samples the entire catalog (vs. just followers), were inevitable. Short-form video follows the gradual escalation of most-engaging content formats, with a legacy stretching from short-form text on Twitter to short-form photos on Instagram and Snapchat. Global content discovery is a natural next experiment after the extended follow graph.
> NFTs were not inevitable.
Perhaps Bitcoin as proof-of-work productization was not inevitable (for a while), but once we got there, a lot of things were very much inevitable. Explosion of alternatives like with Litecoin, explosion of expressive features, reaching Turing-completeness with Ethereum, "tokens" once we got to Turing-completeness, and then "unique tokens" aka NFTs (but also colored coins in Bitcoin parlance before that). The cultural influence was less inevitable, massive scam and hype was also not inevitable... but to be fair, likely.
I could deconstruct more, but the broader point is: coordination is hard. All these can be done by anyone: anyone could have invented Ethereum-like system; anyone could have built a non-fungible standard over that. Inevitability comes from the lack of coordination: when anyone can push whatever future they want, a LOT of things become inevitable.
If you disavow short-form video as a medium altogether, something I'm strongly considering, then you can. It does mean you have to make sacrifices: for example, YouTube doesn't let you disable its short-form video feature, so Shorts are inevitable for people who choose not to drop YouTube. That is still a choice though, so it is not truly inevitable.
The larger point is that there are always people pushing some sort of future, sketching it as inevitable. But the reality is that there always remains a choice, even if that choice means you have to make sacrifices.
The author is annoyed at people throwing in the towel and declaring AI inevitable, when the author apparently still sees a path to not tolerating AI. Unfortunately the author doesn't really constructively show that path, so the whole article is basically a luddite complaint.
This is not a new thing. TV monetizes human attention. TikTok is just an evolution of TV. And TikTok comes from China, which has a very different society. If short-form algo-slop video can thrive in both liberal democracies and a heavily censored society like China, then it's probably somewhat inevitable.
The only way I can get to the "crypto is inevitable" take relies on the scams and fraud as the fundamental drivers. These things don't have any utility otherwise and no reason to exist outside of those.
Scams and fraud are such potent drivers that perhaps it was inevitable, but one could imagine a more competent regulatory regime that nipped this stuff in the bud.
nb: avoiding financial regulations and money laundering are forms of fraud
The idea of a cheap, universal, anonymous digital currency itself is old (e.g. eCash and Neuromancer in the '80s, Snow Crash and Cryptonomicon in the '90s).
It was inevitable that someone would try implementing it once the internet was widespread - especially as long as most banks are rent-seeking actors exploiting those relying on currency exchanges, as long as many national currencies are directly tied to failing political and economic systems, and as long as the un-banking and financial persecution of undesirables remains a threat.
Doing it in such an extremely decentralized way, with the whole proof-of-work shtick tacked on top, was not inevitable and arguably not a good way to do it; nor was the cancer that has grown on top of it all...
Imagine new coordination technology X. We can remove any specific tech reference to remove prior biases. Say it is a neutral technology that could enable new types of positive coordination as well as negative.
3 camps exist.
A: The grifters. They see the opportunity to exploit and individually gain.
B: The haters. They see the grifters and denigrate the technology entirely. Leaving no nuance or possibility for understanding the positive potential.
C: The believers. They see the grift and the positive opportunity. They try and steer the technology towards the positive and away from the negative.
The basic formula for where the technology ends up is -2(A)-(B) +C. It's a bit of a broad strokes brush but you can probably guess where to bin our current political parties into these negative categories. We need leadership which can identify and understand the positive outcomes and push us towards those directions. I see very little strength anywhere from the tech leaders to politicians to the social media mob to get us there. For that, we all suffer.
Lol. Permissionless payments certainly have utility. Making it harder for governments to freeze/seize your assets has utility. Buying stuff the government disallows, often illegitimately, has value. Currency that can't be inflated has value.
And outside of pure utility, they have tons of ideological reasons to exist beyond scams and fraud. Your inability to imagine, or dismissal of, those is telling as to your close-mindedness.
But further: the human condition has been developing for tens of thousands of years, and efforts to exploit it for a couple of thousand (at least). So can we really expect a technology that has been around for a fraction of that to escape all of the inevitable 'abuses' of it?
What we need to focus on is mitigation, not lament that people do what people do.
I doubt that. There is a reason the videos get longer again.
So people could have ignored the short form from the beginning. And wasn't the matching algorithm the real killer feature that amazed people, not the length of the videos?
Anecdotally, I hear lots of people talking about the short attention span of Zoomers and Gen Alpha (which they define as 2012+; I'd actually shift the generation boundary to 2017+ for the reasons I'm about to mention). I don't see that with my kid's 2nd-grade classmates: many of them walk around with their nose in a book and will finish whole novels. They're the first class after phonics was reintroduced in the 2023-2024 kindergarten year; every single kid knew how to read by the end of kindergarten. Basic fluency in skills like reading and math matters.
The regulatory, cultural, social, even educational factors surrounding these ideas are what could have made these not inevitable. But changes weren’t made, as there was no power strong enough to enact something meaningful.
However, tech people who think AI is bad, or not inevitable, are really hard for me to understand. It's almost like Bill Gates saying "we are not interested in the internet". This is pretty much being against the internet, industrialization, the printing press, or mobile phones. The idea that AI is anything less than paradigm-shifting, or even revolutionary, is weird to me. I can only say being against this is either self-interest or an inability to grasp it.
So if I produce something (art, a product, a game, a book) and it's good, and it's useful to you, fun to you, beautiful to you, and you cannot really determine whether it's AI: does it matter? How does it matter? Is it because they "stole" all the art in the world? Yet when a person is "influenced" by people, ideas, and art in a less efficient way, we somehow applaud that, because what else, reinvent the wheel again forever?
I totally agree with the message in the original post. Yes AI is going to be everywhere, and it's going to create amazing value and serious challenges, but it's essential to make it optional.
This is not only for the sake of users' freedom. This is essential for companies creating products.
This is Minority Report until it is not.
AI has many modes of failure, exploitability, and unpredictability. Some are known and many are not. We have fixes for some and band aids for some other, but many are not even known yet.
It is essential to make AI optional, to have a "dumb" alternative to everything delegated to a gen-AI.
These options should be given to users, but also, and maybe even more importantly, be baked into the product as an actively maintained and tested plan-b.
The general trend of cost-cutting will not be aligned with this. Many products will remove, intentionally or not, the non-AI paths, and when the AI fails (not if), they will regret this decision.
This is not a criticism of AI or of the shift in trends toward it; it's a warning for anyone who does not take seriously the fundamental unpredictability of generative AI.
Art is an expression of human emotion. When I hear music, I am part of that artist's journey and struggles. The emotion in their songs comes from their first break-up, an argument they had with someone they loved. I can understand that on a profound, shared level.
Way back, my friends and I played a lot of StarCraft. We only played cooperatively against the AI. Until one day a friend and I decided to play against each other. I can't put into words how intense that was. When we were done (we played in different rooms of the house), we got together and laughed. We both knew what the other had gone through. We both said "man, that was intense!".
I don't get that feeling from an amalgamation of all human thoughts/emotions/actions.
One death is a tragedy. A million deaths is a statistic.
Yet humans are the ones enacting an AI to make art (of some kind). Is it therefore not art because, even though a human initiated the process, the machine completed it?
If you argue that, then what about kinetic sculptures, what about pendulum painting, etc? The artist sets them in motion but the rest of the actions are carried out by something nonhuman.
And even in a fully autonomous sense; who are we to define art as being artefacts of human emotion? How typically human (tribalism). What's to say that an alien species doesn't exist, somewhere...out there. If that species produces something akin to art, but they never evolved the chemical reactions that we call emotions...I suppose it's not art by your definition?
And what if that alien species is not carbon-based? Is it then much of a stretch to call what an eventual AGI produces art?
My definition of art is a superposition: everything and nothing is art at the same time, because art is in the eye of the beholder. When I look up at the night sky, that's art, but no human emotion produced it.
At any rate, though there is some aversion to AI art for art's sake, the real aversion to AI art is that it squeezes one of the last viable options for people to become 'working artists' and funnels that extremely hard-earned profit into the hands of the conglomerates that have enough compute to train generative models. Is making a living through your art something that we would like to value and maintain as a society? I'd say so.
You could argue all these things are not art because they used technology, just like AI music or images... no? Where does the spectrum of "true art" begin and end?
I strongly suspect automatic content synthesis will have similar effect as people get their legs under how to use it, because I strongly suspect there are even more people out there with more ideas than time.
I hear the complaints about AI being "weird" or "gross" now and I think about the complaints about Newgrounds content back in the day.
It matters because the amount of influence something has on you is directly attributable to the amount of human effort put into it. When that effort is removed, so too is the influence. Influence does not exist independently of effort.
All the people yapping about LLMs keep fundamentally failing to grasp that concept. They think the output exists in a pure functional vacuum.
Memes only have impact in aggregate due to emergent properties in a Mcluhanian sense. An individual meme has little to no impact compared to (some) works of art.
Yes, it matters to me because art is something deeply human, and I don't want to consume art made by a machine.
It doesn't matter if it's fun and beautiful, it's just that I don't want to. It's like other things in life I try to avoid, like buying sneakers made by children, or sign-up to anything Meta-owned.
Asking a machine to draw a picture and then making no changes? It's still art. There was a human designing the original input. There was human intention.
And that's before they continue to use the AI tools to modify the art to better match their intention and vision.
To conflate LLMs with a printing press or the internet is dishonest; yes, it's a tool, but one which degrades society in its use.
To me, it matters because most serious art requires time and effort to study, ponder, and analyze.
The more stuff that exists in the world that superficially looks like art but is actually meaningless slop, the more likely it is that your time and effort is wasted on such empty nonsense.
If I look at a piece of art that was made by a human who earned money for making that art, then it means an actual real human out there was able to put food on their table.
If I look at a piece of "art" produced by a generative AI that was trained on billions of works from people in the previous paragraph, then I have wasted some electricity even further enriching a billionaire and encouraging a world where people don't have the time to make art.
I'm surprised that I so often find myself having to explain this to AI boosters: people have more value than computers.
If you throw a computer in a trash compactor, that's a trivial amount of e-waste. If you throw a living person in a trash compactor, that's a moral tragedy.
These are all real people.
They canonize themselves, and then act all shocked and offended when the rest of the world doesn't share their belief.
Obviously the existence of AI is valuable enough to pay the cost of offsetting a few artists' jobs, it's not even a question to us, but to artists it's shocking and offensive.
Actual AI? Sure. The LLM slop we currently refer to as AI? lol, lmao even
Modern tech is 100% about trying to coerce you: you need to buy X, you need to be outraged by X, you must change X in your life or else fall behind.
I really don't want any of this, I'm sick of it. Even if it's inevitable I have no positive feelings about the development, and no positive feelings about anyone or any company pushing it. I don't just mean AI. I mean any of this dumb trash that is constantly being pushed on everyone.
Well you don't, and no tech company can force you to.
> you must change X in your life or else fall behind
This is not forced on you by tech companies, but by the rest of society adopting that tech because they want to. Things change as technology advances. Your feeling of entitlement that you should not have to make any change that you don't want to is ridiculous.
Like greed. And apathy. Those are just some of the things that have enabled billionaires and trillionaires. Is it ever gonna change? Well it hasn't for millions of years, so no. As long as we remain human we'll always be assholes to each other.
So we're just waving away the carbon cost, centralization of power, privacy fallout, fraud amplification, and the erosion of trust in information? These are enormous society-level effects (and there are many more to list).
Dismissing AI criticism as simply ignorance says more about your own.
Yes there are dangers associated with AI, but as a society we can deal with them, just like we've managed to deal with microbiology research, nuclear power, and guns.
And not to be too dismissive of copywriters, but old Buzzfeed-style listicles are content as well. Stuff that people get paid pennies per word for, stuff that a huge number of people will bid on, on a gig job site like Fiverr or what have you, is content, stuff that people churn out by rote is content.
Creative writing on the other hand is not content. I won't call my shitposting on HN art, but it's not content either because I put (some) thought into it and am typing it out with my real hands. And I don't have someone telling me what I should write. Or paying me for it, for that matter.
Meanwhile, AI doesn't do anything on its own. It can be made to simulate doing stuff on its own (by running continuously / unlimited, or by feeding it a regular stream of prompts), but it won't suddenly go "I'm going to shitpost on HN today" unless told to.
"The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!"
I don’t see how we could politically undermine these systems, but we could all do more to contribute to open source workarounds.
We could contribute more to smart tv/e-reader/phone & tablet jailbreak ecosystems. We could contribute more to the fediverse projects. We could all contribute more to make Linux more user friendly.
For instance, we could forbid taxpayer money from being spent on proprietary software and on hardware that is insufficiently respectful of its user, and we could require that 50% of the money not spent on the now forbidden software now be spent on sponsorships of open source contributors whose work is likely to improve the quality of whatever open alternatives are relevant.
Getting Microsoft and Google out of education would be huge re: denormalizing accepting eulas and letting strangers host things you rely on.
France and Germany are investing in open source (https://chipp.in/news/france-and-germany-launch-docs-an-open...), though perhaps not as aggressively as I've proposed. Let's join them.
To better the analogy: I have a wood stove in my living room, and when it's exceptionally cold, I enjoy using it. I don't "enjoy" stacking wood in the fall, but I'm a lazy nerd, so I appreciate the exercise. That being said, my house has central heating via a modern heat pump, and I won't go back to using wood as my primary heat source. Burning wood is purely for pleasure, and an insurance policy in case of a power outage or malfunction.
What does this have to do with AI programming? I like to think that early central heating systems were unreliable, and often it was just easier to light a fire. But, it hasn't been like that in most of our lifetimes. I suspect that within a decade, AI programming will be "good enough" for most of what we do, and programming without it will be like burning wood: Something we do for pleasure, and something that we need to do for the occasional cases where AI doesn't work.
That's a good metaphor for the rapid growth of AI. It is driven by real needs from multiple directions. For it to become evitable, it would take coercion or the removal of multiple genuine motivators. People who think we can just say no must be getting a lot less value from it than I do day to day.
> For people with underlying heart disease, a 2017 study in the journal Environmental Research linked increased particulate air pollution from wood smoke and other sources to inflammation and clotting, which can predict heart attacks and other heart problems.
> A 2013 study in the journal Particle and Fibre Toxicology found exposure to wood smoke causes the arteries to become stiffer, which raises the risk of dangerous cardiac events. For pregnant women, a 2019 study in Environmental Research connected wood smoke exposure to a higher risk of hypertensive disorders of pregnancy, which include preeclampsia and gestational high blood pressure.
https://www.heart.org/en/news/2019/12/13/lovely-but-dangerou...
This is not a small thing for me. By burning wood instead of gas I gain a full week of groceries per month all year!
I acknowledge the risk of AI too, including human extinction. Weighing that, I still use it heavily. To stop me you'd have to compel me.
The risk involved in cutting down trees is probably greater than that of breathing in wood smoke. I'm no better at predicting which way a tree will fall than which horse will win.
I like the metaphor of burning wood, I also think it's going to be left for fun.
I do wonder who the AI era's version of Marx will be, what their version of the Communist Manifesto will say. IIRC, previous times this has been said on HN, someone pointed out Ted Kaczynski's manifesto.
* Policing and some pensions and democracy did exist in various fashions before the industrial revolution, but few today would recognise their earlier forms as good enough to deserve those names today.
I’m all for a good argument that appears to challenge the notion of technological determinism.
> Every choice is both a political statement and a tradeoff based on the energy we can spend on the consequences of that choice.
Frequently I’ve been opposed to this sort of sentiment. Maybe it’s me, the author’s argument, or a combination of both, but I’m beginning to better understand how this idea works. I think that the problem is that there are too many political statements to compare your own against these days and many of them are made implicit except among the most vocal and ostensibly informed.
I think this is a variant of "every action is normative of itself". Using AI states that use of AI is normal and acceptable. In the same way that for any X doing X states that X is normal and acceptable - even if accompanied by a counterstatement that this is an exception and should not set a precedent.
To clarify, I don't think pushing an ideology you believe in by posting a blog post is a bad thing. That's your right! I just think I have to read posts that feel like they have a very strong message with more caution. Maybe they have a strong message because they have a very good point - that's very possible! But oftentimes, I see people using this as a way to say "if you're not with me, you're against me".
My problem here is that this idea that "everything is political" leaves no room for a middle ground. Is my choice to write some boiler plate code using gen AI truly political? Is it political because of power usage and ongoing investment in gen AI?
All that to say, maybe I'm totally wrong, I don't know. I'm open to an argument against mine, because there's a very good chance I'm missing the point.
>Is my choice to write some boiler plate code using gen AI truly political?
I am much closer to agreeing with your take here, but as you recognise, there are lots of political aspects to your actions, even if they are not conscious. Not intentionally being political doesn't mean you are not making political choices; there are many more that your AI choice touches upon; privacy issues, wealth distribution, centralisation, etc etc. Of course these choices become limited by practicalities but they still exist.
But you do make a good point that those words are all potentially very loaded.
This is also my core reservation against the idea.
I think that the belief only holds weight in a society that is rife with opposing interpretations about how it ought to be managed. The claim itself feels like an attempt to force someone toward the interests of the one issuing it.
> Is my choice to write some boiler plate code using gen AI truly political? Is it political because of power usage and ongoing investment in gen AI?
Apparently yes it is. This is all determined by your impressions of generative AI and its environmental and economic impact. The problem is that most blog posts are signaling toward a predefined in-group either through familiarity with the author or by a preconceived belief about the subject where it's assumed that you should already know and agree with the author about these issues. And if you don't you're against them.
For example—I don’t agree that everything is inevitable. But I as I read the blog post in question I surmised that it’s an argument against the idea that human beings are not at the absolute will of technological progress. And I can agree with that much. So this influences how I interpret the claim “nothing is inevitable” in addition to the title of the post and in conjunction with the rest of the article (and this all is additionally informed by all the stuff I’m trying to express to you that surrounds this very paragraph).
I think that this speaks to the present problem of how “politics” is conflated to additionally refer to one’s worldview, culture, etc., in and of itself instead of something distinct but not necessarily inseparable from these things.
Politics ought to indicate toward a more comprehensive way of seeing the world but this isn’t the case for most people today and I suspect that many people who claim to have comprehensive convictions are only 'virtue signaling’.
A person with comprehensive convictions about the world and how humans ought to function in it can better delineate the differences and necessary overlap between politics and other concepts that run downstream from their beliefs. But what do people actually believe in these days? That they can summarize in a sentence or two and that can objectively/authoritatively delineate an “in-group” from an “out-group” and that informs all of their cultural, political, environmental and economic considerations, and so on...
Online discourse is being cleaved into two sides vying for digital capital over hot air. The worst position you can take is a critical one that satisfies neither opponent.
You should keep reading all blog posts with a critical eye toward the appeals embedded within the medium. Or don’t read them at all. Or read them less than you read material that affords you greater context than the emotional state the author was in when they wrote the post before going back to releasing software communiques.
Was wondering what the beef with this was until I realized author meant "companies that are garbage" and not "landfill operators using gas turbines to make power". The latter is something you probably would want.
There's such a thing as "multiple invention", precisely because of this. Because we all live in the same world, we have similar needs and we have similar tools available. So different people in different places keep trying to solve the same problems, build the same grounding for future inventions. Many people want to do stuff at night, so many people push at the problem of lighting. Edison's particular light bulb wasn't inevitable, but electric lighting was inevitable in some form.
So with regards to generative AI, many people worked in this field for a long time. I played with fractals and texture generators as a kid. Many people want for many reasons. Artwork is expensive. Artwork is sometimes too big. Or too fixed, maybe we want variation. There's many reasons to push at the problem, and it's not coordinated. I had a period where I was fiddling around with generating assets for Second Life way back because I found that personally interesting. And I'm sure I was not the only one by any means.
That's what I understand by "inevitable", that without any central planning or coordination many roads are being built to the same destination and eventually one will get there. If not one then one of the others.
This a million times. I honestly hate interacting with all software and 90% of the internet now. I don't care about your "U""X" front end garbage. I highly prefer text based sites like this
You ever see those "dementia simulator" videos where the camera spins around and suddenly all the grocery store aisles are different? That's what it must be like to be less tech literate.
I blame GUIs. They disempower users and put them at mercy of UX "experts" who just rearrange the deck chairs when they get bored and then tell themselves how important they are.
https://suno.com/song/797be726-c1b5-4a85-b14a-d67363cd90e9
However, I support ~80 non-technical users for whom that update was a huge benefit. They're familiar with iOS on their phones, so the new interface is (whaddya know) intuitive for them. (I get fewer support calls, so it's of indirect benefit to me, too.) I try to let go of my frustration by reminding myself that learning new technology is (literally) part of my job description, but it's not theirs.
That doesn't excuse all the "moving the deck chairs" changes - Tahoe re-design: why? - but I think Apple's broad philosophy of ignoring power users like us and aligning settings interfaces was broadly correct.
Funny story: when my family first got a Windows computer (3.1, so... 1992 or '93?) my first reaction was "this sucks. Why can't I just tell the computer what to do anymore?" But, obviously, GUIs are the only way the vast majority will ever be able to interact with a device - and, you know, there are lots of tasks for which a visual interface is objectively better. I'd appreciate better CLI access to MacOS settings: a one-liner that mirrors to the most recently-connected display would save me so much fumbling. Maybe that's AppleScript-able? If I can figure it out I'll share here.
There's some cognitive dissonance on display there that I'm actually finding it hard to wrap my head around.
Yeah, I absolutely did. Only I wrote the lyrics and AI augmented my skills by giving it a voice. I actually put significant effort into that one; I spent a couple hours tweaking it and increasing its cohesion and punchiness, iterating with ideas and feedback from various tools.
I used the computer like a bicycle for my mind, the way it was intended.
Computers are meant to be tools to expand our capabilities. You didn't do that. You replaced them. You didn't ride a bike, you called an Uber because you never learned to drive, or you were too lazy to do it for this use.
AI can augment skills by allowing for creative expressions - be it with AI stem separation, neural-network based distortion effects, etc. But the difference is those are tools to be used together with other tools to craft a thing. A tool can be fully automated - but then, if it is, you are no longer an artist. No more than someone that knows how to operate a CNC machine but not design the parts.
This is hard for some people to understand, especially those with an engineering or programming background, but there is a point to philosophy. Innate, valuable knowledge in how a thing was produced. If I find a stone arrow head buried under the dirt on land I know was once used for hunting by native Americans, that arrow head has intrinsic value to me because of its origin. Because I know it wasn't made as a replica and because I found it. There is a sliding scale, shades of gray here. An arrow head I had verified was actually old but which I did not find is still more valuable than one I know is a replica. Similarly, you can, I agree, slowly un-taint an AI work with enough input, but not fully. Similarly, if an digital artist painted something by hand then had StableDiffusion inpaint a small region as part of their process, that still bothers many, adds a taint of that tool to it because they did not take the time to do what the tool has done and mentally weigh each pixel and each line.
By using Suno, you're firmly in the "This was generated for me" side of that line for most people, certainly most musicians. That isn't riding a bike. That's not stretching your muscles or feeling the burn of the creative process. It's throwing a hundred dice, leaving the 6's up, and throwing again until they're all 6's. Sure, you have input, but I hardly see it as impressive. You're just a reverse centaur: https://doctorow.medium.com/https-pluralistic-net-2025-09-11...
Ads are one of the oldest and most fundamental parts of a modern society.
Mixing obviously dumb things in with fundamental ones doesn't improve the point.
Worth getting on your radar if this stuff is of interest: https://aria.org.uk/opportunity-spaces/collective-flourishin...
(full disclosure: I'm working with the programme director on helping define the funding programme, so if you're working on related problems, by all means share your thoughts on the site or by reaching out!)
Shout out to the Juicero example, because there are so many people out there showing that AI can be also "just squeeze the bag with your hands".
Do we really think LLMs and the generative AI craze would have not occurred if Sam Altman chose to stay at Y Combinator or otherwise got hit by a bus? People clearly like to interact with a seemingly smart digital agent, demonstrated as early as ELIZA in 1966 and SmarterChild in 2001.
My POV is that human beings have innate biases and preferences that tend to manifest in what we invent and adopt. I don't personally believe in a supernatural God but many people around the world do. Alcoholic beverages have been independently discovered in numerous cultures across the world over the centuries.
I think the best we can do is usually try to act according to our own values and nudge it in a direction we believe is best (both things OP is doing so this is not a dunk on them, just my take on their thoughts here).
None of the items is technically inevitable, but the world runs on capital, and capital alone. Tech advances are just a byproduct of capital snooping around trying to increase itself.
fMRI has always had folks highlighting how shaky the science is. It's not the strongest of experimental techniques.
People want things to be simpler, easier, frictionless.
Resistance to these things has a cost, and generally the ROI is not worth it for most people as a whole.
Nothing in real life is ideal; that's just reality.
The techies are a drop in the ocean. You may build a new tech or device, but adoption is driven by the crowd, who just drift along without a pinch of resistance.
While this reaction is understandable, it is difficult to feel sympathy when so few people are willing to invest the time and effort required to actually understand how these systems work and how they might be used defensively. Mastery, even partial, is one of the few genuine avenues toward agency. Choosing not to pursue it effectively guarantees dependence.
Ironically, pointing this out often invites accusations of being a Luddite or worse.
What is inevitable? The heat death of the universe. You probably don't need to worry about it much.
Everything else can change. If someone is proposing that a given technology is, "inevitable," it's a signal that we should think about what that technology does, what it's being used to do to people, and who profits from doing it to them.
Trivialities don't add anything to the discussion. The question is "Why?" and then "How do we change that?". Even incomplete or inaccurate attempts at answering would be far more valuable than a demonstration of hand-wringing powerlessness.
I do not think that the current philosophical world view will enable a different path. We've had resets or potential resets, COVID being a huge opportunity, but I think neither the public nor the political class had the strength to seize the moment.
We live in a world where we know the price of everything and the value of nothing. It will take dramatic change to put 'value' back where it belongs and relegate price farther down the ladder.
> But what is important to me is to keep the perspective of what constitutes a desirable future, and which actions get us closer or further from that.
Desirable to whom? I certainly don't think the status quo is perfect, but I do think dismissing it as purely the product of some faceless cadre of tech oligarchs' desires is arrogant. People do have agency, the author just doesn't like what they have chosen to do with it...
Narratives are funny because they can be completely true and a total lie.
There's now a repeated narrative about how the AI bubble is like the railroads and dotcom and therefore will end the same. Maybe. But that makes it seem inevitable. But those who have that story can't see anything else and might even cause that to happen, collectively.
We can frame things with stories and determine the outcomes by them. If enough people believe that story, it becomes inevitable. There are many ways to look at the same thing and many different types of stories we can tell - each story makes different things inevitable.
So I have a story I'd like to promote:
There were once these big companies that controlled computing. They had it locked down. Then came IBM clones and suddenly, the big monopolies couldn't keep up with innovation via the larger marketplaces that opened up with open hardware interfaces. And later, the internet was new and exciting - CompuServe and AOL were so obviously going to control the internet. But then open protocols and services won because how could they not? It was inevitable that a locked-down walled garden could not compete with the dynamism that open protocols allowed.
Obviously now, this time is no different. And, in fact, we're at an inflection point that looks a lot like those other times in computing that favored tiny upstarts that made lives better but didn't make monopoly-sized money. The LLMs will create new ways to compete (and have already) that big companies will be slow to follow. The costs of creating software will go down so that companies will have to compete on things that align with users' interests.
Users' agency will have to be restored. And open protocols will again win over closed for the same reasons they did before. Companies that try to compete with the old, cynical model will rapidly lose customers and will not be able to adapt. The money possible to be made in software will decline but users will have software in their interests. The AI megacorps have no moat - Chinese downloadable models are almost as good. People will again control their own data.
It's inevitable.
- McCabe (Kurt Russell), Vanilla Sky
AI exists -> vacation photos exist -> it's inevitable that someone was eventually going to use AI to enhance their vacation photos.
As one of those niche power users who runs servers at home to be beholden to fewer tech companies, I still understand that most people would choose Netflix over a free jellyfin server they have to administer.
> Not being in control of course makes people endlessly frustrated
I regret to inform you, OP, that this is not true. It's true for exactly the kind of tech people like us who are already doing this stuff, because it's why we do it. Your assumption that people who don't just "gave up", as opposed to actively choosing not to spend their time on managing their own tech environment, is I think biased by your predilection for technology.
I wholeheartedly share OP's dislike of techno-capitalism(derogatory), but OP's list is a mishmash of
1) technologies, which are almost never intrinsically bad, and 2) business choices, which usually are.
An Internet-connected bed isn't intrinsically bad; you could set one up yourself to track your sleep statistics that pushes the data to a server you control.
It's the companies and their choices to foist that technology on people in harmful ways that makes it bad.
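A minimal sketch of that self-hosted setup, for the curious. The endpoint URL and payload fields here are invented for illustration; a real bed would feed `stats` from its sensors:

```python
import json
import urllib.request
from datetime import date


def build_payload(stats: dict, day: str = "") -> bytes:
    """Serialize one night's sleep stats as JSON, tagged with a date."""
    return json.dumps({"date": day or date.today().isoformat(), **stats}).encode()


def report_sleep(stats: dict, endpoint: str) -> int:
    """POST the stats to a server you administer; return the HTTP status code."""
    req = urllib.request.Request(
        endpoint,
        data=build_payload(stats),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


# Hypothetical usage against a host you control, not a vendor cloud:
# report_sleep({"hours_asleep": 7.5, "resting_hr": 58}, "https://home.example/sleep")
```

Same sensors, same data; the only difference from the off-the-shelf product is who operates the server at the other end.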
This is the gripe I have with anti-AI absolutists: you can train AI models on data you own, to benefit your and other communities. And people are!
But companies are misusing the technology in service of the profit motive, at the expense of others whose data they're (sometimes even illegally) ingesting.
Place the blame in the appropriate place. Something something, hammers don't kill people.
I've been thinking a lot lately, challenging some of my long-held assumptions...
Big tech, the current AI trend, social media websites serving up rage bait and misinformation (not to imply this is all they do, or that they are ALL bad), the current political climate and culture...
In my view, all of these are symptoms, and the cause is the perverse, largely unchallenged neoliberal order the West has been living under for the last 30-40 years (at least).
Profit maximising comes before everything else. (Large) Corporate interests are almost never challenged. The result? Deliberately amoral public policy that serves the rich and powerful.
There are oases in this desert (which is, indeed, not inevitable), thankfully. As the author mentioned, there's FOSS. There's indie-created games/movies. There's everyday goodness between decent people.
The species as a whole will evolve inevitably; the individual animal may not.
I was hoping to find such a list within the article, i.e. which companies and products should we be supporting that are doing things 'the right way'?
But the inevitable is not a fact; it's a rigged fake that is, unfortunately, adopted by humans, who flock in such large groups echoing the same sentiments that for those people it seems real and inevitable.
Humans in general are extremely predictable, yet so predictable that they seem utterly stupid and imbecilic.
Imagine if the 80s and 90s had been PC vs Mac but you had to go to IBM for one or more critical pieces of software or software distribution infrastructure. The Cambrian explosion of IBM-PC compatibility didn't happen overnight, of course. I don't think it will be (or ought to be) inevitable that phones remain opaque and locked down forever, but the day when freedom finally comes doesn't really feel like it's just around the corner.
Posted, alas for now, from my iPhone
It happened because IBM by mistake allowed it to happen. Big tech companies nowadays are very proficient at not repeating those mistakes.
There's a recording of an interview with Bill Gates floating around where he pretty much takes credit for that. He claims (paraphrasing because I listened to it almost 20 years ago) that he suggested a lot of the hardware to IBM because he knew he could repurpose DOS for it.
We’re two decades into the smartphone era and my hope is that we’re still in the DEC / VAX / S370 stage, with the “IBM-PC” stage just around the corner still to come.
https://reactos.org/
https://elementary.io/
Inevitable and being a holdout are conceptually different and you can't expect society as a whole to care or respect your personal space with regards to it.
They listed smartphones as an example of a requirement. That is great, have fun with your flip phone, but that isn't for most people.
Just because you don't find something desirable doesn't mean you deserve extra attention or a special space. It also doesn't mean you can call people catering to the wants of the masses "grifters".
https://kk.org/books/the-inevitable
Besides the bigger issue is that this blog post offers no concrete way forward.
I'm pretty cynical, but one ray of hope is that AI-assisted coding tools have really brought down the skill requirement for doing some daunting programming tasks. E.g. in my case, I have long avoided doing much web or UI programming because there's just so much to learn and so many deep rabbit holes to go down. But with AI tools I can get off the ground in seconds or minutes and all that cruddy HTML/JavaScript/CSS with bazillions of APIs that I could go spend time studying and tinkering with have already been digested by the AI. It spits out some crap that does the thing I mostly want. ChatGPT 5+ is pretty good at navigating all the Web APIs so it was able to generate some WebAudio mini apps to start working with. The code looks like crap, so I hit it with a stick and get it to reorganize the code a little and write some comments, and then I can dive in and do the rest myself. It's a starting point, a prototype. It got me over the activation energy hump, and now I'm not so reluctant to actually try things out.
But like I said, I'm cynical. Right now the AI tools haven't been overly enshittified to the point they only serve their masters. Pretty soon they will be, and in ways we can't yet imagine.
Just like TikTok. The author doesn't think TikTok is inevitable, and I fully agree with them! But in our real timeline TikTok exists. So TikTok is, unquestionably, the present. Wide adoption of gen-AI is the present.
The rational choice is to act as if this was ensured to be the future. If it ends up not being the case, enough people will have made that mistake that your failure will be minuscule in the grand scheme of things, and if it's not and this is the future, you won't be left behind.
Sure beats sticking your head in the sand and most likely fucking up, or perhaps being right in the end, standing between the flames.
We are headed towards (or already in) corporate feudalism and I don't think anything can realistically be done about it. Not sure if this is nihilism or realism but the only real solution I see is on the individual level: make enough money that you don't have to really care about the downsides of the system (upper middle class).
So while I agree with you, I think I just disagree with the little bit you said about "can't expect anything to change without-" and would just say: can't expect anything to change except through the inertia of what already is in place.
However, AI is the future of programming, that’s for sure.
Ignore it as a programmer to make yourself irrelevant.
I don't get the reason for this one being in the list. Is it an abusive product in some way?
Can we change direction on how things are going? Yes, but you must understand what the "we" there means, at least in the context of a global change of direction.
What is the best way and how do we stop them?
“You take the blue pill, the story ends, you wake up in your bed and believe whatever you want to believe.”
These are all natural forces, they may be human forces but they are still natural forces. We can't stop them, we can only mitigate them -- and we _can_ mitigate them, but not if we just stick our fingers in our ears and pretend it's not going to happen.
It absolutely is. Whatever you can imagine. All of it.
A lot of history's turning points were much closer than we think.
https://en.wikipedia.org/wiki/Miracle_of_the_House_of_Brande...
It had a huge impact on world history: it indirectly led to German unification, it possibly led to both world wars in the form we know them, it probably affected the colonial wars and, as a result, the territory of many former colonies, and probably also their current populations (by determining where colonists came from and how many of them there were).
I'm fairly sure there were a few very close battles during the East India Company conquest of India, especially in the period when Robert Clive was in charge.
Another one for Germany: after Wilhelm I died at 90, his liberal son Frederick III died aged only 56, after a reign of just 99 days. So instead Germany got Wilhelm II as emperor, a conservative who wrecked all of Bismarck's successful foreign policies.
Oh, and Japan attacking Pearl Harbor/the US. If the Japanese Army faction had won the internal struggle and tried to attack the Soviets again in 1941, the USSR would probably have been toast, and the US would probably have intervened only slowly and indecisively.
I can't really remember many others right now, but every country and every continent has had moments like these. A lot of them are sheer bad luck but a good chunk are just miscalculation.
History is full of what-ifs, a lot of them with huge implications for the world.
Where's Japan getting the oil to fight USSR? The deposits are all too far east [1].
Even with the US out of the war, we were denying them steel/oil, but the US embargo is much less effective without a Pacific navy.
[1]: https://old.reddit.com/r/MapPorn/comments/s1cbj6/a_1960s_map...
Yea, I remember the time when trillion dollar companies were betting the house on Juicero /s
There are plenty of "technology" things which have come to pass, most notably weapons, which have been developed but are not allowed to be used to their fullest due to laws and social norms against harming others. These things are technology, and they would allow someone to attain wealth much more efficiently....
The pat retort is that they are regulated because society sees them as a threat.
Well, therein lies the disconnect: society isn't immutable, and it can come to those conclusions about other technologies tomorrow if it so chooses...
Oh my, the sheer number of philosophers, biologists, ethicists, and, for that matter, bacteria rotating in their graves.
Life might be a technology… many technologies are not only not-living, they’re mutually contradictory with life.