Reports of code's death are greatly exaggerated

(stevekrouse.com)

153 points | by stevekrouse 10 hours ago

27 comments

  • lateforwork 3 hours ago
    Chris Lattner, inventor of the Swift programming language, recently took a look at a compiler written entirely by Claude AI. Lattner found nothing innovative in the AI-generated code [1]. And this is why humans will be needed to advance the state of the art.

    AI tends to accept conventional wisdom. Because of this, it struggles with genuine critical thinking and cannot independently advance the state of the art.

    AI systems are trained on vast bodies of human work and generate answers near the center of existing thought. A human might occasionally step back and question conventional wisdom, but AI systems do not do this on their own. They align with consensus rather than challenge it. As a result, they cannot independently push knowledge forward. Humans can innovate with help from AI, but AI still requires human direction.

    You can prod AI systems to think critically, but they tend to revert to the mean. When a conversation moves away from consensus thinking, you can feel the system pulling back toward the safe middle.

    As Apple’s “Think Different” campaign in the late 90s put it: the people crazy enough to think they can change the world are the ones who do—the misfits, the rebels, the troublemakers, the round pegs in square holes, the ones who see things differently. AI is none of that. AI is a conformist. That is its strength, and that is its weakness.

    [1] https://www.modular.com/blog/the-claude-c-compiler-what-it-r...

    • conception 2 minutes ago
      So the problem with Chris’ take is: “This one for-fun project didn’t produce anything particularly interesting.”

      So, setting aside the fact that we have magic now that can just produce “conventional” compilers, take it to a Moore’s Law situation. Start 1,000 “create a compiler” projects; have each run at a different temperature to try new things, experiment, mutate. Collate, find new findings, reiterate: another 1,000 runs seeded with some of the novel findings. Assume this is effectively free to do.

      The stance that this - which can be done (albeit badly) today and will get better and/or cheaper - won’t produce new directions for software engineering seems entirely naive.
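      The mutate/collate/reiterate loop described above is essentially evolutionary search. A toy, runnable sketch of that loop (the bit-string task and all names here are illustrative stand-ins; a real run would swap in LLM calls at varying temperature and a novelty/correctness scorer):

```python
import random

random.seed(0)  # reproducibility for this sketch

def evolutionary_search(seed, mutate, score, population=200, generations=40):
    """Generic mutate/collate/reiterate loop: spawn many variants,
    keep only the ones that improve, reseed the next round with them."""
    pool = [seed]
    for _ in range(generations):
        candidates = [mutate(random.choice(pool)) for _ in range(population)]
        # Collate: keep only variants that beat the current best finding
        best_so_far = max(score(p) for p in pool)
        pool.extend(c for c in candidates if score(c) > best_so_far)
        pool.sort(key=score, reverse=True)
        pool = pool[:20]  # top findings seed the next generation
    return pool[0]

# Toy stand-in for "create a compiler": evolve a bit string toward all ones.
TARGET_LEN = 16

def flip_one(s):
    """Mutation: rewrite one random position with a random bit."""
    i = random.randrange(TARGET_LEN)
    return s[:i] + random.choice("01") + s[i + 1:]

best = evolutionary_search("0" * TARGET_LEN, flip_one, lambda s: s.count("1"))
```

      With 200 candidates per generation the search reliably climbs to (or very near) the all-ones string; the interesting question the comment raises is whether the same loop, with LLM generation as the mutation operator, would surface genuinely novel findings rather than just converging.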

    • g9yuayon 3 minutes ago
      > Lattner found nothing innovative in the code generated by AI

      I don't think the replacement is binary. Instead, it’s a spectrum. The real concern for many software engineers is whether AI reduces demand enough to leave the field oversupplied. And that should be a question of economics: are we going to have enough new business problems to solve? If we do, AI will help us but will not replace us. If not, well, we are going to do a lot of bike-shedding work anyway, which means many of us will lose our jobs, with or without AI.

    • js8 24 minutes ago
      > AI tends to accept conventional wisdom. Because of this, it struggles with genuine critical thinking and cannot independently advance the state of the art.

      Of course! But that's what makes them so powerful. In 99% of cases that's what you want - something that is conventional.

      The AI can come up with novel things if it has agency and can learn on its own (using e.g. RL). But we don't want that in most use cases, because it's unpredictable; we want a tool instead.

      It's not true that this lack of creativity implies lack of intelligence or critical thinking. AI clearly can reason and be critical, if asked to do so.

      Conceptually, the breakthrough of AI systems (especially in coding, but it's to some extent true in other disciplines) is that they have an ability to take a fuzzy and potentially conflicting idea, and clean up the contradictions by producing a working, albeit conventional, implementation, by finding less contradictory pieces from the training data. The strength lies in intuition of what contradictions to remove. (You can think of it as an error-correcting code for human thoughts.)

      For example, if I ask AI to "draw seven red lines, perpendicular, in blue ink, some of them transparent", it can find some solution that removes the contradictions from these constraints, or ask clarifying questions about the domain, so it can decide which contradictory statements to drop.

      I actually put it to Claude and it gave a beautiful answer:

      "I appreciate the creativity, but I'm afraid this request contains a few geometric (and chromatic) impossibilities: [..]

      So, to faithfully fulfill this request, I would have to draw zero lines — which is roughly the only honest answer.

      This is, of course, a nod to the classic comedy sketch by Vihart / the "Seven Red Lines" bit, where a consultant hilariously agrees to deliver exactly this impossible specification. The joke is a perfect satire of how clients sometimes request things that are logically or physically nonsensical, and how people sometimes just... agree to do it anyway.

      Would you like me to draw something actually drawable instead? "

      This clearly shows that AI can think critically and reason.

    • coldtea 8 minutes ago
      >Chris Lattner, inventor of the Swift programming language recently took a look at a compiler entirely written by Claude AI. Lattner found nothing innovative in the code generated by AI [1]. And this is why humans will be needed to advance the state of the art.

      "Needed to advance the state of the art" and actually deployed to do so are two different things. More likely either AI will learn to advance the state of the art itself, or the state of the art wont be advancing much anymore...

    • thesz 2 hours ago

        > ...generate answers near the center of existing thought.
      
      This is addressed right in Wikipedia's article on the universal approximation theorem [1].

      [1] https://en.wikipedia.org/wiki/Universal_approximation_theore...

      "n the field of machine learning, the universal approximation theorems (UATs) state that neural networks with a certain structure can, in principle, approximate any continuous function to any desired degree of accuracy. These theorems provide a mathematical justification for using neural networks, assuring researchers that a sufficiently large or deep network can model the complex, non-linear relationships often found in real-world data."

      And then: "Notice also that the neural network is only required to approximate within a compact set K {\displaystyle K}. The proof does not describe how the function would be extrapolated outside of the region."

      NNs, LLMs included, are interpolators, not extrapolators.

      And the region NN approximates within can be quite complex and not easily defined as "X:R^N drawn from N(c,s)^N" as SolidGoldMagiKarp [2] clearly shows.

      [2] https://github.com/NiluK/SolidGoldMagikarp

      • fasterik 1 hour ago
        It has been proven that recurrent neural networks are Turing complete [0]. So for every computable function, there is a neural network that computes it. That doesn't say anything about size or efficiency, but in principle this allows neural networks to simulate a wide range of intelligent and creative behavior, including the kind of extrapolation you're talking about.

        [0] https://www.sciencedirect.com/science/article/pii/S002200008...

        • gmueckl 15 minutes ago
          Turing completeness is not associated with creativity or intelligence in any straightforward manner. One cannot unconditionally imply the other.
    • omega3 16 minutes ago
      > And this is why humans will be needed to advance the state of the art.

      What percentage of developers advance the state of the art, what percentage of juniors advance the state of the art?

    • slopinthebag 2 hours ago
      Yeah I think he had a pretty sane take in that article:

      >CCC shows that AI systems can internalize the textbook knowledge of a field and apply it coherently at scale. AI can now reliably operate within established engineering practice. This is a genuine milestone that removes much of the drudgery of repetition and allows engineers to start closer to the state of the art.

      And also

      > The most effective engineers will not compete with AI at producing code, but will learn to collaborate with it, by using AI to explore ideas faster, iterate more broadly, and focus human effort on direction and design. Lower barriers to implementation do not reduce the importance of engineers; instead, they elevate the importance of vision, judgment, and taste. When creation becomes easier, deciding what is worth creating becomes the harder problem. AI accelerates execution, but meaning, direction, and responsibility remain fundamentally human.

    • Animats 2 hours ago
      I think this article was on HN a few days ago.
    • thunky 25 minutes ago
      > Claude AI. Lattner found nothing innovative in the code generated by AI [1]. And this is why humans will be needed to advance the state of the art

      And yet the AI probably did better than 99% of human devs would have done in a fraction of the time.

      • sheeshkebab 12 minutes ago
        Human devs rarely need to create compilers. Those that do would do a much better job.

        What’s your point again?

    • peehole 1 hour ago
      LLMs still do forEach; it’s like wearing Tommy Hilfiger
      • coldtea 5 minutes ago
        That's one of the benefits of LLMs: they don't care for bullshit fashion
    • random3 40 minutes ago
      So AI won't surpass humans, because Chris Lattner can do better than a model that didn't exist two years ago?
    • bigstrat2003 1 hour ago
      > Chris Lattner, inventor of the Swift programming language recently took a look at a compiler entirely written by Claude AI. Lattner found nothing innovative in the code generated by AI [1].

      Well, of course. Despite people applying the label of AI to them, LLMs don't have a shred of intelligence. That is inherent to how they work. They don't understand, only synthesize from the data they were trained on.

      • coldtea 2 minutes ago
        >Despite people applying the label of AI to them, LLMs don't have a shred of intelligence. That is inherent to how they work. They don't understand, only synthesize from the data they were trained on

        People also "synthesize from the data they were trained on". Intelligence is a result of that. So this dead-end argument then turns into begging the question: LLMs don't have intelligence because LLMs can't have intelligence.

      • lateforwork 1 hour ago
        > don't have a shred of intelligence. ... They don't understand, only synthesize from the data they were trained on.

        Couldn't you say that about 99% of humans too?

        • chongli 51 minutes ago
          99% of humans in a particular specialization, sure. It's the 1% who become experts in that specialization who are able to advance the state of the art. But it's a different 1% for every area of expertise! Add it all up and you get a lot more than 1% of humans contributing to the sum of knowledge.

          And of course, if you don't limit yourself to "advancing the state of the art at the far frontiers of human knowledge" but allow for ordinary people to make everyday contributions in their daily lives, you get even more. Sure, much of this knowledge may not be widespread (it may be locked up within private institutions) but its impact can still be felt throughout the economy.

          • coldtea 0 minutes ago
            >99% of humans in a particular specialization, sure. It's the 1% who become experts in that specialization who are able to advance the state of the art

            How? By also "synthesizing the data they were trained on" (their experience, education, memories, etc.).

        • NewsaHackO 1 hour ago
          Yes, and the natural extension is that a lot of what people do day to day is not driven by intelligence; it is just reusing a known solution to a presented problem in a bespoke manner. However, this is something that AI excels at.
        • irishcoffee 1 hour ago
          The LLM was trained on 100% of humans, the 99% you’re scoffing at is feeding the LLM answers.
          • lateforwork 1 hour ago
            100% (or close to it) of material AI trains on was human generated, but that doesn't mean 100% of humans are generating useful material for AI training.
        • jryan49 1 hour ago
          Yes... maybe not 99%...
      • antonvs 1 hour ago
        You could say the same thing about Chris Lattner. How did he advance the state of the art with Swift? It’s essentially just a subjective rearranging of deck chairs: “I like this but not that.” Someone had to explain to Lattner why it was a good idea to support tail recursion in LLVM, for example - something he would have already known if he had been trained differently. He regurgitates his training just like most of us do.

        That might read like an insult to Lattner, but what I’m really pointing out is that we tend to hold AIs to a much higher standard than we do humans, because the real goal of such commentary is to attempt to dismiss a perceived competitive threat.

  • pacman128 4 hours ago
    In a chat bot coding world, how do we ever progress to new technologies? The AI has been trained on numerous people's previous work. If there is no prior art, for say a new language or framework, the AI models will struggle. How will the vast amounts of new training data they require ever be generated if there is not a critical mass of developers?
    • justonceokay 3 hours ago
      Most art forms do not have a wildly changing landscape of materials and mediums. In software we are seeing things slow down in terms of tooling changes because the value provided by computers is becoming more clear and less reliant on specific technologies.

      I figure that all this AI coding might free us from NIH syndrome and reinventing relational databases for the 10th time, etc.

      • sd9 3 hours ago
        LLMs are very much NIH machines
        • ffsm8 3 hours ago
          I'd go one step further: they're going to turbocharge NIH syndrome and treat every code file as a separate "here"
          • j_bum 2 hours ago
            For others like me who know “NIH” to be “National Institutes of Health”…

            “NIH” here refers to “Not Invented Here” Syndrome, or a bias against things developed externally.

      • realusername 3 hours ago
        The bar to create the new X framework has just been lowered so I expect the opposite, even more churn.
    • kstrauser 3 hours ago
      That’s factually untrue. I’m using models to work on frameworks with nearly zero preexisting examples to train on, doing things no one’s ever done with them, and I know this because I know the ecosystem around these young frameworks.

      Models can RTFM (and code) and do novel things, demonstrably so.

      • allthetime 3 hours ago
        Yeah. I work with bleeding-edge Zig. If you just ask Claude to write you a working TCP server with the new Io API, it doesn’t have any idea what it’s doing and the code doesn’t compile. But if you give it some minimal code examples, point it to the recent blog posts about it, and paste in relevant parts of std, it does incredibly well and produces code that it has not been trained on.
    • derrak 3 hours ago
      Maybe you’re right about modern LLMs. But you seem to be making an unstated assumption: “there is something special about humans that allow them to create new things and computers don’t have this thing.”

      Maybe you can’t teach current LLM-backed systems new tricks. But do we have reason to believe that no AI system can synthesize novel technologies? What reason do you have to believe humans are special in this regard?

      • adamiscool8 3 hours ago
        After thousands of years of research we still don’t fully understand how humans do it, so what reason (besides a sort of naked techno-optimism) is there to believe we will ever be able to replicate the behavior in machines?
        • derrak 3 hours ago
          The Church-Turing thesis comes to mind. It would at least suggest that humans aren’t capable of doing anything computationally beyond what can be instantiated in software and hardware.

          But sure, instantiating these capabilities in hardware and software are beyond our current abilities. It seems likely that it is possible though, even if we don’t know how to do it yet.

          • sophrosyne42 3 hours ago
            The church turing thesis is about following well-defined rules. It is not about the system that creates or decides to follow or not follow such rules. Such a system (the human mind) must exist for rules to be followed, yet that system must be outside mere rule-following since it embodies a function which does not exist in rule-following itself, e.g., the faculty of deciding what rules are to be followed.
            • derrak 2 hours ago
              We can keep our discussion about Church-Turing here if you want.

              I will argue that the following capacities: 1. creating rules and 2. deciding to follow rules (or not) are themselves controlled by rules.

        • ben_w 3 hours ago
          That humans come in various degrees of competence at this rather than an, ahem, boolean have/don't have; plus how we can already do a bad approximation of it, in a field whose rapid improvements hint that there is still a lot of low-hanging fruit, is a reason for techno-optimism.
        • stravant 2 hours ago
          Thousands of years?

          We've only had the tech to be able to research this in some technical depth for a few decades (both scale of computation and genetics / imaging techniques).

          • thesz 2 hours ago
            And then we discover that DNA in cells (not only brain cells) is an ideal quantum computer, that DNA's reactions generate coherent light (as in lasers) used to communicate between cells, and that a single dendrite of a cerebral cortex neuron can compute at the very least an XOR function, which requires at least 9 coefficients and one hidden layer. Neurons have from one or two up to dozens of thousands of dendrites.

            Even skin cells exchange information in a neuron-like manner, including using light, albeit thousands of times slower.

            This switches the complexity of the human brain to "86 billion quantum computers operating thousands of small neural networks, exchanging information over laser-based optical channels."
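            The XOR figure is the classic minimal-network example: no single linear threshold unit can compute XOR, but one hidden layer with nine coefficients can. A minimal sketch, with weights hand-picked for illustration:

```python
def xor_net(a, b):
    """2-2-1 threshold network computing XOR. Counting parameters:
    4 hidden weights + 2 hidden biases + 2 output weights + 1 output
    bias = 9 coefficients, matching the figure cited above."""
    h_or = int(1 * a + 1 * b - 0.5 > 0)        # hidden unit 1: a OR b
    h_and = int(1 * a + 1 * b - 1.5 > 0)       # hidden unit 2: a AND b
    return int(1 * h_or - 1 * h_and - 0.5 > 0)  # output: OR but not AND

# XOR truth table holds for all four inputs
for a in (0, 1):
    for b in (0, 1):
        assert xor_net(a, b) == (a ^ b)
```

            Dropping the hidden layer makes the task unsolvable: XOR's truth table is not linearly separable, which is why the hidden units (and the extra coefficients they bring) are required.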

      • analog31 1 hour ago
        >>> But do we have reason to believe that no AI system can synthesize novel technologies

        We don’t even know if they want to. But in general, it’s impossible to conclusively prove that something won’t ever happen in the future.

      • sophrosyne42 3 hours ago
        It's not an assumption; it is a fact about how computers function today. LLMs interpolate, they do not extrapolate. Nobody has shown a method to get them to extrapolate. The insistence to the contrary involves an unstated assumption that technological progress towards human-like intelligence is in principle possible. In reality, we do not know.
        • derrak 2 hours ago
          As long as agnosticism is the attitude, that’s fine. But we shouldn’t let mythology about human intelligence/computational capacity stop us from making progress toward that end.

          > unstated assumption that technological progress towards human-like intelligence is in principle possible. In reality, we do not know.

          For me this isn’t an assumption, it’s a corollary that follows from the Church-Turing thesis.

      • somewhereoutth 1 hour ago
        In the grand scale of things, a computer is not much more than a fancy brick. Certainly it is much closer to a brick than to a human. So the question is more 'why should this particularly fancy brick have abilities that so far we have only encountered in humans?'
        • djeastm 0 minutes ago
          > fancy brick

          If we're going to be reductionist we can just call humans "meat sacks" and flip the question around entirely.

        • derrak 27 minutes ago
          > Certainly it is much closer to a brick than to a human.

          I disagree with this premise. A computer approximates a Turing Machine, which puts it far above a brick.

      • danaris 3 hours ago
        That's irrelevant.

        The claim being made is not "no computer will ever be able to adapt to and assist us with new technologies as they come out."

        The claim being made is "modern LLMs cannot adapt to and assist us with new technologies until there is a large corpus of training data for those technologies."

        Today, there exists no AI or similar system that can do what is being described. There is also no credible way forward from what we have to such a system.

        Until and unless that changes, either humans are special in this way, or it doesn't matter whether humans are special in this way, depending on how you prefer to look at it.

        • derrak 3 hours ago
          Note that I prefaced my comment by saying the parent might be right about LLMs.

          > That's irrelevant.

          My comment was relevant, if a bit tangential.

          Edit: I also want to say that our attitude toward machine vs. human intelligence does matter today because we’re going to kneecap ourselves if we incorrectly believe there is something special about humans. It will stop us from closing that gap.

    • jedberg 3 hours ago
      People are doing this now. It's basically what skills.sh and its ilk are for -- to teach AIs how to do new things.

      For example, my company makes a new framework, and we have a skill we can point an agent at. Using that skill, it can one-shot fairly complicated code using our framework.

      The skill itself is pretty much just the documentation and some code examples.

      • majormajor 2 hours ago
        Isn't the "skill" just stuff that gets put into the context? Usually with a level of indirection like "look at this file in this situation"?

        How long can you keep adding novel things into the start of every session's context and get good performance, before it loses track of which parts of that context are relevant to what tasks?

        IMO for working on large codebases sticking to "what the out of the box training does" is going to scale better for larger amounts of business logic than creating ever-more not-in-model-training context that has to be bootstrapped on every task. Every "here's an example to think about" is taking away from space that could be used by "here is the specific code I want modified."

        The sort of framework you mention in a different reply - "No, it was created by our team of engineers over the last three years based on years of previous PhD research." - is likely a bit special, if you gain a lot of expressibility for the up-front cost, but this is very much not the common situation for in-house framework development, and could likely get even more rare over time with current trends.

        • jedberg 2 hours ago
          > Isn't the "skill" just stuff that gets put into the context? Usually with a level of indirection like "look at this file in this situation"?

          Today, yes. I assume in the future it will be integrated differently, maybe we'll have JIT fine-tuning. This is where the innovation for the foundation model providers will come in -- figuring out how to quickly add new knowledge to the model.

          Or maybe we'll have lots of small fine tuned models. But the point is, we have ways today to "teach" models about new things. Those ways will get better. Just like we have ways to teach humans new things, and we get better at that too.

          A human seeing a new programming language still has to apply previous knowledge of other programming languages to the problem before they can really understand it. We're making LLMs do the same thing.

      • andrei_says_ 3 hours ago
        The question is, who made the new framework? Was it vibe coded by someone who does not understand its code?
        • jedberg 3 hours ago
          No, it was created by our team of engineers over the last three years based on years of previous PhD research.
      • NewJazz 3 hours ago
        A framework is different than a paradigm shift or new language.
        • jedberg 3 hours ago
          Yes and no. How does a human learn a new language? They use their previous experience and the documentation to learn it. Oftentimes the way someone learns a new language is they take something in an old language and rewrite it.

          LLMs are really good at doing that. Arguably better than humans at RTFM and then applying what's there.

          • NewJazz 2 hours ago
            And LLMs will get retrained eventually. So writing one good spec and a great harness (or multiple) might be enough, eventually.
    • fritzo 2 hours ago
      The same could be asked about people. The answer is social intelligence.
    • lifis 3 hours ago
      You can have the LLM itself generate it based on the documentation, just like a human early adopter would
    • pklausler 3 hours ago
      This would also mean that we should design new programming languages out of sight of LLMs in case we need to hide code from them.
    • CamperBob2 3 hours ago
      In a chat bot coding world, how do we ever progress to new technologies?

      Funny, I'd say the same thing about traditional programming.

      Someone from K&R's group at Bell Labs, straight out of 1972, would have no problem recognizing my day-to-day workflow. I fire up a text editor, edit some C code, compile it, and run it. Lather, rinse, repeat, all by hand.

      That's not OK. That's not the way this industry was ever supposed to evolve, doing the same old things the same old way for 50+ years. It's time for a real paradigm shift, and that's what we're seeing now.

      All of the code that will ever need to be written already has been. It just needs to be refactored, reorganized, and repurposed, and that's a robot's job if there ever was one.

      • badc0ffee 3 hours ago
        You're probably using an IDE that checks your syntax as you type, highlighting keywords and surfacing compiler warnings and errors in real time. Autocomplete fills out structs for you. You can hover to get the definition of a type or a function prototype, or you can click and dig in to the implementation. You have multiple files open, multiple projects, even.

        Not to mention you're probably also using source control, committing code and switching between branches. You have unit tests and CI.

        Let's not pretend the C developer experience is what it was 30 years ago, let alone 50.

        • CamperBob2 3 hours ago
          I disagree that any of those things are even slightly material to the topic. It's like saying my car is fundamentally different from a 1972 model because it has ABS, airbags, and a satnav.

          Reply due to rate limiting:

          K&R didn't know about CI/CD, but everything else you mention has either existed for over 30 years or is too trivial to argue about.

          Conversely, if you took Claude Code or similar tools back to 1996, they would grab a crucifix and scream for an exorcist.

          • badc0ffee 3 hours ago
            You said C developers are doing things the "same old way" as always.

            I think you're taking for granted the massive productivity boost that happened even before today's era of LLM agents.

      • sophrosyne42 2 hours ago
        If all problems were solved, we should already have found a paradise with nothing left to want for. Your editing workflow being similar to another from a 1970s-era language has no relevance to that question.
        • CamperBob2 1 hour ago
          If all problems were solved

          Now that's extrapolation of the sort that, as you point out elsewhere, no LLM can perform.

          At least, not one without serious bugs.

      • bitwize 2 hours ago
        We were almost there, back in the 80s.

        A vice president at Symbolics, the Lisp machine company at their peak during the first AI hype cycle, once stated that it was the company's goal to put very large enterprise systems within the reach of small teams to develop, and anything smaller within the reach of a single person.

        And had we learned the lessons of Lisp, we could have done it. But we live in the worst timeline where we offset the work saved with ever worse processes and abstractions. Hell, to your point, we've added static edit-compile-run cycles to dynamic, somewhat Lisp-like languages (JavaScript)! And today we cry out "Save us, O machines! Save us from the slop we produced that threatens to make software development a near-impossible, frustrating, expensive process!" And the machines answer our cry by generating more slop.

      • rustystump 3 hours ago
        While i dont disagree with the larger point here i do disagree that all the code we ever need has been written. There are still soooooo many new things to uncover in that domain.
    • danielbln 3 hours ago
      Inject the prior art into the (ever-increasing) context window, let in-context learning do its thing, and go?
    • charcircuit 3 hours ago
      You can just have AI generate its own synthetic data to train AI with, if you want knowledge about how to use it to be in the model itself.
  • picafrost 3 hours ago
    So much of society's intellectual talent has been allocated toward software. Many of our smartest are working on ad-tech, surveillance, or squeezing as much attention out of our neighbors as possible.

    Maybe the current allocation of technical talent is a market failure and disruption to coding could be a forcing function for reallocation.

    • oblio 3 hours ago
      Those are business goals that don't just go away because tech changes.
      • fritzo 2 hours ago
        Those business goals will soon realize they need more electricity. More brains will be devoted to power generation.
      • picafrost 3 hours ago
        Of course. But LLMs may remove the need for top talent to be working on them.
      • tossandthrow 1 hour ago
        Likely, but through regulation, not AI
  • randcraw 1 hour ago
    Krouse points to a great article by Simon Willison, who proposes that the killer role for vibe coding (hopefully) will be to make code better, not just faster.

    By generating prototypes based on different design models, each end product can be assessed for specific criteria like code readability, reliability, or fault tolerance, and then quickly be revised, repeatedly, to serve those ends better. No longer would the victory dance of vibe coding be simply "It ran!" or "Look how quickly I built it!".

  • bluGill 1 hour ago
    A week ago there was an article about Donald Knuth asking an AI to prove something then unproven, and it found the proof. I suppose it is possible that the great Knuth didn't know how to find this existing truth, but there is a reason we all doubted it (including me when I mentioned it there).

    I have never written a C compiler, yet I would bet money that if you paid me to write one (it would take a few years at least) it wouldn't have any innovations, as the space is already well covered. Where mine differed from other compilers would more likely be a case of me doing something stupid that someone who knows how to write a compiler wouldn't.

    • coffeefirst 55 minutes ago
      So I would like to know how it found the proof. Because it’s much more likely to have been plucked from an obscure record where the author didn’t realize this was special than to have been estimated on the fly.

      This makes LLMs incredibly powerful research tools, which can create the illusion of emergent capabilities.

    • lateforwork 1 hour ago
      > as the space is already well covered

      The US patent commissioner in 1899 wanted to shut down the patent office because "everything that can be invented has been invented." And yet, human ingenuity keeps proving otherwise.

      • bluGill 1 hour ago
        There are lots of small innovations left. Only a few patents have ever been for revolutions. Small innovations add up to big things.
      • appletrotter 1 hour ago
        This is apocryphal :(
    • 3836293648 1 hour ago
      You could probably do it in a few days, C is not that hard to compile
      • bluGill 1 hour ago
        Claude built an optimizer as well (not a great one); that takes a lot more. Yes, I could likely brute-force a C compiler that works much faster than that.
      • lateforwork 1 hour ago
        Right, and that was a design goal of C language... to be close to the machine.
  • ljlolel 20 minutes ago
    Code will be replaced by EnglishScript running on ClaudeVM https://jperla.com/blog/the-future-is-claudevm
  • flitzofolov 3 hours ago
    r0ml's third law states that: “Any distributed system based on exchanging data will be replaced by a system based on exchanging programs.”

    I believe the same pattern is inevitable for these higher level abstractions and interfaces to generate computer instructions. The language use must ultimately conform to a rigid syntax, and produce a deterministic result, a.k.a. "code".

    Source: https://www.youtube.com/watch?v=h5fmhYc4U-Y

  • idopmstuff 4 hours ago
    I don't know that people are saying code is dead (or at least the ones who have even a vague understanding of AI's role) - more that humans are moving up a level of abstraction in their inputs. Rather than writing code, they can write specs in English and have AI write the code, much in the same way that humans moved from writing assembly to writing higher-level code.

    But of course writing code directly will always maintain the benefit of specificity. If you want to write instructions to a computer that are completely unambiguous, code will always be more useful than English. There are probably a lot of cases where you could write an instruction unambiguously in English, but it'd end up being much longer because English is much less precise than any coding language.
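
    A toy illustration (my own, not from the thread) of how a short English instruction hides decisions that code makes explicit: even "sort the users by name" leaves case sensitivity, tie-breaking, and direction unstated, while a one-liner pins all of them down.

```python
# Hypothetical example: "sort the users by name" in English leaves open
# case sensitivity, the ordering of ties, and ascending vs. descending.
# The code commits to one answer for each of those choices.
users = [{"name": "alice"}, {"name": "Bob"}, {"name": "carol"}]

# Case-insensitive, ascending, stable sort (Python's sort is stable).
sorted_users = sorted(users, key=lambda u: u["name"].lower())

print([u["name"] for u in sorted_users])  # ['alice', 'Bob', 'carol']
```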

    I think we'll see the same in photo and video editing as AI gets better at that. If I need to make a change to a photo, I'll be able to ask a computer, and it'll be able to do it. But if I need the change to be pixel-perfect, it'll be much more efficient to just do it in Photoshop than to describe the change in English.

    But much like with photo editing, there'll be a lot of cases where you just don't need a high enough level of specificity to use a coding language. I build tools for myself using AI, and as long as they do what I expect them to do, they're fine. Code's probably not the best, but that just doesn't matter for my case.

    (There are of course also issues of code quality, tech debt, etc., but I think that as AI gets better and better over the next few years, it'll be able to write reliable, secure, production-grade code better than humans anyway.)

    • cactusplant7374 3 hours ago
      > But of course writing code directly will always maintain the benefit of specificity. If you want to write instructions to a computer that are completely unambiguous, code will always be more useful than English.

      Unless the defect rate for humans is greater than LLMs at some point. A lot of claims are being made about hallucinations that seem to ignore that all software is extremely buggy. I can't use my phone without encountering a few bugs every day.

      • idopmstuff 3 hours ago
        Yeah, I don't really accept the argument that AI makes mistakes and therefore cannot be trusted to write production code (in general, at least - obviously depends on the types of mistakes, which code, etc.).

        The reality is we have built complex organizational structures around the fact that humans also make mistakes, and there's no real reason you can't use the same structures for AI. You have someone write the code, then someone does code review, then someone QAs it.

        Even after it goes out to production, you have a customer support team and a process for them to file bug tickets. You have customer success managers to smooth over the relationships when things go wrong. In really bad cases, you've got the CEO getting on a plane to go take the important customer out for drinks.

        I've worked at startups that made a conscious decision to choose speed of development over quality. Whether or not it was the right decision is arguable, but the reality is they did so knowing that meant customers would encounter bugs. A couple of those startups are valuable at multiple billions of dollars now. Bugs just aren't the end of the world (again, most cases - I worked on B2B SaaS, not medical devices or what have you).

        • Fishkins 3 hours ago
          > humans also make mistakes

          This is broadly true, but not comparable when you get into any detail. The mistakes current frontier models make are more frequent, more confident, less predictable, and much less consistent than mistakes from any human I'd work with.

          IME, all of the QA measures you mention are more difficult and less reliable than understanding things properly and writing correct code from the beginning. For critical production systems, mediocre code has significant negative value to me compared to a fresh start.

          There are plenty of net-positive uses for AI. Throwaway prototyping, certain boilerplate migration tasks, or anything that you can easily add automated deterministic checks for that fully covers all of the behavior you care about. Most production systems are complicated enough that those QA techniques are insufficient to determine the code has the properties you need.

          • bdangubic 3 hours ago
            > The mistakes current frontier models make are more frequent, more confident, less predictable, and much less consistent than mistakes from any human I'd work with.

            my experience is literally 180 degrees from this statement. And you don't normally get to choose the humans you work with; you may be involved in some interview processes, but that doesn't tell you much. I have seen so much human-written code in my career that, in the right hands, I'll take (especially latest frontier) LLM-written code over average human code any day of the week and twice on Sunday

        • bigstrat2003 1 hour ago
          Humans also make mistakes, but unlike LLMs, they are capable of learning from their mistakes and will not repeat them once they have learned. That inability to learn, not the capacity to make mistakes, is why you should not let LLMs do things.
      • bryanrasmussen 3 hours ago
        most human bugs are caused by failures in reasoning though, not by just making something up to leap to the conclusion considered most probable, so I'm not sure the comparison makes sense.
        • wiseowise 3 hours ago
          > most human bugs are caused by failures in reasoning though

          Citation needed.

          • bryanrasmussen 3 hours ago
            sorry, that is just taken from my experience, and perhaps I am considering reasoning to be a broader category than others might.

            To be lenient, I will separate out bugs caused by insufficient knowledge as not being failures in reasoning. Do you have forms of bugs that you think are more common, are not arguably failures in reasoning, and should be considered?

            on edit: insufficient knowledge that I might not expect a competent developer to have is not a failure in reasoning, but a bug caused by insufficient knowledge that I would expect a competent developer in the problem space to have is a failure in reasoning, in my opinion on things.

  • deadbabe 4 hours ago
    My problem is that while I know “code” isn’t going away, everyone seems to believe it is, and that’s influencing how we work.

    I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.

    • oooyay 4 hours ago
      > I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.

      You let them play out. Shift-left was similar to this and ultimately ended in part disaster, part non-accomplishment, and part success. Some percentage of the industry walked away from shift-left greatly more capable than the rest, a larger chunk left the industry entirely, and some people never changed. The same thing will likely happen here. We'll learn a lot of lessons, the Overton window will shift, the world will be different, and it will move on. We'll have new problems and topics to deal with as AI and how to use it shifts away from being a primary topic.

      • noident 3 hours ago
        Shift-left was a disaster? A large number of my day-to-day problems at work could be described as failing to shift left, even in the face of overwhelmingly obvious benefits.
      • oblio 3 hours ago
        Shift left?

        Edit: I've googled it and I can't find anything relevant. I've been working in software for 20+ years and read a myriad things and it's the first time I hear about it...

    • noelsusman 1 hour ago
      Well you're trying to convince them to reject their actual experience. Better tooling and better models have indeed solved a lot of the limitations models faced a couple years ago.

      I also believe coding isn't going to disappear, but AI skeptics have been mostly doing a combination of moving the goalposts and straight up denial over the last few years.

      • jcranmer 7 minutes ago
        I've been trying out AI over the past month (mostly because of management trying to force it down my throat), and have not found it to be terribly conducive to actually helping me on most tasks. It still evidences a lot of the failure modes I was talking about 3 years ago. And yet the entire time, it's the AI boosters who keep trying to say that any skepticism is invalid because it's totally different than how it was three months ago.

        I haven't seen a lot of goalpost moving on either side; the closest I've seen is from the most hyperbolic of AI supporters, who are keeping the timeline to supposed AGI or AI superintelligence or whatnot a fairly consistent X months from now (which isn't really goalpost-moving).

    • stalfie 2 hours ago
      Well, to be fair, judging by the shift in the general vibes of the average HN comment over the past 3 years, better use of agents and advanced models DID solve the previous temporary setbacks. The techno-optimists were right, and the nay-sayers wrong.

      Over the course of about 2 years, the general consensus has shifted from "it's a fun curiosity" to "it's just better stackoverflow" to "some people say it's good" to "well it can do some of my job, but not most of it". I think for a lot of people, it has already crossed into "it can do most of my job, but not all of it" territory.

      So unless we have finally reached the mythical plateau, if you just go by the trend, in about a year most people will be in the "it can do most of my job but not all" territory, and a year or two after that most people will be facing a tool that can do anything they can do. And perhaps if you factor in optimisation strategies like the Karpathy loop, a tool that can do everything but better.

      Upper management might be proven right.

      • dwaltrip 1 hour ago
        If self-driving is any indication, it may take 10+ years to go from 90% to 95%.
    • idopmstuff 3 hours ago
      As a former PM, I will say that if you want to stop something from happening at your company, the best route is to come off very positive about it initially. This is critical because it gives you credibility. After my first few years of PMing, I developed a reflex that any time I heard a deeply stupid proposal, I would enthusiastically ask if I could take the lead on scoping it out.

      I would do the initial research/planning/etc. mostly honestly and fairly. I'd find the positives, build a real roadmap and lead meetings where I'd work to get people onboard.

      Then I'd find the fatal flaw. "Even though I'm very excited about this, as you know, dear leadership, I have to be realistic that in order to do this, we'd need many more resources than the initial plan because of these devastating unexpected things I have discovered! Drat!"

      I would then propose options. Usually three, which are: Continue with the full scope but expand the resources (knowing full well that the additional resources required cannot be spared), drastically cut scope and proceed, or shelve it until some specific thing changes. You want to give the specific thing because that makes them feel like there's a good, concrete reason to wait and you're not just punting for vague, hand-wavy reasons.

      Then the thing that we were waiting on happens, and I forget to mention it. Leadership's excited about something else by that point anyway, so we never revisit dumb project again.

      Some specific thoughts for you:

      1. Treat their arguments seriously. If they're handwaving your arguments away, don't respond by handwaving their arguments away, even if you think they're dumb. Even if they don't fully grasp what they're talking about, you can at least concede that agents and models will improve and that will help with some issues in the future.

      2. Having conceded that, they're now more likely to listen to you when you tell them that while it's definitely important to think about a future where agents are better, you've got to deal with the codebase right now.

      3. Put the problems in terms they'll understand. They see the agent that wrote this feature really quickly, which is good. You need to pull up the tickets that the senior developers on the team had to spend time on to fix the code that the agent wrote. Give the tradeoff - what new features were those developers not working on because they were spending time here?

      4. This all works better if you can position yourself as the AI expert. I'd try to pitch a project of creating internal evals for the stuff that matters in your org to try with new models when they come out. If you've volunteered to take something like that on and can give them the honest take that GPT-5.5 is good at X but terrible at Y, they're probably going to listen to that much more than if they feel like you're reflexively against AI.

      • pixl97 3 hours ago
        It's even better when you guide them into finding the fatal flaw for themselves.
        • idopmstuff 3 hours ago
          Hahaha yes, this is absolutely true, but oftentimes so much more work.
      • two_tasty 1 hour ago
        Very well said. So many engineers balk at "coming off as positive" as a form of lying or as a pointless social ritual, but it's the only thing that gets you a seat at the table. Engineers who say "no" or "that's stupid" are never seen as leaders by management, even if they're right. The approach you laid out here is how you have _real_ impact as an engineering leader, because you keep getting a seat at the table to steer what actually happens.
    • cratermoon 3 hours ago
      To an extent, these people have found their religion, and rational discussion does not come into play. As with previous tech Holy Wars over operating systems, editors, and programming languages, their self-image is tied to the technology.

      Where the tech argument doesn't apply to upper management, business practices, the need to "not be left behind" and leap at anything that promises reducing headcount without reducing revenue, money talks. As long as it's possible to slop something together, charge for it, and profit, slop will win.

  • woeirua 2 hours ago
    The argument here seems to be “you need AGI to write good code. Good code is required for… reasons. AGI is far away. Therefore code is not dead.”

    First, I disagree that good code is required in any sense. We have decades of experience proving that bad code can be wildly successful.

    Second, has the author not seen the METR plot? We went from "LLMs can write a function" to "agents can write working compilers" in less than a year. Anyone who thinks AGI is far away deserves to be blindsided.

    • anematode 2 hours ago
      I agree in principle, but the compiler is a terrible example given the amount of scaffolding afforded to the LLMs: literally hundreds of thousands of test cases covering all kinds of esoteric corners.

      Also (and this is coming from someone who thinks it's quite close) "AGI" is not implied by the ability to implement very-long-horizon software tasks. That's not "general" at all.

    • stevekrouse 1 hour ago
      That's not my argument at all! Though I can see why you took that away; my bad for not making my argument clearer.

      I believe that even when we have AGI, code will still be super valuable because it'll be how we get precise abstractions into human heads, which is necessary for humans to be able to bring informed opinions to bear.

  • erichocean 4 hours ago
    > If you know of any other snippet of code that can master all that complexity as beautifully, I'd love to see it.

    Electric Clojure: https://electric.hyperfiddle.net/fiddle/electric-tutorial.tw...

    • stevekrouse 3 hours ago
      Sick!!! Great example! I'm actually a longtime friend and angel investor in Dustin but I hadn't seen this
  • _pdp_ 1 hour ago
    Remember Deep Thought, the greatest computer ever built that spent 7.5 million years computing the Answer to the Ultimate Question of Life, the Universe, and Everything? The answer was 42, perfectly correct, utterly useless because nobody understood the question they were asking.

    That's what happens when you hand everything to a machine without understanding the problem yourself.

    AI can give you correct answers all day long, but if you don't understand what you're building, you'll end up just like the people of Magrathea, staring at 42 and wondering what to do with it.

    True understanding is indistinguishable from doing.

    • bitwize 10 minutes ago
      Well, yes, but AI can also give you wildly incorrect answers with alarming frequency.

      I know, I know, "skill issue"/"you're holding it wrong". And maybe that's vacuously true, in that it's so hard to guess what will produce correct output, because LLMs are not an abstraction layer in the way that we're used to. Prior abstraction layers related input to output via a transparent homomorphism: the output produced for an input was knowable and relatively straightforward (even with exotic optimization flags). LLMs are not like that. Your input disappears into a maze of twisty little matmuls, all alike (a different maze per run, for the same input!) and you can't relate what comes out the other end in terms of the input except in terms of "vibes". So to get a particular output, you just have to guess how to prompt it, and it is not very helpful if you guess wrong except in providing a wrong (often very subtly so) response!

      Back in the day, I had a very primitive, rinky-dink computer—a VIC-20. The VIC-20 came with one of the best "intro to programming" guides a kid could ask for. Regarding error messages it said something like this: "If your VIC-20 tells you something like ?SYNTAX ERROR, don't worry. You haven't broken it. Your VIC-20 is trying to help you correct your mistakes." 8-bit 6502 at 1 MHz. 5 KiB of RAM. And still more helpful than a frontier model when it comes to getting your shit right.

  • gedy 3 hours ago
    When I started my professional life in the 90s, we used Visual J++ (Java) and remember all this damn code it generated to do UIs...

    I remember being aghast at all the incomprehensible code and "do not modify" comments - and also at some of the devs who were like "isn't this great?".

    I remember bailing out asap to another company where we wrote Java Swing, and I was so happy we could write UIs directly with a lot less code to understand. I'm feeling the same vibe these days with the "isn't it great?". Not really!

    • justonceokay 3 hours ago
      You just brought me back to my first internship, where we interns were asked to hand-edit a 30k-line auto-generated SOAP API definition because we lost the license to the software that generated it
    • drzaiusx11 3 hours ago
      Oh the memories, but at least that generated code was deterministic...
  • rvz 10 hours ago
    From "code" to "no-code" to "vibe coding" and back to "code".

    What you are seeing here is that many attempted to take shortcuts to building production-grade maintainable software with AI and are now realizing that they built their software on terrible architecture, only to throw it away and rewrite it, with no one truly understanding the code or able to explain it.

    We have a term for that already and it is called "comprehension debt". [0]

    With the rise of over-reliance on agents, you will see "engineers" unable to explain technical decisions who will admit to having zero knowledge of what the agent has done.

    This is exactly what is happening to engineers at AWS, with Kiro causing outages [1] and engineers now being required to manually review AI changes [2] (which slows them down even with AI).

    [0] https://addyosmani.com/blog/comprehension-debt/

    [1] https://www.theguardian.com/technology/2026/feb/20/amazon-cl...

    [2] https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f77...

    • suzzer99 4 hours ago
      > With the rise of over-reliance of agents, you will see "engineers" unable to explain technical decisions and will admit to having zero knowledge of what the agent has done.

      I've had to work on multiple legacy systems like this where the original devs are long gone, there's no documentation, and everyone at the company admits it's complete mess. They send you off with a sympathetic, "Good luck, just do the best you can!"

      I call it "throwing dye in the water." It's the opposite of fun programming.

      On the other hand, it often takes creativity and general cleverness to get the app to do what you want with minimally-invasive code changes. So it should be the hardest for AI.

    • Insanity 4 hours ago
      While I agree with everything you said, Amazon’s problems aren’t just Kiro messing up. It’s a brain drain due to layoffs, and then people quitting because of the continuous layoff culture.

      While publicly they might say this is AI driven, I think that’s mostly BS.

      Anyway, that doesn’t take away from your point, just adds additional context to the outages.

    • zer00eyz 4 hours ago
      > We have a term for that already and it is called "comprehension debt".

      This isn't any different than the "person who wrote it already doesn't work here any more".

      > now requiring engineers to manually review AI changes [2] (which slows them down even with AI).

      What does this say about the "code review" process if people can't understand the things they didn't write?

      Maybe we have had the wrong hiring criteria. The "leet code", brain-teaser (FAANG-style) write-some-code interview might not have been the best filter for the sorts of people you need working in your org today.

      Reading code, tooling up (debuggers, profilers), and durable testing (simulation, not unit tests) are the skill shifts that NO ONE is talking about and that we have not been honing or hiring for.

      No one is talking about requirements, problem scoping, how you rationalize and think about building things.

      No one is talking about how your choice of dev environment is going to impact all of the above processes.

      I see a lot of hype, and a lot of hate, but not a lot of the pragmatic middle.

      • xienze 4 hours ago
        > This isn't any different than the "person who wrote it already doesn't work here any more".

        Yeah but that takes years to play out. Now developers are cranking out thousands of lines of “he doesn’t work here anymore” code every day.

        • zer00eyz 3 hours ago
          > Yeah but that takes years to play out.

          https://www.invene.com/blog/limiting-developer-turnover has some data, that aligns with my own experience putting the average at 2 years.

          I have been doing this a long time: my longest-running piece of code lasted 20 years. My current is 10. Most of my code is long dead and replaced because businesses evolve, close, move on. A lot of my code was NEVER meant to be permanent. It solved a problem in a moment, it accomplished a task, fit for purpose and disposable (and riddled with cursing, manual loops, and goofy exceptions just to get the job done).

          Meanwhile I have seen a LOT of god awful code written by humans. Business running on things that are SO BAD that I still have shell shock that they ever worked.

          AI is just a tool. It's going from hammers to nail guns. The people involved are still the ones who are ultimately accountable.

  • rglover 3 hours ago
    It's only dead to those who are ignorant of what it takes to build and run real systems that don't tip over all the time (or leak data, embroil you in extortion, etc.). That will piss some people off, but it's worth considering if you don't want to perma-railroad yourself long-term. Many seem to be so blinded by the glitz, glamour, and dollar signs that they don't realize they're actively destroying their future prospects/reputation by getting all emo about a non-deterministic printer.

    Valuable? Yep. World changing? Absolutely. The domain of people who haven't the slightest clue what they're doing? Not unless you enjoy lighting money on fire.

    • derrak 3 hours ago
      > non-deterministic printer.

      I interpret non-deterministic here as “an LLM will not produce the same output on the same input.” This is a) not true and b) not actually a problem.

      a) LLMs are functions and appearances otherwise are due to how we use them

      b) lots of traditional technologies which have none of the problems of LLMs are non-deterministic. E.g., symbolic non-deterministic algorithms.

      Non-determinism isn’t the problem with LLMs. The problem is that there is no formal relationship between the input and output.
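
      Point (a) can be made concrete with a toy sketch (my own illustration, using a made-up three-token "model", not a real LLM API): the model maps an input to a next-token distribution deterministically; the apparent randomness lives entirely in the decoding step.

```python
import math
import random

# Toy "model": a pure function from prompt to next-token probabilities.
def next_token_probs(prompt):
    logits = {"cat": 2.0, "dog": 1.0, "fish": 0.5}  # fixed for a given prompt
    z = sum(math.exp(v) for v in logits.values())
    return {tok: math.exp(v) / z for tok, v in logits.items()}

# Greedy decoding (temperature 0): always the argmax, fully deterministic.
def greedy(prompt):
    return max(next_token_probs(prompt).items(), key=lambda kv: kv[1])[0]

# Sampling: the randomness is in the decoder, not the model, and even it
# is reproducible once you fix the seed.
def sample(prompt, seed):
    rng = random.Random(seed)
    r, acc = rng.random(), 0.0
    for tok, p in next_token_probs(prompt).items():
        acc += p
        if r <= acc:
            return tok
    return tok  # guard against floating-point rounding

print(greedy("hello"))                                       # cat
print(sample("hello", seed=42) == sample("hello", seed=42))  # True
```

      (Hosted inference adds further sources of variation, like batching and floating-point reduction order, but those too are properties of how the model is run, not of the function itself.)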

  • soumyaskartha 4 hours ago
    Every few years something is going to kill code and here we are. The job changes, it does not disappear.
    • suzzer99 4 hours ago
      For future greenfield projects, I can see a world where the only jobs are spec-writer and test-writer, with maybe one grumpy expert coder (aka code janitor) who occasionally has to go into the code to figure out super gnarly issues.
      • cratermoon 3 hours ago
        A good spec-writer, as the article notes, is writing code.
      • drzaiusx11 3 hours ago
      This is already happening; many days I am that grumpy "code janitor" yelling at the damn kids to improve their slop after shit blows up in prod. I can tell you it's not "fun", but hopefully we'll converge on a scalable review system eventually that doesn't rely on a few "olds" to clean up. GenAI systems produce a lot of "mostly ok" code that has subtle issues you only catch with some experience.

        Maybe I should just retire a few years early and go back to fixing cars...

        • suzzer99 3 hours ago
          Yeah I imagine it has to be utterly thankless being the code janitor right now when all the hype around AI is peaking. You're basically just the grumpy troll slowing things down. And God forbid you introduce a regression bug trying to clean up some AI slop code.

          Maybe in the future us olds will get more credit when apps fall over and the higher ups realize they actually need a high-powered cleaner/fixer, like the Wolf in Pulp Fiction.

          • allthetime 3 hours ago
            I’ve got a “I haven’t written a line of code in one year” buddy whose startup is gaining traction and contracts. He’s rewritten the whole stack twice already after hitting performance issues and is now hiring cheap juniors to clean up the things he generates. It is all relatively well defined CRUD that he’s just slapped a bunch of JS libs on top of that works well enough to sell, but I’m curious to see the long term effects of these decisions.

            Meanwhile I’m moving at about half the speed with a more hands-on approach (still using the bots, obviously), but my code quality and output are miles ahead of where I was last year, without sacrificing maintainability and performance for dev speed

      • hrmtst93837 1 hour ago
        [dead]
  • cratermoon 2 hours ago
    I can't tell if the author's "when we get AGI" is sarcasm or genuine.
  • cratermoon 3 hours ago
    Yet again we can pull out Edsger W.Dijkstra's 1978 article, "On the foolishness of "natural language programming""

    "In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. this would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt."

    • stevekrouse 1 hour ago
      Such a perfect quote! Thank you! Will add it to my collection
    • woeirua 2 hours ago
      Dijkstra wasn’t a god. He’s going to be wrong on this one.
      • bigstrat2003 1 hour ago
        He's not wrong. People are just drinking the AI kool-aid too hard to realize that the emperor has no clothes.
        • esafak 24 minutes ago
          How come it works with humans? Give seasoned engineers a spec and they'll create a working product. Many software companies are created and guided on the verbal directions of people who don't code.
    • bitwize 2 hours ago
      Dijkstra also mockingly described software engineering as "the doomed discipline" because its goal was to determine "how to program if you cannot".

      "How to program if you cannot" has been solved now.

  • aplomb1026 4 hours ago
    [dead]
  • Plutarco_ink 3 hours ago
    [dead]
  • jee599 2 hours ago
    [dead]
  • lucasay 4 hours ago
    [dead]
  • developic 10 hours ago
    What is this
  • pjmlp 1 hour ago
    This is coping. With tools like Boomi, n8n, Langflow, and similar, plenty of tasks can already be automated with a bit of configuration, and that's it.
  • neversupervised 1 hour ago
    The author’s intuition is still backward-calibrated, even though he talks about the future. He doesn’t have an intuition for the future. All code will be AI-generated. There’s no way to compete with the AI. And whatever new downsides this brings will be solved in ways we aren’t fully anticipating. But the solution is not to walk back vibecoding. You have to be blind not to believe that most code will be vibecoded very soon.
    • lionkor 1 hour ago
      You have to be incredibly incompetent and naive to look at the absolute garbage theatre that AI outputs today and go "yeah, this will write all future code".

      Usually the response, for the last years, has been "no no you don't get it, it'll get so much better" and then they make the context window slightly larger and make it run python code to do math.

      What will really happen is that you and people like you will let Claude or some other commerical product write code, which it then owns. The second Claude becomes more expensive, you will pay, because all your tooling, your "prompts saved in commits" etc. will not work the same with whatever other AI offer.

      You've just reinvented vendor lock in, or "highly paid consultant code", on a whole new level.

    • general_reveal 1 hour ago
      Well, one thing I'll say is... if for whatever reason we have an electrical issue, or just general chip scarcity, then the programmers with experience will be the ones who can bail out society. Just saying. Especially because kids today won't really learn coding. It's a FAFO situation. Stay sharp!
  • lionkor 1 hour ago
    To all the vibe coders:

    When you let an LLM author code, it takes ownership of that code (in the engineering sense).

    When you're done spending millions on tokens, years of development, prompt fine-tuning, and model fine-tuning, and have handed the AI vendor the fattest wad of cash ever seen, you know what the vendor will do?

    You have no migration path. Your Codex prompts don't work the same in Claude. All the prompts you developed and saved in commits, all the (probably proprietary) memory the AI vendor saved on their servers to lock you in even more — all of it is worthless without the vendor.

    You are inventing "ah heck, we need to pay the consultant another 300 bucks an hour to take a look at this, because nobody else owns this code", but supercharged.

    You're locking yourself in, to a single vendor, to such a degree that they can just hold your code hostage.

    Now sure, OpenAI would NEVER do this, because they're all just doing good for humanity. Sure. What if they go out of business? Or discontinue the model that works for you, and the new ones just don't quite respond the same to your company's well established workflows?

    • thi2 3 minutes ago
      Isn't it the same as adding dependencies, hosting on Azure/AWS, or choosing a NoSQL db?
    • threethirtytwo 57 minutes ago
      I was locked into apple chips, amd chips and intel chips long ago. Everyone is already locked into one of these companies.

      The fact of reality is that the technology is so complex that only for-profit, centralized powers can really create these things. Linux and open source were a fluke, and even then open source developers need closed source jobs to pay for their time doing open source.

      We are locked in, and this is the future. Accept it or deny it: one is reality, the other is delusion. The world is transforming into vibe coding whether you like it or not. Accept reality.

      If you love programming, if you care for the craft, if programming is a form of artistry for you, if programming is your identity and status symbol — then know that under current trends, all of that is going into the trash. Better rebuild a new identity quick.

      A lot of the delusional excuse scaffolding people build around themselves to protect their identity amounts to saying "the hard part of software wasn't really programming" — which is kind of stupid, because AI covers the hard part too. But this excuse is more viable than "AI is useless slop".

    • echelon 50 minutes ago
      > When you're done spending millions on tokens, years of development, prompt fine tuning, model fine tuning, and made the AI vendor the fattest wad of cash ever seen, you know what the vendor will do?

      They'll hire the person who knows AI, not the human clinging to claims of artisanal, character-by-character code.

      That is until we're all replaced by full agentic engineering. And that's coming.