The peril of laziness lost

(bcantrill.dtrace.org)

239 points | by gpm 3 hours ago

22 comments

  • abcde666777 1 minute ago
    Being a somewhat lazy individual myself, I'm wary of this statement. It feels too... comforting. "It's okay that I wasn't productive today, because laziness has merits".

    I consider my laziness a part of who I am, and I don't demonize it, but I also don't consider it my ally - to get the things I care about done I often have to actively push against it.

  • btrettel 2 hours ago
    Similar to bragging about LOC, I have noticed in my own field of computational fluid dynamics that some vibe coders brag about how large or rigorous their test suites are. The problem is that whenever I look more closely, the tests are unremarkable and less rigorous than my own manually created tests. There are often big gaps in vibe-coded tests. I don't care if you have 1 million tests. 1 million easy tests, or 1 million tests that don't cover the right parts of the code, aren't worth much.
    • CJefferson 2 minutes ago
      Yes, I've found tests are the one thing I need to write. I then also need to be sure to keep 'git diff'ing the tests, to make sure Claude doesn't decide to 'fix' the tests when its code doesn't work.

      When I am rigorous about the tests, Claude has done an amazing job implementing some tricky algorithms from some difficult academic papers, saving me time overall, but it does require more babysitting than I would like.

    • colechristensen 1 hour ago
      It's a struggle to get LLMs to generate tests that aren't entirely stupid.

      Like grepping source code for a string, or assert(1==1, true).

      You have to have a curated list of every kind of test not to write or you get hundreds of pointless-at-best tests.
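A caricature of the difference, as a sketch (the `slugify` function and all of these test cases are invented for illustration, not taken from any real suite):

```python
import unittest

def slugify(title: str) -> str:
    """Toy function under test: trim, lowercase, dash-separate."""
    return title.strip().lower().replace(" ", "-")

class VacuousTests(unittest.TestCase):
    def test_truth(self):
        # The pointless-at-best kind: always passes, exercises
        # nothing in the code under test.
        self.assertEqual(1, 1)

class MeaningfulTests(unittest.TestCase):
    def test_strips_and_lowercases(self):
        # Pins down actual behavior on a concrete input.
        self.assertEqual(slugify("  Hello World "), "hello-world")

    def test_idempotent(self):
        # A property: slugifying twice equals slugifying once.
        once = slugify("Lazy Evaluation")
        self.assertEqual(slugify(once), once)
```

A coverage tool will happily count both classes the same way, which is part of why raw test counts say so little.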

      • btrettel 30 minutes ago
        What I've observed in computational fluid dynamics is that LLMs seem to grab common validation cases used often in the literature, regardless of the relevance to the problem at hand. "Lid-driven cavity" cases were used by the two vibe coded simulators I commented on at r/cfd, for instance. I never liked the lid-driven cavity problem because it rarely ever resembles an actual use case. A way better validation case would be an experiment on the same type of problem the user intends to solve. I think the lid-driven cavity problem is often picked in the literature because the geometry is easy to set up, not because it's relevant or particularly challenging. I don't know if this problem is due to vibe coders not actually having a particular use case in mind or LLMs overemphasizing what's common.

        LLMs seem to also avoid checking the math of the simulator. In CFD, this is called verification. The comparisons are almost exclusively against experiments (validation), but it's possible for a model to be implemented incorrectly and for calibration of the model to hide that fact. It's common to check the order-of-accuracy of the numerical scheme to test whether it was implemented correctly, but I haven't seen any vibe coders do that. (LLMs definitely know about that procedure as I've asked multiple LLMs about it before. It's not an obscure procedure.)
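For readers outside CFD: the order-of-accuracy check the comment refers to can be sketched in a few lines. Assuming the discretization error scales as E(h) ≈ C·h^p, errors measured on two grids give the observed order p; the error values below are made up for illustration:

```python
import math

def observed_order(err_coarse: float, err_fine: float, ratio: float) -> float:
    """Observed order of accuracy from errors on two grids, where the
    fine grid's spacing is the coarse spacing divided by `ratio`."""
    return math.log(err_coarse / err_fine) / math.log(ratio)

# A correctly implemented second-order scheme should show p ~ 2:
# halving h (ratio=2) should cut the error by ~4x.
p = observed_order(err_coarse=4.0e-3, err_fine=1.0e-3, ratio=2.0)
print(round(p, 2))  # → 2.0
```

If the observed p falls well short of the scheme's formal order, that's a strong hint the implementation (or the boundary treatment) is wrong, which is exactly the kind of bug that validation-only comparisons against calibrated experiments can hide.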

      • gpm 43 minutes ago
        > have a curated list of every kind of test not to write

        I've seen a lot of people interact with LLMs like this and I'm skeptical.

        It's not how you'd "teach" a human (effectively). Teaching (humans) with positive examples is generally much more effective than with negative examples. You'd show them examples of good tests to write, discuss the properties you want, etc...

        I try to interact with LLMs the same way. I certainly wouldn't say I've solved "how to interact with LLMs" but it seems to at least mostly work - though I haven't done any (pseudo-)scientific comparison testing or anything.

        I'm curious if anyone else has opinions on what the best approach is here? Especially if backed up by actual data.

        • jerf 0 minutes ago
          It's going to be difficult for anyone to have any more "data" than you already do. It's early days for all of us. It's not like there's anyone with 20 years of 2026 AI coding assistant experience.

          However we can say based on the architecture of the LLMs and how they work that if you want them to not do something, you really don't want to mention the thing you don't want them to do at all. Eventually the negation gets smeared away and the thing you don't want them to do becomes something they consider. You want to stay as positive as possible and flood them with what you do want them to do, so they're too busy doing that to even consider what you didn't want them to do. You just plain don't want the thing you don't want in their vector space at all, not even with adjectives hanging on them.

  • suzzer99 2 hours ago
    > Generally, though, most of us need to think about using more abstraction rather than less.

    Maybe this was true when Programming Perl was written, but I see the opposite much more often now. I'm a big fan of WET - Write Everything Twice (stolen from comments here), then the third time think about maybe creating a new abstraction.

    • badlucklottery 2 hours ago
      >WET - Write Everything Twice

      I've always heard this as the "Rule of three": https://en.wikipedia.org/wiki/Rule_of_three_(computer_progra...

    • dasil003 1 hour ago
      Totally agree with this, the beauty of software is the right abstractions have untold impact, spanning many orders of magnitude. I'm talking about the major innovations, things like operating systems, RDBMS, cloud orchestration. But the majority of code in the world is not like that, it's just simple business logic that represents ideas and processes run by humans for human purposes which resist abstraction.

      That doesn't stop people from trying, though; platform creation is rife within big tech companies as a technical form of empire building and career-driven development. My rule of thumb in tech reviews is that you can't have a platform until you have three proven use cases and have shown that coupling them together is not a net negative, given the autonomy constraint a shared system imposes.

    • layer8 1 hour ago
      More than twice is a rather low bar, I don’t think that it conflicts with the quote from Programming Perl.
    • raincole 43 minutes ago
      I agree. It's crazy how many layers of abstraction have been created since 1991 (when Programming Perl was published.)
    • HarHarVeryFunny 58 minutes ago
      Writing twice makes sense if time permits, or the opportunity presents itself. The first time may be somewhat exploratory (maybe a throw-away prototype); the second time you better understand the problem and can do a better job.

      A third time, with a new abstraction, is where you need to be careful. Fred Brooks ("Mythical Man Month") refers to it as the "second-system effect" where the confidence of having done something once (for real, not just prototype) may lead to an over-engineered and unnecessarily complex "version 2" as you are tempted to "make it better" by adding layers of abstractions and bells and whistles.

      • wcarss 50 minutes ago
        I agree with what you're saying about writing something twice or even three times to really understand it but I think you might have misunderstood the WET idea: as I understand it, it's meant in opposition to DRY, in the sense of "allow a second copy of the same code", and then when you need a third copy, start to consider introducing an abstraction, rather than religiously avoiding repeated code.
    • nixpulvis 2 hours ago
      I've been advocating for writing everything twice since college.
    • jimbokun 55 minutes ago
      That will still result in more abstraction than the average programmer.
  • johnfn 2 hours ago
    As dumb as it is to loudly proclaim you wrote 200k loc last week with an LLM, I don’t think it’s much better to look at the code someone else wrote with an LLM and go “hah! Look at how stupid it is!” You’re making exactly the same error as the other guy, just in the opposite direction: you’re judging the profession of software engineering based on code output rather than value generation.

    Now, did Garry Tan actually produce anything of value that week? I dunno, you’ll have to ask him.

    • fao_ 2 hours ago
      Yeah! It's not like code quality matters in terms of negative value or lives lost, right?!

      https://en.wikipedia.org/wiki/Horizon_IT_scandal

      Furthermore,

      > As for the artifact that Tan was building with such frenetic energy, I was broadly ignoring it. Polish software engineer Gregorein, however, took it apart, and the results are at once predictable, hilarious and instructive: A single load of Tan’s "newsletter-blog-thingy" included multiple test harnesses (!), the Hello World Rails app (?!), a stowaway text editor, and then eight different variants of the same logo — one of which with zero bytes.

      Do you think any of the... /things/ bundled in this software increased the surface area that attacks could be leveraged against?

      • SvenL 1 hour ago
        I also struggle with this all the time: the balance between bringing value/joy and the level of craft. Most human-written stuff might look really ugly or be written in a weird way, but as long as it's useful, it's OK.

        What I don’t like here is the bragging about the LoC. He’s not bragging about the value it could provide. Yes, people also write shitty code, but they don’t brag about it; most of the time they’re even ashamed of it.

      • flir 48 minutes ago
        > a stowaway text editor

        ?!

        Was it hiding in one of the lifeboats?

      • 8note 2 hours ago
        > included multiple test harnesses (!)

        I've seen plenty of real code written by real people with multiple test harnesses and multiple mocking libraries.

        It's still kinda irrelevant to whether the code does anything useful; only a descriptor of the funding model.

        • flir 37 minutes ago
          If I'm reading this correctly ("a single homepage load of http://garryslist.org downloads 6.42 MB across 169 requests"), the test harnesses were being downloaded by end users. They weren't being installed as devDependencies.
      • lotsofpulp 1 hour ago
        The Horizon IT scandal was not caused by poor code quality, the scandal was the corrupt employees of the UK government/Post Office. Poor quality code might have caused the error, but the failure to investigate the errors and sweep them under the rug was made by humans.
        • fao_ 48 minutes ago
          > Poor quality code might have caused the error, but the failure to investigate the errors and sweep them under the rug was made by humans.

          That's not quite correct.

          The root set of errors were made by the accounting software. The branch sets of errors were made by humans taking Horizon IT's word for it that there was no fault in the code, and instead blaming the workers for the differences in the balance sheets.

          If there were no errors in the accounting software (i.e. it had been properly designed and tested), then none of that would have happened.

          Nobody blames THERAC-25 on the human operator.

          • rcxdude 12 minutes ago
            It was worse than that. Higher ups in the post office knew the system was buggy and still doubled down on it. Yes, if the accounting software wasn't terrible the whole issue would not have happened, but there were so, so, many chances for the post office to do the right thing afterwards that it's not at all fair to blame the results on the poor quality software, which very notably did not prosecute thousands of people for fraud while telling each of them they were the only ones being flagged by the system.

            (THERAC-25 was a little more towards 'just bad software', but there were still systemic failures there as well).

    • sdevonoes 2 hours ago
      > Now, did Garry Tan actually produce anything of value that week? I dunno, you’ll have to ask him.

      Let’s not be naive. Garry is not a nobody. He absolutely doesn’t care about how many lines of code are produced or deleted. He made that post as advertisement: he’s advertising AI because he’s the ceo of YC which profitability depends on AI.

      He’s just shipping ads.

      • Terr_ 1 hour ago
        "Follow the money" was always relevant, but especially when it comes to any kind of LLM news or investment-du-jour.

        The cautionary/pessimist folks at least don't make money by taking the stance.

        • slyall 1 hour ago
          A few do.

          At the extreme end you'll get invited to conferences, but further down you could have other products you're pushing, even non-AI-related ones that take advantage of your "smart person" public persona.

    • tmoertel 2 hours ago
      > You’re making exactly the same error as the other guy, just in the opposite direction: you’re judging the profession of software engineering based on code output rather than value generation.

      But the true metric isn't either one, it's value created net of costs. And those costs include the cost to create the software, the cost to understand and maintain it, the cost of securing it and deploying it and running it, and consequential costs, such as the cost of exploited security holes and the cost of unexpected legal liabilities, say from accidental copyright or patent infringement or from accidental violation of laws such as the Digital Markets Act and Digital Services Act. The use of AI dramatically decreases some of these costs and dramatically increases other costs (in expectation). But the AI hypesters only shine the spotlight on the decreased costs.

    • alemwjsl 2 hours ago
    It isn't worth the time. I am not going to read the 200k LOC to prove it was a bad idea to generate that much code in a short time and ship it to production; it is on the vibe coder to prove it was a good one. And if it is just tweets being exchanged, and someone is boasting about LOC and aiming for more LOC/second? Yep, I'll judge 'em. It is stupid.
    • ObscureScience 2 hours ago
      "Value generation" is a term I would be somewhat wary of.

      To me, in this context, it's similar to drive economic growth on fossil fuel.

      Whether in the end it can result in a net benefit (the value is larger than the cost of interacting with it and the cost to sort out the mess later) is likely impossible to say, but I don't think it can simply be judged by short sighted value.

    • II2II 2 hours ago
    Given the framing of the article, I can understand where the opposite-direction comment is coming from. The author also gives mixed signals, by simultaneously suggesting that the "laziness" of the programmer and of the code are virtues. Yet I don't think they are ignoring value generation. Rather, I think they are suggesting that the value is in the quality of the code instead of the problem being solved. This seems to be an attitude held by many developers who are interested in the pursuit of programming rather than the end product.
    • roncesvalles 1 hour ago
      The main value he generated from that exercise was the screenshot. It's a kind of credentialism.
  • njarboe 2 hours ago
    German General Kurt von Hammerstein-Equord (a high-ranking army officer in the Reichswehr/Wehrmacht era):

    “I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics are combined.

    Some are clever and diligent — their place is the General Staff.

    The next lot are stupid and lazy — they make up 90% of every army and are suited to routine duties.

    Anyone who is both clever and lazy is qualified for the highest leadership posts, because he possesses the intellectual clarity and the composure necessary for difficult decisions.

    One must beware of anyone who is both stupid and diligent — he must not be entrusted with any responsibility because he will always cause only mischief.”

    • quantummagic 2 hours ago
      Where my fellow ninety-percenters at?
      • dijit 1 hour ago
        I think we put too much negative emphasis on people who aren’t as gifted intellectually.

        In reality, the world works because of human automatons, honest people doing honest work; living their lives in a hopefully comforting, complete and wholesome way, quietly contributing their piece to society.

        There is no shame in this, yet we act as though there is.

        • xboxnolifes 1 hour ago
          This is what pains me with how many people respond negatively toward the idea of everyone being able to earn an honest living and raise a family. Too often the idea of "deserving it" comes into it as if doing your small part to contribute to society is not enough.
        • analog31 48 minutes ago
          I'm not blaming you here, but I think "automatons" may be inaccurate. A lot of the jobs that seem menial would be utterly bollixed if done by an automaton. The people continually handle the edge cases and tiny discrepancies between formal procedures and how things actually work. Consider the many stories of people who hit AI bots when they try to get vendor support for products: "Please let me talk to a real person."

          Many of those people, probably including most bureaucrats, are working on systems that have already been automated to the fullest extent possible. This is one of the reasons why bureaucracies seem chaotic and inefficient -- the stuff that works is happening automatically and is invisible. You only see the exceptions.

          The automation can be improved, but it's a laborious process and fraught with the risks associated with the software crisis. You never know when a project is going to fall into the abyss and never emerge, and the best models of project failure are stochastic.

          • mauvehaus 20 minutes ago
            Anyone doubting this need only spend 15 minutes watching people using the self-checkout lines at the grocery store to see how good a good checkout person is...
        • Jtarii 1 hour ago
          The movie Perfect Days captures this perfectly.
        • ChosenEnd 1 hour ago
          Human automatons? Why would you have mercy for automatons? Just call them cattle, we might feel more compassion towards them if we don't think of them as machinelike.
          • lovich 48 minutes ago
            I don’t know why you’re being downvoted. Using that sort of terminology already shows you don’t care about them more than the sort of energy someone has saying they would never consider keying _their_ car.

            People don’t need to be exceptional to have intrinsic value.

      • wiseowise 1 hour ago
        I’m here man. Just want to make money and support my family. Couldn’t care less what some German general thinks about me. Even less care about online clowns trying to put people in buckets.
  • arthurjj 2 hours ago
    LLMs not being lazy enough definitely feels true. But it's unclear to me whether it's a permanent issue, one that will be fixed in the next model upgrade, or just one your agent framework/CICD pipeline takes care of.

    e.g. Right now, when using agents, after I'm "done" with the feature and I commit, I usually prompt "Check for any bugs or refactorings we should do". I could see a CICD step that says "Look at the last N commits and check if the code in them could be simplified or refactored to have a better abstraction".
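A rough sketch of what such a CICD step could look like. The prompt wording, the HEAD~N..HEAD range, and the function name are all my assumptions, and actually sending the prompt to a model is left out, since that part depends on your agent framework:

```python
import subprocess

def build_review_prompt(n_commits: int = 5) -> str:
    """Collect the diff of the last N commits and wrap it in a
    refactoring-review prompt. Run from inside a git work tree."""
    diff = subprocess.run(
        ["git", "diff", f"HEAD~{n_commits}", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return (
        "Look at the following diff from the "
        f"last {n_commits} commits and check whether the code could be "
        "simplified or refactored to a better abstraction:\n\n" + diff
    )
```

In a pipeline you'd feed the returned string to whatever LLM tool you use, and gate its proposed changes behind human review (or an acceptance rule like the one discussed in the replies below).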

    • layer8 1 hour ago
      It’s difficult to define a termination criterion for that. When you ask LLMs to find any X, they usually find something they claim qualifies as X.
      • arthurjj 1 hour ago
        Agreed. If I'm looking at what it proposes then about 1/2 the time I don't make the changes. If this were fully automated you would need an addendum like "Only make the change if it saves over 100 lines of code or removes 3 duplicate pieces of logic".

        There are other scenarios you would want to check for but you get the idea.

    • JeremyNT 1 hour ago
      I agree, it's not a fundamental characteristic but a limitation of how the tool is being used.

      If you just tell these things to add, they'll absolutely do that indiscriminately. You end up with these huge piles of slop.

      But if I tell an LLM backed harness to reduce LOC and DRY during the review phase, it will do that too.

      I think you're more likely to get the huge piles if you delegate a large task and don't review it (either yourself or with an agent).

  • pythontongue 15 minutes ago
    It's a similar issue to social media making communication more effortless, and thus encouraging higher quantity over quality.
  • pityJuke 2 hours ago
    Man, I cannot imagine how nice it must be to work with leadership like this, who just gets it.
  • jimbokun 50 minutes ago
    Time to teach the LLMs and the vibe coders one of the timeless lessons of software development:

    https://www.folklore.org/Negative_2000_Lines_Of_Code.html

  • xhrpost 1 hour ago
    I've had this exact sentiment in the past couple of months, after seeing a few PRs that were definitely the wrong solution to a problem. One was implementing its own parsing functions for a problem where well-established solutions like JSON likely existed. Any non-LLM programmer could have thought this up, but would then immediately decide to look elsewhere; their human emotions would have kicked in and said "that's way too much (likely redundant) work, there must be a better way". But the LLM has no emotion; it isn't lazy, and that can be a problem, because it makes it a lot easier to do the wrong thing.
    • nulltrace 54 minutes ago
      It also doesn't bother checking what's already in your project. Grep around a bit and you'll find three `formatTimestamp` functions all doing almost the same thing.
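That grep can be mechanized a little. A sketch for a Python codebase (the function, and the use of duplicate definition names as a proxy for copy-paste drift, are my own illustration):

```python
import ast
from collections import defaultdict
from pathlib import Path

def duplicate_function_names(root: str) -> dict[str, list[str]]:
    """Report function names defined in more than one file under
    `root` -- a cheap proxy for the 'three formatTimestamp
    functions' problem."""
    seen = defaultdict(list)
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                seen[node.name].append(str(path))
    return {name: files for name, files in seen.items() if len(files) > 1}
```

It will flag legitimate same-named functions in different modules too, so treat the output as a list of things to eyeball, not as findings.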
  • singron 2 hours ago
    I have noticed LLMs have a propensity to create full single page web applications instead of simpler programs that just print results to the terminal.

    I've also struggled with getting LLMs to keep spec.md files succinct. They seem incapable of simplifying documents while doing another task (e.g. "update this doc with xyz and simplify the surrounding content") and really need to be specifically tasked with simplifying/summarizing. If you want something human readable, you probably just need to write it yourself. Editing LLM output is so painful, and writing and understanding something yourself also helps keep you in the loop.

  • spprashant 1 hour ago
    At this point, I almost feel bad that people are piling on Garry Tan. Almost.
  • glitchc 48 minutes ago
    Hard disagree with the initial assumption: Abstractions do not make a system simpler.

    Note: I would have added usually but I really do mean always.

    • love2read 33 minutes ago
      the thing about abstractions is that nothing implies that they aren’t leaky abstractions, which may be worse than no abstraction for future bug hunters
  • mplappert 2 hours ago
    I very much agree; I think laziness / friction is basically a critically important regularizer for what to build and for what to not build. LLMs remove that friction and it requires more discipline now. (Wrote some of this up a while ago here: https://matthiasplappert.com/blog/2026/laziness-in-the-age-o...)
  • progbits 2 hours ago
    Great article, I've been saying something similar (much less eloquently) at work for months and will reference this one next time it comes up.

    Quite often I see inexperienced engineers trying to ship the dumbest stuff. Back before LLMs, these would be projects that would take them days or weeks to research, write, and test, and somewhere along the way they could come to the realization "hold on, this is dumb or not worth doing". Now they just send a 10k-line PR before lunch and pat themselves on the back.

  • warwickmcintosh 47 minutes ago
    laziness makes you understand the problem before writing anything. an LLM will happily generate 500 lines for something that needed 20 because it never has to maintain any of it.
  • flumpcakes 2 hours ago
    The more people boast about AI while delivering absolute garbage like in the example here, the more I feel happier toiling around in Nginx configurations and sysadmin busy work. Why worry about AI when it's the same old idiots using it as a crutch, like any new fad.
  • jwpapi 1 hour ago
    I‘m so happy about this article. I was forming a thought in my head the last couple of days, which is how to describe what it is that makes AI code practically unusable in good systems.

    And one of the reasons is the one described in this article; the other is that you skip training your mental model when you don't grind through these laziness patterns. If you are not in the code, grinding against your codebase, you don't see the fundamental issues that block the next level, nor do you have the itch to name and abstract them properly so you won't have to worry about them in the future, when you or somebody else has to extend the code.

    Knowing your shit is so powerful.

    I believe now that my competitive advantage is grinding code, whilst others are accumulating slop.

  • fragmede 1 hour ago
    Since we all, stupidly, are leaning into LoC as a metric, because we can't handle subjectivity, at the very least we could just use orders of magnitude for LoC. Was it a 10/100/1,000/10,000 LoC hour/day/week/month? Score it 1, 2, 3, 4 or 5. DTrace's 60 kLoC would then be a 5, the Linux kernel is an 8 (40M), Firefox is also an 8, and Notepad++ is a 6.
  • gnerd00 3 hours ago
    oh this hits all the right notes for me! I am just the demographic that tried to perl my way into the earliest web server builds, and read those exact words carefully while looking at the very mixed quality, cryptic ascii line noise that is everyday perl. And as someone who had built multi-thousand line C++ systems already, the "virtues" by Larry Wall seemed spot on! and now to combine the hindsight with current LLM snotty Lord Fauntleroy action coming from San Francisco.. perfect!
  • jauntywundrkind 1 hour ago
    Abstractions and strong basis as a freedom to think freely at high levels.

    The slop drowning and impinging our ability to do good hammock driven development.

    Love it. Thanks Bryan.

    It's invaluable framing, and well stated. There's a pretty steady background drumbeat of "do we still need frameworks/libraries" that shows up now, and how to talk to that is always hard. https://news.ycombinator.com/item?id=47711760

    To me, the separation of concerns & a strong conceptual basis to work from seem like such valuable clarity. But these are also anchor points that can limit us, and I hope we see faster, stronger panning for good reusable architectures & platforms to hang our apps and systems upon. I hope we try a little harder than we have been, and that there's more experimentation, cause it sure felt like the bandwagon effect was keeping us in a couple of local areas. I do think islands of stability to work from make all the sense, and are almost always better than the drift and accumulation of a big-ball-of-mud architecture.

    Interesting times ahead. Amid so much illegible, miring slop, hopefully some complementary new discovery too.

  • simianwords 2 hours ago
    This is a person clearly grieving that his hard earned knowledge in his field is now not that valuable.

    It is * exactly * the same as a person who spent years perfecting hand written HTML, just to face the wrath of React.

    • vsgherzi 1 hour ago
      Disregarding the fact that Bryan runs Oxide, a company with multiple investors and customers (I'd say that proves valuable knowledge), the crazier fact is that people think HTML is useless knowledge.

      React USES HTML. Understanding HTML is core to understanding React. React does not in any way devalue HTML, any more than driving an automatic devalues knowing how to drive a manual.

      • simianwords 1 hour ago
        Go to Facebook.com, right-click, view source, and tell me HTML is not being devalued. No person who wants to write aesthetic HTML would write that stuff.
        • vsgherzi 1 hour ago
          Do the same to Google.com

          When it matters, it matters. Even in Facebook's case, they made React fit their use case. You think the React devs didn't understand HTML? Do you think quality frontends can be written without any understanding of HTML?

          Like the article says, we've moved an abstraction up. That does not make the HTML knowledge useless.

    • rakel_rakel 1 hour ago
      https://xkcd.com/1053/

      I recommend you go look at some of his talks on Youtube, his best five talks are probably all in my all time top-ten list!

    • g-b-r 1 hour ago
      Your account name is so fitting

      Now look up who he actually is.

    • lapcat 2 hours ago
      > This is a person clearly grieving that his hard earned knowledge in his field is now not that valuable.

      He's co-founder and CTO of his own company, so I think he's doing fine in his field.

      • simianwords 2 hours ago
        It doesn't change the fact that much of what (I think) he prides himself on is getting commoditized.
        • wiseowise 1 hour ago
          LLMs have dissolved your brain if you think they commoditize what a guy like this[0] prides himself on.

          https://bcantrill.dtrace.org/about/

        • 0xBA5ED 40 minutes ago
          I would seriously consider if you've developed an imaginary caricature in your mind that you apply to people you don't know. Further, I would consider if any living person actually lives up to it.
        • pxc 59 minutes ago
          What he prides himself in (in this context) is craft, which LLM use probably can enable, but definitely isn't commoditized by the kind of vibe coding that Garry Tan is doing.