With all the emphasis on the speed of modern AI tools, we often seem to forget that velocity is a vector quantity. Increased speed only gets us where we want to be sooner if we are also heading in the right direction. If we’re far enough off course, increasing speed becomes counterproductive and it ends up taking longer to get where we want to be.
I’ve been noticing that this simple reality explains almost all of both the good and the bad that I hear about LLM-based coding tools. Using AI for research or to spin up a quick demo or prototype is using it to help plot a course. A lot of the multi-stage agentic workflows also come down to creating guard rails before doing the main implementation so the AI can’t get too far off track. Most of the success stories I hear seem to be in these areas so far. Meanwhile, probably the most common criticism I see is that an AI that is simply given a prompt to implement some new feature or bug fix for an existing system often misunderstands or makes bad assumptions and ends up repeatedly running into dead ends. It moves fast but without knowing which direction to move in.
> Increased speed only gets us where we want to be sooner if we are also heading in the right direction.
This is a real problem when the "direction" == "good feedback" from a customer standpoint.
Before, we had a product person for every ~20 people generating code; now we're all product people and the machines are writing the code (not all of it, but enough of it that I will -1 a ~4000-line PR and ask someone to start over, instead of digging out of the hole in the same PR).
Feedback from real users spending time on the system takes a while to come back to the product team.
You need a PID-like smoothing curve over your feature changes.
Like you said, Speed isn't velocity.
Specifically, if you have a decent experiment framework to keep the rollout progressive across the customer base, going in the wrong direction isn't the huge penalty it used to be.
I liked the PostHog newsletter about the "Hidden dangers of shipping fast", but I can't find a good direct link to it.
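The "PID-like smoothing curve" idea above can be sketched as a toy proportional controller over a feature flag's rollout percentage. The function name, gains, and thresholds here are all invented for illustration and are not from any real experiment framework:

```python
def rollout_step(current_pct, error_rate, target_error=0.01,
                 kp=200.0, step_cap=10.0):
    """One control step for a progressive rollout (toy example).

    Expands exposure while the observed error rate stays below
    `target_error`, backs off when it overshoots; `step_cap`
    bounds how far any single step can move the dial."""
    error = target_error - error_rate            # positive => healthy, expand
    step = max(-step_cap, min(step_cap, kp * error))
    return max(0.0, min(100.0, current_pct + step))

print(rollout_step(5.0, 0.00))   # healthy: 5% -> 7%
print(rollout_step(50.0, 0.10))  # errors spiking: 50% -> 40% (capped step)
```

The point of the bounded step is exactly the one made above: going in the wrong direction costs you one capped slice of exposure rather than the whole customer base.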
Don't wait for feedback from "real users", become a user!
This Taylorist idea (now reincarnated as "design thinking") that you can observe someone doing a job and then decide better than them what they need is ridiculous and should die.
Good products are built by the people who use the thing themselves. Doesn't mean though that choosing good features (product design and engineering) isn't a skill in itself.
Been there: we got pushback from users and had to back off on releases. Users hunted the product owner with pitchforks and torches.
As a dev team we were able to crank the speed up even more, and silly product people thought they were doing something good by demanding even more from us. But that was one of the instances where users were helpful :).
People use dozens of apps every day to do their work. Just think about how you are going to make time to give feedback to each of them.
> Just think about how you are going to make time to give feedback to each of them.
That's pretty much solved by the size of the audiences. You won't give feedback on 12 apps, but 11 other people will probably do so on 11 different apps.
Of course, the issue with my domain is that there's plenty of feedback, and product owners just dismiss it. Burn down your entire portfolio to get that boosted shareholder value for the next earnings report.
And how do you solve that when you are one of those 11 apps and no one wants to talk to you because they have their own work to do? When you don't have the power to demand that kind of thing.
Well, by asking repeatedly, of course, but then you just piss people off.
Have you ever given feedback to Atlassian, Google, Microsoft?
>Increased speed only gets us where we want to be sooner if we are also heading in the right direction.
I suppose there is an argument that if you are building the wrong thing, build it fast so that you can find out more quickly that you built the wrong thing, allowing you to iterate more quickly.
I think “iterating more quickly” is good for the company doing the building. But if you’re the customer, having a new piece of shit foisted on you twice a day so that some garbage PM can “build user empathy” gets old really fast.
Before AI, I worked at a B2B open source startup, and our users were perpetually annoyed by how often we asked them to upgrade and were never on the latest version.
> Before AI, I worked at a B2B open source startup, and our users were perpetually annoyed by how often we asked them to upgrade and were never on the latest version.
And frankly, they had a point.
Especially in a B2B context, stability is massively underrated by product teams.
There is very little I hate more than starting my work week on a Monday morning and finding out someone changed the tools I use for daily business again.
Even if it's objectively minor, like Apple's last pivot to the Windows Vista design... it just annoys me.
But I'm not the person paying the bills for the tools I use at work, and the person who is paying almost never actually uses the tools themselves; hence shiny redesigns and pointless features galore.
Yes, but only if you have an ax to sharpen. With a lot of things it takes trial and error to make progress. You can take this pretty high up, too: sometimes it takes building multiple products or companies to get it right.
> With a lot of things it takes trial and error to make progress
Way too often that is used as an excuse for various forms of laziness; to not think about the things you can already know. And that lack of thinking repeats in an endless cycle when, after your trial and error, you don't use what you learned because "let's look forward not backward", "let's fail fast and often" and similar platitudes.
Catchy slogans and heartfelt desires are great but you gotta put the brains in it too.
Without commenting about the frequency of negligence myself, I suspect at least that you and GP are in agreement.
I doubt GP is suggesting ‘go ahead and be negligent to feedback and guardrails that let you course correct early.’
Plugging the Cynefin framework as a useful technique for practitioners here. It doesn’t have to be hard to choose whether or not rigorous planning is appropriate for the task at hand, versus probe-test-backtrack with tight iteration loops.
> I suppose there is an argument that if you are building the wrong thing, build it fast so that you can find out more quickly that you built the wrong thing,
A lot of people are so enamored by speed, they are not even taking the time to carefully consider the full picture of what they are building. Take the HN frontpage story on OpenCode: IIRC, a maintainer admitted they keep adding many shallow features that are brittle.
Speed cannot replace product vision and discipline.
Tech very quickly shifted to an industry of marketers instead of hackers. And with salesmen, you want to advertise as many features as possible, not talk about how good one crucial feature is.
This won't really stop until investors start judging on quality and not quantity. But a lot of them think in financial terms, and the thought of removing their biggest cost center is too tempting not to go all in on. So they want to hear "we made this super fast with 2-3 people!" instead of "we optimized and scaled this up to handle 400% more workload with double the performance".
The outcome of that approach depends entirely on the broader process. Imagine golf but you refuse to swing with anything less than maximum strength to avoid wasting time.
Discovery is great and all but if what you discover is that you didn't aim well to begin with that's not all that useful.
Exactly this. Velocity is a vector. It has magnitude (aka speed) and direction.
Our industry has chased magnitude over all else for so long. Now we can put nitro in everyone's car and get where we wish to go very fast. Suddenly bad direction-setting produces consequences immediately, where there used to be friction and natural time to steer.
My greatest hope is that a ton of bad leaders and middle managers end up finally getting exposed due to the advent of AI. (Will I be disappointed? Almost certainly yes.)
> If we’re far enough off course, increasing speed becomes counterproductive and it ends up taking longer to get where we want to be.
This reminded me of the idea that civilization is already a misaligned superintelligence, and that technology (incl. AI) just moves it faster in the wrong direction.
That's basically the problem of supermorality. If you're an actually benevolent AI, do you do what civilization tells you? Or do you do what is good? What happens if you disagree?
I think it depends on what is good, and who it's good for.
Thus far, AI has been good... For venture capitalists. Jury's out if it's good for humanity and civilization at large. There have been a lot of benevolent usages of AI thus far, but also a lot of bad.
As for those who disagree with the "benevolent AI," I think they just get sent to the gallows (either metaphorically or literally)
I've been working on a side project for ~10 years (very intermittently) that involves a tricky combination of mathematics, classical AI algorithms, and programming language design, and I've gone though this very slow but rewarding journey to work out how all of the pieces should fit together properly.
In the last year or so I've been able to prototype it and accelerate the development quite significantly using Claude and pals, and now it is very close to a finished product. On one hand there's no doubt in my mind that LLM tools can make this sort of thing faster and let you churn through ideas until you find the right ones, but on the other hand, if I hadn't had that slow burn of mostly just thinking about it conceptually for 10 years, I would have ended up vibe coding a much worse product.
10 years of thinking before shipping is actually the move. The AI just becomes a power tool — useless if you don't know what you're building, unstoppable if you do
The biggest problem is the fact they DON'T clarify their stupid assumptions.
The number of times I've seen them get the wrong end of the stick in their COT is ridiculous.
Even when I tell them to only implement after my explicit approval they ignore this after 2 or 3 followups and then it's back to them going down blind alleys.
I've definitely gotten it into contexts where it will never stop going into the wrong direction, even when I tell it to forget everything it did before, and told it a correct path forward. Usually restarting the entire session fixes it, but not always.
Ah, metaphors. Abstract concepts are not moving objects. You don't actually need to "turn it around" or "sail past it". You can break the laws of physics (because they don't apply). You can teleport around.
Speed actually just wins, because we are usually constrained by time.
1) a lot of shallow, orthogonal directions are better than one deep, careful approach
2) There's no social aspect to churning out a bunch of slop that will affect the perception of potential "right things" later. My domain can be particularly grudgeful in this regard.
1) If there is uncertainty, that seems to be correct, yes. (If there is no uncertainty, then the question and the essay become moot: You already know what to do. Things take as long as they must. Worst case, you are wrong.)
2) I read that part twice and could not figure out what it is you are trying to say.
I find myself sympathetic to the author's PoV, but I am incorporating LLMs into my workflow, with a resultant jaw-dropping (to me) increase in velocity.
But I am not just dispatching to agents. I work interactively with a chat interface, and sometimes, I will just bin a whole hour's worth of back-and-forth, because we're not getting anywhere (in fact, I did exactly that, about 30 minutes ago).
But that hour is peanuts, compared to the ten hours that I would have spent, trying to figure it out on my own. With an LLM (and git), I can "run something up the flagpole, and see who salutes." I can afford to experiment with very large code bases, and toss out a whole bunch of stuff, if need be.
That said, I know damn well, that quite a few folks here, would sneer at my methodology, as "awkward, stodgy, and slow." Nevertheless, I am pretty chuffed with the results. Yeah, it's slower than some folks would do it, but the Quality is really high, and I'm happy with the results.
My favorite thing to do, is (for example) toss all 5 of my SDK files into the LLM, paste in the JSON server interaction, describe the bug, and ask it to help me figure it out.
Nine times out of ten, it finds the bug quickly. The real bug. I am not always happy with the proposed solutions, but finding the root cause is always the time-consuming part.
One more thing I try once in a while is giving the same prompt to ChatGPT/Gemini/Grok, then taking the 2-out-of-3 ideas forward.
Every leading AI seems to have some blind spot, like some kind of intrinsic character. One will completely overlook some particular bug while finding other excellent bugs and edge cases.
Getting the code through all 3 before committing has shown excellent results for me.
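The 2-out-of-3 filter described above can be sketched as a simple quorum vote over whatever findings each model reports. The function and the data shapes are made up for illustration, not part of any model's API:

```python
from collections import Counter

def consensus(findings_by_model, quorum=2):
    """Keep only findings reported by at least `quorum` models.
    `findings_by_model` is one list of findings per model."""
    counts = Counter(f for findings in findings_by_model
                     for f in set(findings))  # dedupe within each model
    return {finding for finding, votes in counts.items() if votes >= quorum}

reports = [["off-by-one"], ["off-by-one", "race condition"], ["race condition"]]
print(sorted(consensus(reports)))  # ['off-by-one', 'race condition']
```

The quorum directly addresses the blind-spot observation: a bug one model misses still survives the vote as long as the other two catch it.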
> the Quality is really high, and I'm happy with the results.
>toss all 5 of my SDK files into the LLM, paste in the JSON server interaction, describe the bug, and ask it to help me figure it out.
I wonder if this "quality" code wouldn't have that many bugs to dive into if it was more carefully considered and produced up front?
This harkens back to a study in 2024 where senior devs were actually less productive with LLMs but felt more productive, even after being told they were less productive.
> where senior devs were actually less productive with LLMs but felt more productive
I'm certain this is true for me. The only thing LLMs do get me is the ability to make forward progress on tasks while I'm in meetings, which is a net positive at least.
Awesome article. I feel a lot of people have also forgotten that good projects take iteration, not 100 new features.
To get a few features to an excellent state requires multiple iterations at multiple stages.
1) The developer who does a task validates that their thinking was correct; they see how their changes impact the system. Is it scalable? Does it need to be scalable? While you are working and thinking on it you get more and more context which simply wasn't there at the beginning.
2) A feature done once (even after my perfect ClaudeCode plan) is not done forever; people will want to make it better/faster/smoother/etc. But instead of taking the time to analyze and perfect it we go on to the next feature, and if we have to iterate on the current one, we don't iterate, we redo...
I really like the article and think it is awesome, and I strongly believe AI for coding is here to stay, but I also believe that we still need a strong understanding of why we are building things and what they should look like.
I've been working on a clone of Sid Meier's Pirates! but with a princess theme (for my daughters).
I've been using AI to help me write it and I've come to a couple conclusions:
- AI can make working PoCs incredibly quickly
- It can even help me think of story lines, decision paths etc
- Given that, there are still a TON of decisions to be made, e.g. what artwork to use, what makes sense from a story perspective
- Playtesting alone + iterating still occurs at human speed b/c if humans are the intended audience, getting their opinions takes human time, not computer time
I've started using this example more and more as it highlights that, yes, AI can save huge amounts of time. However, as we learned from the Theory of Constraints, there is always another bottleneck somewhere that will slow things down.
I've tried a few game projects with coding agents - having never worked on a game before in my life - and the main thing I learned is that the hard part is designing it to be fun.
Coming up with a genuinely interesting gameplay loop with increasing difficulty levels and progressively revealed gameplay mechanics is a fascinating and extremely difficult challenge, no matter how much AI you throw at the problem.
And that's why human work, not its "expression" or some other legalese, should be protected by law.
If LLMs (or other "AI" or even AI tools) are able to exactly replicate the behavior of a program (game or otherwise) without access to its source code, that's technologically cool. However, that means it's possible to cheaply replicate immense amounts of human work in a way the law does not cover.
If you take a game and use LLMs to reimplement both its assets and code from scratch but players have the same movement, weapons do the same damage, have the same spread and projectile speed, and so on, then the "new" game is not really new, it's based on other people's work. And nobody should be allowed to profit from other people's work without their consent and without compensating them.
Obviously, work is hard to quantify but that doesn't mean we should give up.
1) Yes but in those cases what their authors are gaining is at best some public recognition, not money. And because the projects don't hide what they're based on, that recognition goes back to the original games and their authors. Now, if they were asking for donations, then yes, I think they should give a part of it to the original devs.
2) We can also look at it from a more utilitarian perspective. When something starts as closed source, people who made it got paid already and the owners (who often did not perform any useful work except putting in money) keep making money from then on. Reimplementing it as open source does not harm the original devs but allows more people to access it and it also often leads to a much more open and pro-social implementation without dark patterns. And the paid version often still has an advantage due to existing awareness, marketing and network effects.
OTOH when something starts as free/open under conditions such as anyone building on top of it has to release under the same conditions, then a company taking that work is violating explicitly stated wishes, is making money which doesn't reach the original devs and does not promote the original work. And it also has the aforementioned advantages. When the closed version eclipses the open one, the owners are free to add dark patterns and otherwise exploit their position further.
This way open work is a global social good, closed work is only good for those who own it.
---
I prefer argument 1 because it doesn't require the presence of exploitative power structures.
Either way, we should recognize there are multiple dimensions to compensation - here recognition and money. And work should be rewarded along both axes transitively.
I have a very similar experience. I vibecoded a foreign language practice app for myself. It works decently from a functional perspective and I don't see too many bugs. But the biggest productivity constraint I see is the time I need to spend using it in order to understand what is working and where the issues are.
"I was able to vibecode those 5 apps I always wanted but never had time to code myself... it is so different now because I don't have time to use them."
One of my favorite ideas from Nietzsche [1] is that civilizations take millennia to “digest” or integrate concepts. It seems a little obvious, maybe, until you look at the modern world and realize the baseline assumption is something like, “every problem is just a question of resources.”
An example being the common attitude that [advanced tech] is just a math problem to be solved, and not a process that needs to play itself out in the real world, interacting with it and learning, then integrating those lessons over time.
Another way to put this is: experience is undervalued, and knowledge is overvalued. Probably because experience isn’t fungible and therefore cannot be quantified as easily by market systems.
1. Probably not his original idea, and now that I think about it this is kind of more Hegelian. I’m not familiar enough with Hegel to reference him though.
I have no problem with people treating advanced tech like a math problem. I have a big goddamned problem with the tech world seeing things like creativity, expression, exploration, imagination, experience, companionship, empathy, sex, fun, beauty, inspiration, and all of that human-y sort of stuff as a goddamned math problem to be solved. It’s just so sad and most people resent it being shoved down their throats by tech companies abusing their societal leverage.
> I have a big goddamned problem with the tech world seeing things like creativity, expression, exploration, imagination, experience, companionship, empathy, sex, fun, beauty, inspiration, and all of that human-y sort of stuff as a goddamned math problem to be solved.
Two sides of the same coin in the tech industry: engineering and commerce.
There’s a reason investors call non-technical executives taking over engineer-founded companies adult supervision, and it’s not to make sure the engineers eat their vegetables. Developers love to imagine themselves as sort of semi-for-profit-pseudo-academics with their conferences and white papers and FOSS projects, but where the rubber hits the road, we know who tells who what to do. It’s not the 90s anymore.
No. Many great scientists worked for decades to discover atoms, and they were controversial at the time. Now you read about them in primary school books.
> everybody who is like me, fully onboarded into AI and agentic tools, seemingly has less and less time available because we fall into a trap where we’re immediately filling it with more things
You fill a jar with sand and there is no space for big rocks.
But if you fill the jar with big rocks, there is plenty of space for sand. Remove one of the rocks and the sand instantly fills that void.
I think that's kind of the point though: AI is the sand, but it's the rocks that hold all of the value; the further you get away from using AI the more real value you obtain. Like, a few of the rocks have gold deposits in them, and the sand is just infinitely copious but never holds anything valuable. And you've got a bunch of people running around saying, "Behold my mountains of sand!"
You fill the bottle with water, you put a fish in it, you remove half of the water, the bottle is still half full, but if you remove the fish, it will have less water than before.
You fill the bottle with half of the water, you put the fish in, you can fill in the other half. If you start with the first half, you will end up with more water.
You write a metaphor in a comment, you remove half of it, you add another one in the middle, you add the half of the first one, and… nobody understands anything.
Is it the ultimate result of LLM use? People internalising the idea that writing is about stringing words together like a Markov chain without realising they're not saying anything of substance?
In a more advanced civilisation, you would be put in the pillory for the townsfolk to throw rotten cabbage at you until the Lord fixed whatever made you say that.
The point of the metaphor is not to say "spending time is mechanically similar to putting things in a container". It is to look at spending time from a new angle, and see if it helps you understand it better. A wise person sees a metaphor as a launching point for thought, not as an expression of a metaphysical connection.
Yes, there are bad metaphors, and people who take metaphors too seriously. But the fact that you can conjure a bad metaphor with semantics somewhat similar to some other metaphor does not mean that the other metaphor is bad.
I'm so glad that style of interview was dying out right when I graduated. And I love puzzles. But I don't need wannabe IQ tests for a job that expects me to work in legacy code and coordinate with other engineers.
Hahah, I just have to reply and say I loved the original comment and was happy for the laugh. Obviously this is the answer to the riddle of
> Given a 3-liter container and a 5-liter container, both initially empty, and access to tap water, how can you measure exactly 4 liters of water without using any additional containers
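For completeness, the quoted riddle yields to a short breadth-first search over jug states. The solver below is a sketch of my own, not something from the thread:

```python
from collections import deque

def measure(target=4, caps=(3, 5)):
    """Shortest sequence of (small, large) fill states reaching `target`
    liters in either jug. Moves: fill a jug, empty a jug, or pour one
    into the other until it is full or the source is empty."""
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if target in state:
            path = []                      # walk parents back to the start
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        a, b = state
        ca, cb = caps
        ab = min(a, cb - b)                # amount poured small -> large
        ba = min(b, ca - a)                # amount poured large -> small
        for nxt in [(ca, b), (a, cb), (0, b), (a, 0),
                    (a - ab, b + ab), (a + ba, b - ba)]:
            if nxt not in parent:
                parent[nxt] = (a, b)
                queue.append(nxt)
    return None

print(measure())
# [(0, 0), (0, 5), (3, 2), (0, 2), (2, 0), (2, 5), (3, 4)]
```

In words: fill the 5, pour it into the 3 (leaving 2), empty the 3, move the 2 over, refill the 5, then top up the 3, leaving exactly 4 liters in the large container.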
I've offered and received some convoluted metaphors recently, love leaning hard into this one.
They're talking about Archimedes' principle, displacement of water. The fish makes the water bottle overflow, so be careful when you add the fish so that it doesn't. It's a counter analogy to the rocks one above.
They’re pointing out that if the jar was _filled_ with sand, then of course you can’t fit any rocks in because it’s full. It’s cute but misunderstands the original metaphor I think.
> We pay premiums for Swiss watches, Hermès bags and old properties precisely because of the time embedded in them
Lost me in paragraph three. We pay for those things because they're recognizable status symbols, not because they took a long time to make. It took my grandmother a long time to knit the sweater I'm wearing, but its market value is probably close to zero.
I would say that wearing a sweater knitted by one's grandmother is its own kind of status symbol. I'm more impressed by that (someone having a grandmother willing to invest that much effort in a gift for them) than someone spending $1000 on an item of clothing.
The fact that those items took a long time to make is part of what makes them status symbols though, because if you pay a lot of money for something that took no time to make at all (see most NFTs) you look like an idiot to a lot of people.
This sort of thing was done at a time when everybody did it, and now that it's gone out of fashion, nobody does it.
No kid ever said, "Did you see the sweater that Timmy's grandma knitted for him? That kid is so cool!"
Mostly because they all had grandma's sweaters as well.
I don't know what term you were looking for, but a handmade present for someone dear is about the furthest thing from a "status symbol" that I can think of:
- it can't be bought
- it can't be transferred without losing almost all value (ie: it's only valuable to you, or at most your family, eBay doesn't want it)
- it provides no improvement whatsoever in one's social standing
What are you referring to with the phrase "status symbol"?
I can't connect it at all to your listed points. An Olympic medal is about obvious a status symbol as I can imagine but it can't (meaningfully) be bought or transferred.
The status signified with a knit sweater is membership (and good standing!) in a caring family with elders not yet fully subsumed into their phones.
People, acquaintances and strangers alike, frequently comment on the knit socks I often wear, ask after who made them, and all of a sudden we're on "how's your mom" terms.
Status symbols signal different status in different contexts. Some contexts (mostly lower middle class and below) are impressed by Rolex watches because they are expensive and the struggle for money forms a collective experience.
The old rich don't give a shit about Rolex watches beyond noticing the newly rich using them to tell on themselves.
To be worth that much time is the status symbol of love. It's a rare thing money can't buy: somebody gifts part of their finite time on the planet to you, bundled into an artifact.
I like the sweater, and some people like you might recognize it as special, but it doesn't have the universal cachet of a Rolex or something. It's also a bit chunky and funny-looking (but I guess so are some Rolexes).
Maybe their point is that the brands themselves have a lot of time embedded in them. Generally, status symbols (whatever they are) aren't things that are recently established.
I feel you. I guess I succeeded in not getting lost, and kept reading by solving the conundrum for myself: it certainly should take time to grow the cows for the bags. Nonetheless, I'm glad I finished reading it; it was a good essay.
I agree there, but there are plenty of examples of time cost being baked into an item, regardless of status symbols.
The sweater is worth whatever a single person values it at or would pay for it. Said another way: would you sell it to me for $10? $50? $100? If you said no to all three, it's worth at least $100.
The point of the essay is good. You called out exactly my reaction; we value those things because of the marketing dollars that went into them. As a wealthy friend from Geneva said to me once, "Look around this dinner party - the Swiss here have either an Apple watch or nothing on their wrist." Swiss watches are an export good, and Hermès is a luxury brand. Both are of generally good quality. And much, much better marketing.
Not really. Some items naturally have value due to utility. Natural resources only lose their value if we somehow move on from all of their uses (as coal is, day by day).
Some value is indeed created via marketing, but many items have intrinsic, or at least emotional, value.
Yes, Veblen goods, and there are examples of cloning Hermès bags for example (still by hand) where they're much cheaper yet took the same amount of time to create.
It would be more accurate to say "we value these things highly". Most people don't give a damn about your sweater, but it's probably extremely valuable to you precisely because of the time your grandmother put into it.
I've been hearing similar things from a lot of different directions. The underlying point that "you cannot replace time" is one that is good to internalize early. A number of people I know "missed" their kids growing up because they were working hard to make lots of money. You can't go buy "time with my kids when they were growing up."
Agentic coding very much feels like a "video game" in the sense of you pull the lever and open the loot box and sometimes it's an epic +10 agility sword and sometimes it's just grey vendor trash. Whether or not it generates "good" or even "usable" code fades to the background as the thrill of "I just asked for a UI to orchestrate micro services and BLAMMO there it was!" moves to the fore.
Sounds familiar, for most of my life I have tried to remove all "friction" from life – applying that engineering mindset to make everything as efficient as possible. Only then I realized that life somehow is about that "friction".
All of our current systems are designed around the constraints of cost and human working speed. What if we remove that friction using software? The remaining friction will all be due to physical reality, and humanity's focus will shift to removing friction in the physical domain. Currently it takes some time for a support person to triage a bug report; what if that flow takes seconds?
Think of the analogy of the transaction speed of a money transfer vs. the actual delivery of goods. With AI we could make all digital tasks instantaneous, but the physical world will hum along at its own speed unless we speed it up with dark factories and whatnot.
Now that everyone's running faster than ever and trying to outrun the competition by slapping on more code than anyone else, you can only brace for the results.
I expect these tools will quickly let people ramp up the complexity and lines of code of any software project by several orders of magnitude.
Then your 100 kloc JS Electron app becomes a 10 Mloc JS Electron app running on a 500 Mloc browser runtime.
Repeat this across the stack for every software component and application and library. If you think things are bloated now just wait a few years and your notepad will be a 1m line behemoth with runtime performance of a glacier.
But how do you make the case for thoughtful less bloated software to people who just value writing less code themselves, even if the output produces more lines of code? Seems to me like people don’t care about LOC, they care about how much effort they have to spend writing the lines.
Speed is useful, when you have a good idea or a hypothesis you want to test. But if you are running in the wrong direction, speed is of very little value. With LLMs it might be even harder to stop and realize that you are creating the wrong thing, because you are not spending effort to create the wrong thing.
I'm seeing this cultural pattern where developers have started accepting LLM output with very little scrutiny. This ends up as code that works on the surface, but most of the time problems are not addressed at their source.
Creating these wrong things is simply cheaper with LLMs. Since developers now spend less time and effort to create the wrong thing, they don't feel the need to validate or reflect on it so much.
The risk is not the tool itself, but the over-reliance on it and forgoing feedback loops that have made teams stronger, e.g. debugging, testing, and reasoning why something works a particular way.
> But if you are running in the wrong direction, speed is of very little value.
I think of it differently. Speed is great because it means you can change direction very easily, and being wrong isn't as costly. As long as you're tracking where you're going, if you end up in the wrong place, but you got there quickly and noticed it, you can quickly move in a different direction to get to the right place.
Sometimes we take time mostly because it's expensive to be wrong. If being wrong doesn't cost anything, going fast and being wrong a lot may actually be better as it lets you explore lots of options. For this strategy to work, however, you need good judgment to recognize when you've reached a wrong position.
The point about all the extra efficiency just making us more stressed and busy resonates deeply with me. It also stood out to me in the recent episode of the No Priors podcast with Andrej Karpathy: that feeling of falling behind if you're not using all available compute.
It really is, but it's come at the cost of actually being useful. It has a vague 'about' modal and that's your lot, which is confusing since they're encouraging people to join. I'm just not sure anybody's going to know what they're joining.
I'm more concerned about the "falling behind" we're doing chewing up compute generating, and then running, LLM slop. That is a lot of energy used and heat generated for a less-than-optimal payoff.
The website of earendil definitely still takes time, as it is right now only a gray area with white on gray labels in the corners, not displaying anything in the content area. The labels don't work like links either. The background image doesn't load, until one clicks some subitem of the "about" label and then "closes" the content that is shown. The theme toggle (?) at the bottom right does nothing, website stays gray, no content shown.
This website does not resonate with the message I got earlier from the article. It does not give the impression of someone taking appropriate time to make it.
Yes, you cannot build years of community and trust in a weekend. But sometimes it's totally sufficient to plant a seed, give it a small amount of water, and leave it on its own to grow. Go ask my father, who has to deal with a huge maple tree that I planted 30 years ago and never cared for.
Open Source projects sometimes work like this. I created a .NET library for Firebase Messaging in a weekend a few years ago… and it grew on its own with PRs flowing in. So if your weekend project generates enough interest and continues to grow a community without you, what’s the bad thing here? I don’t get it.
Sometimes a tree dies and an Open Source project wasn’t able to make it.
That said, I’ve just finished rewriting four libraries to fix long-standing issues that I haven’t been able to fix for the past 10 years.
It's been great to use Gemini as a sparring partner to fix the API surface of these libraries, which had been problematic for a decade. I was able to validate and invalidate ideas so quickly.
Having once been one of the biggest LLM haters, I have to say that I'm immensely enjoying it right now.
You didn't so much "leave it on its own" as much as outsource the duty to nature. Turns out nature spent eras optimizing for tending to trees.
Can't really say the same for vibecoding. You still need to do a lot of work that's ultimately putting lipstick on a pig. Maybe someone talented can make it pretty, but it has a quality ceiling, and most won't get anywhere close to that; people will just see a pig with lipstick on it.
I've been building my project[0] for 14+ years, but I'm still looking forward to the day I won't have to worry about next month's rent. Customers are happy though. Last week I got messages from two customers that purchased more than 7 years ago, because they wanted to install/use the product again. It was fun to revisit our email conversations and to see how much the product has evolved since. What's funny is that the feeling is the same after 1, 5 or 15 years. There's always stuff to optimize and improve. Unless it's a standalone experience (e.g. a game or a movie), most software and tools do need constant updates and improvements to keep up-to-date with the current world.
What's funny is that I attempted many other projects over the years, and many rose and died quickly, yet the one that has lasted the longest is also the one most likely to keep lasting from here.
Went indie after a long run at a big company and this hits. The hardest thing wasn't the work, it was getting comfortable with the silence between shipping and seeing any signal. Still working on it tbh.
I work at FAANG, and leadership is successfully pushing for speed by establishing new productivity expectations. Everyone is rushing as much as they can, but the productivity gain doesn't really match the expectations, so people overwork to make up the difference. This works very well with internal competition and a quota system for performance ratings, with some extra fear due to the bad job market.
I feel this new world sucks. We have new technology that boosts the productivity of the individual engineer, and we could be doing MUCH better work, instead of just rushed slop to meet quotas.
I feel I'm just building my replacement, to bring the next level of profits to the c-suite. I just wish I wasn't burning out while doing so.
I’ve noticed this dynamic acutely working at YC startups the last 5ish years. Coding has become like a sweatshop.
I don’t think it’s exclusive to startups or tech either, it seems more like a downstream consequence of the fact that there’s no real innovation anymore. Capitalism demands constant growth, and when there are real technological improvements you can achieve that growth through higher productivity. If there are none, you have to achieve that growth through other means like forcing employees to work longer or cutting costs. The alpha is all coming from squeezing the labor force right now.
> it seems more like a downstream consequence of the fact that there’s no real innovation anymore
This doesn't sound right to me. We are currently getting smacked upside the head by an enormous technological innovation. I believe that, even within the framework of capitalism, this problem has social and political roots. The "robber baron" period of late 19th century America has strong similarities to what we are seeing today, and technological stagnation was not the cause.
When are we throwing anti-trust at the robber barons? That's the real question.
And as of now, we are not having "technological innovation". We found a new jackhammer and are tearing up the entire house experimenting with it. Maybe when the "shiny new thing" effect wears off we'll get true innovation. But as of now people are just getting paid to show off jackhammers.
My current project is the culmination of 15 years of software development.
I started out building a full stack framework like Meteor framework (though I started before Meteor framework was created in 2012 and long before Next.js).
Then I ported it to Node.js because I saw an advantage in having the same language on the frontend and backend.
Then I noticed that developers like to mix and match different libraries/modules and this was a necessity. The whole idea of a cohesive full stack framework didn't make sense for most software. So I extracted the most essential part of it that people liked and this became SocketCluster. It got a fair amount of traction in the early days.
At the time, some people might have thought SocketCluster was trying to be a more scalable copycat of Socket.io but actually I had been working on it for several years by that point. I just made the API similar when I extracted it for better compatibility with Socket.io but it had some additional features.
A few years ago, I ended up building a serverless low-code/no-code CRUD platform which removes the need for a custom backend and it can be used with LLMs directly (you can give them the API key to access the control panel). It can define the whole data schema for you. I've built some complex apps with it to fully prove the concept with advanced search functionality (including indexing with a million records).
I've made some technical decisions which will look insane to most developers but are crucial and based on 15 years of experience, carefully evaluating tradeoffs and actual testing with complex applications. For example, my platform has only 3 data types: String, Number and Boolean. The string type supports some additional constraints to allow it to be used to store any kind of data like lists, binary files (as base64)... Having just 3 types greatly simplifies spam prevention and schema validation. It makes it much easier for the user (or LLM) to reason about and produce a working, stable, bug-free solution.
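To illustrate why so few types keep validation small (a hypothetical Go sketch in the spirit of the comment above, not the platform's actual code): with only three types, checking a record reduces to one type assertion per field.

```go
package main

import "fmt"

// FieldType enumerates the three hypothetical schema types.
type FieldType int

const (
	String FieldType = iota
	Number
	Boolean
)

// validate checks a record against a schema. With only three types,
// validation is a single switch per field; there are no nested
// structures, dates, enums, or unions to special-case.
func validate(schema map[string]FieldType, record map[string]interface{}) error {
	for field, ft := range schema {
		v, ok := record[field]
		if !ok {
			return fmt.Errorf("missing field %q", field)
		}
		switch ft {
		case String:
			if _, ok := v.(string); !ok {
				return fmt.Errorf("field %q: expected string", field)
			}
		case Number:
			if _, ok := v.(float64); !ok {
				return fmt.Errorf("field %q: expected number", field)
			}
		case Boolean:
			if _, ok := v.(bool); !ok {
				return fmt.Errorf("field %q: expected boolean", field)
			}
		}
	}
	return nil
}

func main() {
	schema := map[string]FieldType{"name": String, "age": Number, "active": Boolean}
	err := validate(schema, map[string]interface{}{"name": "Ada", "age": 36.0, "active": true})
	fmt.Println(err) // <nil>
}
```

Lists or binary blobs would, as described above, be encoded into the String type rather than given their own types.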
That said I've been struggling to sell it because there are some popular well funded solutions on the market which look superficially similar or better. Of course they can't handle all the scenarios, they're more complex, less secure, don't scale, require far more LLM tokens, lead to constant regressions when used with AI. It's just impossible to communicate those benefits to people because they will value a one-shotted pretty UI over all these other aspects.
Saasufy itself isn't open source. I'm planning to sell licenses of the code (a limited number of them to make it scarce). SocketCluster is a core component of Saasufy. The goal did evolve slightly; originally, it was to make it easier to build full stack applications. Now it actually lets you build entire full stack apps without code. That bigger goal has been achieved. I have some videos linked from the Docs page showing how it works.
But yes, I'm a bit paranoid about my situation. I do feel like my work is suppressed by algorithms. Things feel very different for me now than they did before in terms of finding users. It's really hard to find people to try my work. Difficult even to convince them to watch a 10 minute video. Though I guess many people are in the same boat right?
That makes more sense but I don't see what pub/sub has to do with a no-code full-stack framework. Other than that some of them might want a chat widget?
What's faster now are the time-dependent factors of production - product development, go-to market, etc.
What's slower now are threats to production - even minor regulations take years or decades, and often appear only when workarounds have surfaced.
So what changed in the last 40+ years are the many tools for businesses to shape the conditions of their business - the downstream market, upstream suppliers, and regulatory support/constraints. This is extremely patient work over generations of players, sometimes by individuals, but usually by coalitions of mutual corporate self-interest, where even the largest players couldn't refuse to participate.
I see that at play in economics and geopolitics. While the West wants results in weeks, months or, anyway, in less than one term, China being inspired by Confucius and Sun Tzu is calmly waiting and slowly building.
> everybody who is like me, fully onboarded into AI and agentic tools, seemingly has less and less time available because we fall into a trap where we’re immediately filling it with more things
I do wonder if productivity with AI coding has really gone up, or if it just gives the illusion of that, and we take on more projects and burn ourselves out?
> I do wonder if productivity with AI coding has really gone up
Here's the thing: we never had a remotely sane way to measure productivity of a software engineer for reasons that we all understand, and we don't have it now.
Even if we had it, it's not the sort of thing that management would even use: they decide how productive you are based on completely unrelated criteria, like willingness to work long hours and keeping your mouth shut when you disagree.
If you ask those types whether productivity has gone up with AI, they'll probably say something like "of course, we were able to let go a third of our programmers and nothing really seems to have changed"
"Productivity" became a poisoned word the moment that the suits realized what a useful weapon it was, and that it was impossible to challenge.
>"Productivity" became a poisoned word the moment that the suits realized what a useful weapon it was, and that it was impossible to challenge.
Not impossible to challenge. But most people don't have the legal funds to do so. Those that do tend to get a cushy severance bribe to stay quiet and they move on elsewhere.
That's also why it's a long process to "fire" someone but easy to "lay off" instead. Layoffs are never about productivity (so it doesn't matter anyway), and the US does absolutely nothing to protect against them, unlike most of the world.
What society and America is about to realize is that it really doesn’t matter how productive you are at software and technological innovations when systemic things outside of the economic system are eroding.
It doesn’t matter how fast we can make our widgets and chatbots when what you need is to have a self sufficient workforce. We have outsourced everything material and valuable for society. Now we are left with industries of gambling, ad machines and pharmaceuticals with a government that is functionally bankrupt and politicians that have completely sold out
> I do wonder if productivity with AI coding has really gone up, or if it just gives the illusion of that, and we take on more projects and burn ourselves out?
It definitely hasn't for me. I spent about an hour today trying to use AI to write something fairly simple and I'm still no further forward.
I don't understand what problem AI is supposed to solve in software development.
> I don't understand what problem AI is supposed to solve in software development.
When Russians invaded Germany during WWII, some of them (who had never seen a toilet) thought that toilets were advanced potato washing machines, and were rightfully pissed when their potatoes were flushed away and didn't come back.
Sounds like you're feeling a similar frustration with your problem.
At some point hearing "you're holding it wrong" and "here's a metaphor for why you're dumb" in response to real shortcomings with AI, and the manic hype behind it, becomes repetitive and feels like there really aren't good arguments or evidence against those shortcomings and hype.
Well, following advice from folk on here earlier, I thought I'd start small and try to get it to write some code in Go that would listen on a network socket, wait for a packet with a bunch of messages (in a known format) to come in, and split those messages out from the packet.
I ended up having to type hundreds of lines of description to get thousands of lines of code that doesn't actually work, when the one I wrote myself is about two dozen lines of code and works perfectly.
It just seems such a slow and inefficient way to work.
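For comparison, the splitting step itself really is only a couple dozen lines of Go once the framing is pinned down. A minimal sketch, assuming a hypothetical two-byte big-endian length prefix per message, since the actual wire format isn't given in the comment above:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// splitMessages extracts messages from a packet where each message is
// prefixed by a 2-byte big-endian length. (The real format described
// above isn't specified here; this framing is an assumption.)
func splitMessages(packet []byte) ([][]byte, error) {
	var msgs [][]byte
	for len(packet) > 0 {
		if len(packet) < 2 {
			return nil, fmt.Errorf("truncated length prefix")
		}
		n := int(binary.BigEndian.Uint16(packet[:2]))
		packet = packet[2:]
		if len(packet) < n {
			return nil, fmt.Errorf("truncated message: want %d bytes, have %d", n, len(packet))
		}
		msgs = append(msgs, packet[:n])
		packet = packet[n:]
	}
	return msgs, nil
}

func main() {
	// Two messages, "hi" and "there", packed into one packet.
	packet := []byte{0, 2, 'h', 'i', 0, 5, 't', 'h', 'e', 'r', 'e'}
	msgs, err := splitMessages(packet)
	if err != nil {
		panic(err)
	}
	for _, m := range msgs {
		fmt.Println(string(m))
	}
}
```

The network side would just be a standard `net.Listen`/`Accept` loop reading into a buffer and feeding it to a function like this, which is roughly the "two dozen lines" being described.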
tbh that's not a helpful thing to say. I think a more productive thing would be to ask "What model are you using?" "Are you using it in chat mode or as a dedicated agent?" "Do you have an AGENTS.md or CLAUDE.md?"
I've also been underwhelmed with its ability to iterate, as it tends to pile on hacks. So another useful question is "did you try having it write again with what you/it learned?"
Agreed, that was a bit rough. Yes, they are not great at iterating and keeping long contexts, but look at what he’s describing and you have to agree that’s exactly the type of problem LLMs excel at.
Shouldn’t have to baby step through the basics when the author is clearly not interested in learning himself
> Shouldn’t have to baby step through the basics when the author is clearly not interested in learning himself
I'd rather assume good faith, because when I first started using LLMs I was incredibly confused what was going on, and all the tutorials were grating on me because the people making the tutorials were clearly overhyping it.
It was precisely the measured and detailed HN comments that I read that convinced me to finally try out Claude, so I do my best to pay it forward :)
>Shouldn’t have to baby step through the basics when the author is clearly not interested in learning himself
Okay. Whip up your favorite model and report back to us with your prompts. I'm pretty anti-AI, but you're going to attract more bees with honey than smoke.
> I think a more productive thing would be to ask "What model are you using?" "Are you using it in chat mode or as a dedicated agent?" "Do you have an AGENTS.md or CLAUDE.md?"
In my case I'd have to say "Don't know, whatever VS Code's bot uses", and "no idea what those are or why I have to care".
The reason I ask about what model is I initially dismissed AI generated code because I was not impressed with the models I was trying. I decided if I was going to evaluate it fairly though, I would need to try a paid product. I ended up using Claude Sonnet 4.5, which is much better than the quick-n-cheap models. I still don't use Claude for large stuff, but it's pretty good at one-off scripts and providing advice. Chances are VS Code is using a crappy model by default.
> no idea what those are or why I have to care
For the difference between chat mode and agent mode, chat mode is the online interface where you can ask it questions, but you have to copy the code back and forth. Agent mode is where it's running an interface layer on your computer, so the LLM can view files, run commands, save files, etc. I use Claude in agent mode via Claude Code, though I still check and approve every command it runs. It also won't change any files without your permission by default.
AGENTS.md and CLAUDE.md are pretty much a file that the LLM agent reads every time it starts up. It's where you put your style guide in, and also where you have suggestions to correct things it consistently messes up on. It's not as important at the beginning, but it's helpful for me to have it be consistent about its style (well, as consistent as I can get it). Here's an example from a project I'm currently working on: https://github.com/smj-edison/zicl/blob/main/CLAUDE.md
I know there's lots of other things you can do, like create custom tools, things to run every time, subagents, plan mode, etc. I haven't ever really tried using them, because chances are a lot of them will be obsolete/not useful, and I'd rather get stuff done.
I'm still not convinced they speed up most tasks, but it's been really useful to have it track down memory leaks and silly bugs.
The problem is that I want something that listens on a TCP connection for GD92 packets, and when they arrive send appropriate handshaking to the other end and parse them into Go structs that can be stuffed into a channel to be dealt with elsewhere.
And, of course, something to encode them and send them again.
How would I do that with whatever AI you choose?
I'm pretty certain you can't solve this with AI because there is literally no published example of code to do it that it can copy from.
GD92 packets? No idea what you’re talking about, but if it has a spec then it doesn’t matter whether it was trained on it. Break the problem down into small enough chunks. Give it examples of expected input and output, and then any LLM can reason about it. Use a planning mode and keep the context small and focused on each segment of the process.
You’re describing a basic TCP exchange; learn more about the domain and how the packets are structured, and the problem will become easier by itself. LLMs struggle with large code bases which pollute the context, not straightforward apps like this.
One other thing, it might be worthwhile having the spec fresh in the LLM's context by downloading it and pointing the agent at it. I've heard that that's a fruitful way to get it to refresh its memory.
> GD92 packets? No idea what you’re talking about but if it has a spec then it doesn’t matter if it’s trained on it.
Okay, so you're running into the same problem that LLMs are.
> Break the problem down into small enough chunks. Give it examples of expected input and output then any llm can reason about it.
So I have to do lots of grunt work?
> You’re describing a basic tcp exchange, learn more about the domain and how packets are structured and the problem will become easier by itself
I've written dozens of things that deal with TCP. I already have a fully-working example of what I want. The idea was to test if I could recreate it using LLMs.
How is it supposed to work? How does it put in the code I already know I want?
>Okay, so you're running into the same problem that LLMs are.
I can't tell if you are a troll or not, but you can't complain that nobody understands your intentionally vague and obtuse way of describing the problem at hand while pretending you're superior.
You have to rename the file ending to PDF. It's probably the wrong spec, because I'm basing this research on literally four letters that could mean anything since there is zero context given here. I've also found some German documents about chemistry.
If your argument is that LLMs and humans are stupid because they don't know what a "GD92" is, then yeah maybe it's a you problem.
Go and throw the spec into openai codex inside limactl (get it from GitHub) and use zed (the editor) and a SSH remote project to get inside the VM, don't forget to enable KVM for performance. The free tier for openai is fine, but make sure to use codex 5.2.
First ask questions on what the binary encoding is based on. It's probably X.400, then once you've asked enough questions, tell it to implement it. You probably won't have to read the spec at all yourself.
Consider the idea of trying to determine how quickly an unknown number of timers will go ping. It could be 10,000 timers that ping when finished, or 1,000,000. I don't know when they are going to ping, just that all the timers are running at different speeds spread over some distribution.
After one time period, 5,000 pings have been detected. Should you conclude that timers are pinging fairly quickly?
You cannot tell the overall duration of the timers if you don't know how many timers are out there. Your only evidence that a timer exists is its ping; consequently you cannot tell whether a small population is pinging at high speed or a large population at a moderate speed. In both cases the data you receive are the fastest of the population.
In other words we haven't yet seen what the 10 year project made using these tools is like (or even if it exists/will exist), because they haven't been around for 10 years.
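The ambiguity can be made concrete with a small sketch: a small, fast population and a large, slow one can produce exactly the same number of first-period pings. (Spreading durations uniformly here is purely illustrative; the argument holds for any distribution you can't observe directly.)

```go
package main

import "fmt"

// pingsInFirstPeriod counts how many of n timers, with durations spread
// evenly from just above 0 up to maxDuration periods, finish within the
// first period. Only these finishers are observable as pings.
func pingsInFirstPeriod(n int, maxDuration float64) int {
	count := 0
	for i := 0; i < n; i++ {
		// timer i's duration, evenly spread over (0, maxDuration]
		d := maxDuration * float64(i+1) / float64(n)
		if d <= 1.0 {
			count++
		}
	}
	return count
}

func main() {
	// 10,000 fast timers (durations up to 2 periods)...
	fmt.Println(pingsInFirstPeriod(10_000, 2.0)) // 5000
	// ...and 1,000,000 slow timers (durations up to 200 periods)
	// yield the same observation of 5,000 pings.
	fmt.Println(pingsInFirstPeriod(1_000_000, 200.0)) // 5000
}
```

Both scenarios show the observer 5,000 pings after one period, so the ping count alone cannot distinguish them, which is the point being made about long-lived projects.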
On the contrary, you can solve the tree problem with money. There are nurseries that sell mature trees -- most people though will not choose to spend $20k on a tree.
But anyhow, you can buy large-ish burlapped trees, but they aren’t as healthy, often die, and are nothing close to a 100+ yr old estate oak tree or a decades-old rose garden. You just can’t make it faster; transplanting plants that old will kill them.
Nitpicking aside, your original point isn't even true. You cannot transplant a 100-year-old tree (one that has not been dramatically constrained in size) and expect it to survive for any reasonable length of time.
You won't find a 50 year old American Chestnut in a nursery, lol.
And forget about 20k. If you find someone willing to sell their tree you're looking at at least 10x that for the logistics of moving a 20 ton root system.
We value human ingenuity and effort. If there were a button "create an Oscar-worthy movie" that anyone could press, it would make a paradox. The trick is that this won't render the film industry useless, since we watch movies only when we believe they're worth our time, which is not true for zero-effort content.
It’s a tool, and now part of the culture—so people are naturally using it.
What it seems to reveal is less about the tool and more about us. The fragmentation was already there.
Maybe the response is to slow down a bit—revisit what matters, and use it with some sense of proportion and coherence.
The part about open source projects needing years of sustained work rings true for sure, but it kind of skips over why a lot (most?) projects die. Sometimes the author gets bored, sure, but maintaining something used by strangers is a completely different job than building something for yourself, and nobody warns you about that transition.
Some of the items listed in the "takes time" list at the beginning are not great examples. They are better emblems of artificial scarcity, especially Hermès bags.
You may speed things up when you start, but eventually you will likely have to pay back the time. That is not necessarily a bad thing; it is just what it is.
AI makes us move faster, but if one is not careful, they may only be moving faster in the wrong direction, and they will eventually spend time moving back.
I used to be all about frictionlessness. Speed! Convenience! Make it smoother!
Then I found... actually, paper based systems work way better for me. Digital systems just turn into big piles of bloat. It's too easy to add stuff. So they grow until they collapse under their own weight.
(Take a look at your contacts list. How many should still be in there? How many did you add for a one time thing and then keep forever? Should there be a temporary folder? Shouldn't it be the default? That's how it works in nature!)
Ended up using paper as a temporary improvisation, then realized it solved all the problems I had with digital systems.
Friction is good.
With communication, it used to cost money to communicate. Now it's free, and we now have a sea of noise where most messages are "adding negative value", because they steal your time and energy.
Same with the app stores. Do any search and you find an ocean of slop! The gems drown in the sea of slop.
>Nobody is going to mass-produce a 50-year-old oak. And nobody is going to conjure trust, or quality, or community out of a weekend sprint.
absolutely although i wonder how different 'trust' is in the culture of tomorrow? will it 'matter' as much, be as cherished, as earned over the fullness of time?
i suspect it is a pendulum - and we are back to oak trees at some point - but which way is the pendulum swinging right now?
> I’m also increasingly skeptical of anyone who sells me something that supposedly saves my time.
Imagine a world in which the promise of AI was that workers could keep their jobs, at the same compensation as before, but work fewer hours and days per week due to increased productivity.
What could you do with those extra hours and days? Sleep better. Exercise more. Prepare healthy meals. Spend more time with family and friends. The benefits to physical and mental well-being are priceless. Even if you happened to earn extra money for the same amount of work, your time can be infinitely more valuable than money.
Unfortunately, that's not this world. Which is why the "increased productivity" promise doesn't seem to benefit workers at all.
If you look at the technological utopias that people imagined 50, 60+ years ago, they involved lives of leisure. If you would have told them that advances in technology would not reduce our working hours at all, maybe they would have started smashing the machines back then. Now we're supposed to be happy with more "stuff", even if there's no more time to enjoy stuff.
> Consider a typical working day in the medieval period. It stretched from dawn to dusk (sixteen hours in summer and eight in winter), but, as the Bishop Pilkington has noted, work was intermittent - called to a halt for breakfast, lunch, the customary afternoon nap, and dinner. Depending on time and place, there were also midmorning and midafternoon refreshment breaks. These rest periods were the traditional rights of laborers, which they enjoyed even during peak harvest times. During slack periods, which accounted for a large part of the year, adherence to regular working hours was not usual. According to Oxford Professor James E. Thorold Rogers[1], the medieval workday was not more than eight hours. The worker participating in the eight-hour movements of the late nineteenth century was "simply striving to recover what his ancestor worked by four or five centuries ago."
> The contrast between capitalist and precapitalist work patterns is most striking in respect to the working year. The medieval calendar was filled with holidays. Official -- that is, church -- holidays included not only long "vacations" at Christmas, Easter, and midsummer but also numerous saints' and rest days. These were spent both in sober churchgoing and in feasting, drinking and merrymaking. In addition to official celebrations, there were often weeks' worth of ales -- to mark important life events (bride ales or wake ales) as well as less momentous occasions (scot ale, lamb ale, and hock ale). All told, holiday leisure time in medieval England took up probably about one-third of the year. And the English were apparently working harder than their neighbors. The ancien régime in France is reported to have guaranteed fifty-two Sundays, ninety rest days, and thirty-eight holidays. In Spain, travelers noted that holidays totaled five months per year.[5]
> The peasant's free time extended beyond officially sanctioned holidays. There is considerable evidence of what economists call the backward-bending supply curve of labor -- the idea that when wages rise, workers supply less labor. During one period of unusually high wages (the late fourteenth century), many laborers refused to work "by the year or the half year or by any of the usual terms but only by the day." And they worked only as many days as were necessary to earn their customary income -- which in this case amounted to about 120 days a year, for a probable total of only 1,440 hours annually (this estimate assumes a 12-hour day because the days worked were probably during spring, summer and fall). A thirteenth-century estimate finds that whole peasant families did not put in more than 150 days per year on their land. Manorial records from fourteenth-century England indicate an extremely short working year -- 175 days -- for servile laborers. Later evidence for farmer-miners, a group with control over their worktime, indicates they worked only 180 days a year.
I love this piece and read to the end. Currently working on an idea. Started with Claude and it made a mess. Now enjoying doing it by hand. It just feels easier! AI is assisting on blockers now, not writing code.
Anyway 2 areas I slightly disagree on.
Open source abandonware is fine. Sometimes people give up because they realize it is not a good idea. Or they get busy or sick.
And 10 years at a startup is great, but that relies on it being a good startup. Entropy at companies means I have never made it to 10 years even though I wanted to.
I don't disagree with the sentiment, but I think the signals that we use to determine whether we're doing the right things are different with the new AI enhanced toolsets.
Refactoring decent-sized components is an order of magnitude easier than it was, but the more important signal is still: why are you refactoring? What changed in your world or your world-view that caused this?
Good things still take time, and you can't slop-AI code your way to a great system. You still need domain expertise (as the EXCELLENT short story from the other day explained, Warranty Void if Regenerated (https://nearzero.software/p/warranty-void-if-regenerated) ). The decrease in friction does definitely allow for more slop, but it also allows for more excellence. It just doesn't guarantee excellence.
> We know this intuitively. We pay premiums for Swiss watches, Hermès bags and old properties precisely because of the time embedded in them. Either because of the time it took to build them or because of their age.
Oh, I thought it was because they're a way to show off about being rich.
> We require age minimums for driving, voting, and drinking because we believe maturity only comes through lived experience.
Even if she could reach the pedals, my 4yo doesn't have the attention span to drive. This isn't a "lived experience" thing, it's a physical brain development thing. IIRC there are effects with learning math, where starting earlier had limited impact on being able to move to certain more advanced topics earlier; i.e. there's more going on than just hours of experience.
The standard age for voting is also the age for being a legal adult. There are sound logical reasons that these ages should match.
The standard drinking age is due to pressure by activists, and AIUI is lower in other countries.
> Oh, I thought it was because they're a way to show off about being rich.
Maybe for some. I think these examples were carefully chosen. Hermès are made in France, and "Swiss watch" doesn't automatically mean Rolex, though in that case Rolex does own most of its manufacturing (and there is a whole world of carefully made watches out there that don't cost 10K). As for old properties... there is a huge range there, but unless you are living in a castle, most people, at least in my city, are likely silently thinking: "I'm so sorry for them that they have to live in that old house."
Programmers no longer have any leverage now they can all be replaced by machines. It doesn't matter how productive you are, the system will always demand more.
I think it's hard to argue with the idea that we should slow down and think more, and that AI is pushing us to do the opposite. But time is limited, it's very limited. And at least in a professional setting, to spend time on the correct things is key.
What AI allows us to do is those things we would not have been able to prioritize before: to "write" those extra tests, add that minor feature, or solve that decade-old bug. Things we would never have been able to prioritize, we are now able to do.
It's not perfect, it's sometimes sloppy, but at least it's getting shit done. It doesn't matter if you solve 10% of your problem perfectly if you never have time for the remaining 90%.
I do miss the coding, _a lot_, but productivity is a drug and I will take it.
> Trees take quite a while to grow. If someone 50 years ago planted a row of oaks or a chestnut tree on your plot of land, you have something that no amount of money or effort can replicate. The only way is to wait. Tree-lined roads, old gardens, houses sheltered by decades of canopy: if you want to start fresh on an empty plot, you will not be able to get that.
This is a bad start. Louis XIV at Versailles and Marly famously made whole forests appear or disappear overnight, to the utter dismay of Saint-Simon, the memorialist, who thought this was an unacceptable waste of money and energy.
And this was before the industrial revolution. Today I'm sure many more miracles happen every day.
It's worth noting that just because something takes time doesn't mean it's automatically worth doing.
Vibe slop-ing at supersonic speeds and waiting years to grow aren't the only options, there's something in between where you have enough signal to keep going and enough speed to not waste years on the wrong thing.
I feel that today's VCs have completely disregarded the middle and are focused on getting as big as possible as fast as possible without regard to the effect it's having on the ecosystem.
I don't see the problem - everything the author describes has, and will always be, true. You can't vibe code anything of value in a weekend exactly because anyone _else_ with the same level of experience can do the exact same thing in the same weekend! This has always been true across all trades and technologies. Once again, the domain expertise, wisdom, and simply _time_ of doing something always win. LLMs literally don't change that at all.
Lots of things take days, not hours. And I don't think AI changes that much. It does let you (or, let's be real, your middle management) try to make it happen in hours tho :P
I feel for the larger companies and the people who started 10 years ago, though.
They have spent the last decade building processes and guardrails for getting consistent average performance from people. But now, some talented people who worked at those companies are building their own new companies without the overhead and moving much, much more quickly.
I think what we assume is "vibe slop at inference speed" is not as simple as people make it out to be. From a perspective, I think generally it might be people trying to save jobs.
I'm seeing more slop come out of larger, older companies than the new ones (with experienced operators).
And the speed is somewhat scary. For a smaller team, it doesn't take as much effort to build a deep, beautiful product anymore.
The bottleneck was never an engineer's ability to code. It was the 16 layers between the customer and the programmer, which have vanished in smaller companies and are forcing larger ones to produce slop.
I'm reading Against The Machine by Paul Kingsnorth, and reading this blog piece now, it's hard not to make connections with the points of the book: the use of the tree as a counter-argument to the machine's automation credo very much aligns with what I've read so far.
We're gonna have to learn some lessons from other engineering fields in this regard. Electrical, civil, mechanical, aerospace... They've all had to put processes in place to intentionally slow things down for a long time. I could throw a circuit board layout together 1000x faster than a team of engineers could have 50 years ago, but that industry has developed a culture of rigorous review processes to ensure quality, which means I couldn't actually move nearly that fast.
Undoubtedly a lot of that comes down to production cost and safety. A plane is far more likely to kill people and costs a shitload more to produce than an app (though plenty of software is mission critical). But now in software we can move quickly enough up front that if we don't start applying some discipline, it's going to bite us in the ass in the long run.
It takes ten years to track down a really strange good idea, from first whiff to laying your hands on it. There's no getting around that. And it is neither profitable nor efficient.
Social connections. Trust. Facetime. All matter more than ever.
Want a moatable software business? Know your customers on a personal level. Have a personal relationship. Know the people that sign the contracts, know their kids names, where they vacationed last winter, their favorite local restaurant.
“The power of doing anything with quickness is always prized much by the possessor, and often without any attention to the imperfection of the performance.”
I admit that it’s a conflict and I don’t know if I have the right answers. I cannot help but see the good and bad in these things. Rejecting it outright is unlikely to help.
> We require age minimums for driving, voting, and drinking because we believe maturity only comes through lived experience.
Not true; we do this because 99% of the time it's true. However, there are people who would be perfectly competent and responsible drivers without living to the age of 16-18. Same with voting: there are humans who have a deep understanding of and intelligence about politics at a younger age than suffrage. Equally, there are people who will be reckless drivers at 40 and vote on a whim at 60.
We have these rules not because maturity only comes through lived experience; we have them because it's strongly correlated and covers most error cases.
To take this to AI: run the model enough times with a high enough temperature, and perhaps it can solve your challenges with high enough quality. Just a thought.
I’ve been noticing that this simple reality explains almost all of both the good and the bad that I hear about LLM-based coding tools. Using AI for research or to spin up a quick demo or prototype is using it to help plot a course. A lot of the multi-stage agentic workflows also come down to creating guard rails before doing the main implementation so the AI can’t get too far off track. Most of the success stories I hear seem to be in these areas so far. Meanwhile, probably the most common criticism I see is that an AI that is simply given a prompt to implement some new feature or bug fix for an existing system often misunderstands or makes bad assumptions and ends up repeatedly running into dead ends. It moves fast but without knowing which direction to move in.
This is a real problem when the "direction" == "good feedback" from a customer standpoint.
Before, we had a product person for every ~20 people generating code; now we're all product people and the machines are writing the code (not all of it, but enough of it that I will -1 a ~4000-line PR and ask someone to start over, instead of digging out of the hole in the same PR).
Feedback from real users of the system takes time to get back to the product team.
You need a PID-like smoothing curve over your feature changes.
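To make the metaphor concrete, here is a toy sketch of a PID-style damper on a feature rollout: instead of flipping a flag to 100% at once, the rollout fraction is nudged toward a target success metric. The gains, starting fraction, and the `read_metric` feedback signal are all invented for illustration.

```python
# Toy PID-style controller nudging a feature's rollout fraction toward a
# target success metric, rather than shipping to everyone at once.
def pid_rollout(read_metric, target=0.95, kp=0.5, ki=0.05, kd=0.1, steps=20):
    rollout = 0.1           # start by exposing 10% of users
    integral = prev_error = 0.0
    for _ in range(steps):
        error = read_metric(rollout) - target  # positive: metric beats target
        integral += error
        derivative = error - prev_error
        prev_error = error
        # Expand the rollout when the metric beats the target, back off when it lags.
        rollout += kp * error + ki * integral + kd * derivative
        rollout = min(1.0, max(0.0, rollout))
    return rollout

# A consistently healthy signal ramps the rollout all the way up;
# a consistently bad one backs it off to zero.
print(pid_rollout(lambda r: 1.0), pid_rollout(lambda r: 0.5))
```

The point of the integral and derivative terms is exactly the "smoothing" above: a single noisy reading doesn't whipsaw the rollout, but a sustained trend does.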
Like you said, speed isn't velocity.
Specifically, if you have a decent experiment framework to keep the disclosure progressive across the customer base, going in the wrong direction isn't as huge a penalty as it used to be.
I liked the PostHog newsletter about the "Hidden dangers of shipping fast", I can't find a good direct link to it.
https://newsletter.posthog.com/p/the-hidden-danger-of-shippi...
This Taylorist idea (now reincarnated as "design thinking") that you can observe someone doing a job and then decide better than them what they need is ridiculous and should die.
Good products are built by the people who use the thing themselves. Doesn't mean though that choosing good features (product design and engineering) isn't a skill in itself.
As a dev team we were able to crank the speed up even more, and silly product people thought they were doing something good by demanding even more from us. But that was one of the instances where users were helpful :).
People use dozens of apps every day to do their work. Just think about how you are going to make time to give feedback on each of them.
That's pretty much solved by the size of the audiences. You won't give feedback on 12 apps, but 11 other people will probably do so on 11 different apps.
Of course, the issue with my domain is that there's plenty of feedback, and product owners just dismiss it. Burn down your entire portfolio to get that boosted shareholder value for the next earnings report.
Well by asking repeatedly of course but you just piss people off.
Have you ever given feedback to Atlassian, Google, Microsoft?
I suppose there is an argument that if you are building the wrong thing, build it fast so that you can find out more quickly that you built the wrong thing, allowing you to iterate more quickly.
Before AI, I worked at a B2B open source startup, and our users were perpetually annoyed by how often we asked them to upgrade and were never on the latest version.
And frankly, they had a point.
Especially in the B2B context, stability is massively underrated by product teams.
There is very little I hate more than starting my work week on a Monday morning and finding out someone changed the tools I use for daily business again.
Even if it's objectively minor, like Apple's last pivot to the Windows Vista design... it just annoys me.
But I'm not the person paying the bills for the tools I use at work, and the person who is almost never actually uses the tools themselves. Hence shiny redesigns and pointless features galore.
Way too often that is used as an excuse for various forms of laziness; to not think about the things you can already know. And that lack of thinking repeats in an endless cycle when, after your trial and error, you don't use what you learned because "let's look forward not backward", "let's fail fast and often" and similar platitudes.
Catchy slogans and heartfelt desires are great but you gotta put the brains in it too.
I doubt GP is suggesting ‘go ahead and be negligent to feedback and guardrails that let you course correct early.’
Plugging the Cynefin framework as a useful technique for practitioners here. It doesn’t have to be hard to choose whether or not rigorous planning is appropriate for the task at hand, versus probe-test-backtrack with tight iteration loops.
A lot of people are so enamored by speed, they are not even taking the time to carefully consider the full picture of what they are building. Take the HN frontpage story on OpenCode: IIRC, a maintainer admitted they keep adding many shallow features that are brittle.
Speed cannot replace product vision and discipline.
This won't really stop until investors start judging on quality and not quantity. But a lot of those are thinking in finances, and the thought of removing their biggest cost center is too tempting to not go all in on. So they want to hear "we made this super fast with 2-3 people!" instead of "we optimized and scaled this up to handle 400% more workload with double the performance".
Discovery is great and all but if what you discover is that you didn't aim well to begin with that's not all that useful.
Exactly this. Velocity is a vector. It has magnitude (aka speed) and direction.
Our industry has chased magnitude over all else for so long. Now we can put nitro in everyone's car and we get to where we wish to go very fast. Suddenly bad direction-setting is getting feedback where there used to be friction and natural time to steer.
My greatest hope is that a ton of bad leaders and middle managers end up finally getting exposed due to the advent of AI. (Will I be disappointed? Almost certainly yes.)
This reminded me of the idea that civilization is already a misaligned superintelligence, and that technology (incl. AI) just moves it faster in the wrong direction.
That's basically the problem of supermorality. If you're an actually benevolent AI, do you do what civilization tells you? Or do you do what is good? What happens if you disagree?
Thus far, AI has been good... For venture capitalists. Jury's out if it's good for humanity and civilization at large. There have been a lot of benevolent usages of AI thus far, but also a lot of bad.
As for those who disagree with the "benevolent AI," I think they just get sent to the gallows (either metaphorically or literally)
In the last year or so I've been able to prototype it and accelerate the development quite significantly using Claude and pals, and now it is very close to a finished product. On one hand there's no doubt in my mind that the LLM tools can make this sort of thing faster and let you churn through ideas until you find the right ones, but on the other hand, if I hadn't had that slow burn of mostly just thinking about it conceptually for 10 years, I would have ended up vibe coding a much worse product.
The number of times I've seen them get the wrong end of the stick in their COT is ridiculous.
Even when I tell them to only implement after my explicit approval they ignore this after 2 or 3 followups and then it's back to them going down blind alleys.
It also moves fast with a tendency to pick the wrong direction (according to the goal of the prompter) at every decision point (known or unknown).
A proper capitalist system will tend toward the right direction as directed by the market yea? All of this neuroticism about AI doesn't matter.
Speed actually just wins, because we are usually constrained by time.
Working or useful software? AI hasn't produced any at all since 2023.
Sorry, but I don’t understand what you mean here. What do we win by being faster at producing the wrong things?
1) a lot of shallow, orthogonal directions is better than 1 deep, careful approach
2) There's no social aspect to churning out a bunch of slop that will affect the perception of potential "right things" later. My domain can be particularly grudgeful in this regard.
2) I read that part twice and could not figure out what it is you are trying to say.
But I am not just dispatching to agents. I work interactively with a chat interface, and sometimes, I will just bin a whole hour's worth of back-and-forth, because we're not getting anywhere (in fact, I did exactly that, about 30 minutes ago).
But that hour is peanuts, compared to the ten hours that I would have spent, trying to figure it out on my own. With an LLM (and git), I can "run something up the flagpole, and see who salutes." I can afford to experiment with very large code bases, and toss out a whole bunch of stuff, if need be.
That said, I know damn well, that quite a few folks here, would sneer at my methodology, as "awkward, stodgy, and slow." Nevertheless, I am pretty chuffed with the results. Yeah, it's slower than some folks would do it, but the Quality is really high, and I'm happy with the results.
My favorite thing to do, is (for example) toss all 5 of my SDK files into the LLM, paste in the JSON server interaction, describe the bug, and ask it to help me figure it out.
Nine times out of ten, it finds the bug quickly. The real bug. I am not always happy with the proposed solutions, but finding the root cause is always the time-consuming part.
One more thing I try is to give the same prompt once in a while to ChatGPT/Gemini/Grok, then take the 2-out-of-3 ideas forward.
All leading AIs seem to have some blind spot, like some kind of intrinsic character. One will completely overlook some particular bug while finding other excellent bugs and edge cases.
Getting the code through all 3 before committing has shown excellent results for me.
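The 2-out-of-3 habit is just a plain majority vote over interchangeable reviewers. A loose sketch, where the `reviewers` are hypothetical stand-ins for whatever model calls you'd actually make:

```python
from collections import Counter

def majority_verdict(prompt, reviewers):
    """Ask each reviewer the same prompt; keep any verdict that at
    least two agree on, otherwise return None (escalate to a human)."""
    verdicts = [review(prompt) for review in reviewers]
    winner, count = Counter(verdicts).most_common(1)[0]
    return winner if count >= 2 else None

# Hypothetical stand-ins for three different models reviewing one diff:
reviewers = [
    lambda p: "off-by-one in loop bound",
    lambda p: "off-by-one in loop bound",
    lambda p: "missing null check",
]
print(majority_verdict("review this diff", reviewers))
```

The vote only helps if the models' blind spots are independent, which matches the observation above that each one misses different bugs.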
>toss all 5 of my SDK files into the LLM, paste in the JSON server interaction, describe the bug, and ask it to help me figure it out.
I wonder if this "quality" code wouldn't have that many bugs to dive into if it was more carefully considered and produced up front?
This harkens back to a study in 2024 where senior devs were actually less productive with LLMs but felt more productive, even after being told they were less productive.
I'm certain this is true for me. The only thing LLMs do get me is the ability to make forward progress on tasks while I'm in meetings, which is a net positive at least.
Really liked the article; I think it is awesome, and I strongly believe AI for coding is here to stay. But I also believe we still need a strong understanding of why we are building things and what they should look like.
I've been using AI to help me write it and I've come to a couple conclusions:
- AI can make working PoCs incredibly quickly
- It can even help me think of story lines, decision paths etc
- Given that, there is still a TON of decisions to be made e.g. what artwork to use, what makes sense from a story perspective
- Playtesting alone + iterating still occurs at human speed b/c if humans are the intended audience, getting their opinions takes human time, not computer time
I've started using this example more and more as it highlights that, yes, AI can save huge amounts of time. However, as we learned from the Theory of Constraints, there is always another bottleneck somewhere that will slow things down.
Coming up with a genuinely interesting gameplay loop with increasing difficulty levels and progressively revealed gameplay mechanics is a fascinating and extremely difficult challenge, no matter how much AI you throw at the problem.
If LLMs (or other "AI" or even AI tools) are able to exactly replicate the behavior of a program (game or otherwise) without access to its source code, that's technologically cool. However, that means it's possible to cheaply replicate immense amounts of human work in a way the law does not cover.
If you take a game and use LLMs to reimplement both its assets and code from scratch but players have the same movement, weapons do the same damage, have the same spread and projectile speed, and so on, then the "new" game is not really new, it's based on other people's work. And nobody should be allowed to profit from other people's work without their consent and without compensating them.
Obviously, work is hard to quantify but that doesn't mean we should give up.
2) We can also look at it from a more utilitarian perspective. When something starts as closed source, people who made it got paid already and the owners (who often did not perform any useful work except putting in money) keep making money from then on. Reimplementing it as open source does not harm the original devs but allows more people to access it and it also often leads to a much more open and pro-social implementation without dark patterns. And the paid version often still has an advantage due to existing awareness, marketing and network effects.
OTOH when something starts as free/open under conditions such as anyone building on top of it has to release under the same conditions, then a company taking that work is violating explicitly stated wishes, is making money which doesn't reach the original devs and does not promote the original work. And it also has the aforementioned advantages. When the closed version eclipses the open one, the owners are free to add dark patterns and otherwise exploit their position further.
This way open work is a global social good, closed work is only good for those who own it.
---
I prefer argument 1 because it doesn't require the presence of exploitative power structures.
Either way, we should recognize there are multiple dimensions to compensation - here recognition and money. And work should be rewarded along both axes transitively.
„I was able to vibecode those 5 apps I always wanted but never had time to code them myself … it is so different now because — I don’t have time to use them”.
An example being the common attitude that [advanced tech] is just a math problem to be solved, and not a process that needs to play itself out in the real world, interacting with it and learning, then integrating those lessons over time.
Another way to put this is: experience is undervalued, and knowledge is overvalued. Probably because experience isn’t fungible and therefore cannot be quantified as easily by market systems.
1. Probably not his original idea, and now that I think about it this is kind of more Hegelian. I’m not familiar enough with Hegel to reference him though.
Resource to be exploited. That’s worse of course.
There’s a reason investors call non-technical executives taking over engineer-founded companies adult supervision, and it’s not to make sure the engineers eat their vegetables. Developers love to imagine themselves as sort of semi-for-profit-pseudo-academics with their conferences and white papers and FOSS projects, but where the rubber hits the road, we know who tells who what to do. It’s not the 90s anymore.
You fill a jar with sand and there is no space for big rocks.
But if you fill the jar with big rocks, there is plenty of space for sand. Remove one of the rocks and the sand instantly fills that void.
Make sure you fit the rocks first.
You fill the bottle with half of the water, you put the fish in, you can fill in the other half. If you start with the first half, you will end up with more water.
Yes, there are bad metaphors, and people who take metaphors too seriously. That you can conjure a bad metaphor with somewhat similar to semantics to some other metaphor does not mean that said metaphor is bad.
then you fill 3 liter bottle again, and pour the contents into the 5 liter bottle until the 5 liter one is full
empty the 5 liter bottle, and pour the 1 liter in the 3 liter bottle into the 5 liter bottle
fill the 3 liter bottle again and pour that into the 1 liter already in the 5 liter bottle to get 4 liters of water
That water overflow step is missing / implicit. But that's an observable event.
> Given a 3-liter container and a 5-liter container, both initially empty, and access to tap water, how can you measure exactly 4 liters of water without using any additional containers
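The bottle steps above can be checked mechanically: a minimal breadth-first search over (3L, 5L) jug states confirms that 4 liters is reachable (it happens to find a shortest six-move route, slightly different from the one spelled out above).

```python
from collections import deque

def measure(target=4, caps=(3, 5)):
    """BFS over jug states (a, b); returns the shortest sequence
    of moves that leaves `target` liters in either jug."""
    seen = {(0, 0): []}               # state -> moves that reach it
    queue = deque([(0, 0)])
    while queue:
        a, b = queue.popleft()
        if target in (a, b):
            return seen[(a, b)]
        pour_ab = min(a, caps[1] - b)  # how much A can pour into B
        pour_ba = min(b, caps[0] - a)  # how much B can pour into A
        moves = {
            "fill A": (caps[0], b),
            "fill B": (a, caps[1]),
            "empty A": (0, b),
            "empty B": (a, 0),
            "pour A into B": (a - pour_ab, b + pour_ab),
            "pour B into A": (a + pour_ba, b - pour_ba),
        }
        for name, state in moves.items():
            if state not in seen:
                seen[state] = seen[(a, b)] + [name]
                queue.append(state)
    return None

print(measure())
```

Because pouring stops when the receiving jug is full, the overflow step mentioned above is modeled explicitly by the `min(...)` amounts.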
I've offered and received some convoluted metaphors recently, love leaning hard into this one.
Not sure, I used to be better at diagnosing this type of episode.
Lost me in paragraph three. We pay for those things because they're recognizable status symbols, not because they took a long time to make. It took my grandmother a long time to knit the sweater I'm wearing, but its market value is probably close to zero.
The fact that those items took a long time to make is part of what makes them status symbols though, because if you pay a lot of money for something that took no time to make at all (see most NFTs) you look like an idiot to a lot of people.
This sort of thing was done at a time when everybody did it, and now that it's not done, nobody does it
No kid ever said "did you see the sweater that Timmy's grandma knitted for him? That kid is so cool! "
Mostly because they all had grams sweaters as well.
I don't know what term you were looking for, but a handmade present for someone dear is about the furthest thing from a "status symbol" that I can think of:
- it can't be bought
- it can't be transferred without losing almost all value (ie: it's only valuable to you, or at most your family, eBay doesn't want it)
- it provides no improvement whatsoever in one's social standing
I can't connect it at all to your listed points. An Olympic medal is about obvious a status symbol as I can imagine but it can't (meaningfully) be bought or transferred.
The status signified with a knit sweater is membership (and good standing!) in a caring family with elders not yet fully subsumed into their phones.
People, acquaintances and strangers alike, frequently comment on the knit socks I often wear, ask after who made them, and all of a sudden we're on "how's your mom" terms.
https://www.ebay.com/b/Olympic-Medal/27291/bn_55191416?_sop=...
> People, acquaintances and strangers alike, frequently comment on the knit socks I often wear,
Ok, that explains pretty much everything about your line of thought.
Thanks.
> https://www.ebay.com/b/Olympic-Medal/27291/bn_55191416?_sop=...
Of course you can buy an Olympic medal. You can't buy the status conferred by the medal (of Olympic champion / nth runner up).
> Ok, that explains pretty much everything about your line of thought.
I don't understand this either. Are you insulting me?
I'm also completely unimpressed by someone wearing a Rolex though, so different mileage for different people.
Understanding words does not require being impressed by anything, nor caring about the opinion of kids.
The old rich doesn't give a shit about Rolex watches beyond noticing the newb rich using them to tell on themselves.
If people don't consider that someone with more money is of a higher status then symbols of that wealth aren't meaningful.
I think a lot of people have an ingrained belief that "more money == more status"
Parents want to signal "this child is looked after and we have a lot of capacity". Clothes, lunches, a lot of things are quietly like this.
A good hand-made gift demonstrates the status of the giver and provides proof-of-work for their regard of the recipient.
The sweater is worth whatever a single person values it at or would pay for it. Said another way: would you sell it to me for $10? $50? $100? If you said no to all three, it's worth at least $100.
Some values are indeed created via marketing, but many items have intrinsic, or at least emotional, value.
https://youtu.be/02CjWIkTy-M
Agentic coding very much feels like a "video game" in the sense of you pull the lever and open the loot box and sometimes it's an epic +10 agility sword and sometimes its just grey vendor trash. Whether or not it generates "good" or even "usable" code fades to the background as the thrill of "I just asked for a UI to orchestrate micro services and BLAMMO there it was!" moves to the fore.
Think about the analogy of transaction speed of money transfer vs actual delivery of good. With AI, we would make all digital tasks instantaneous, but the physical world will hum along at its own speed unless we speed it up with dark factories and what not.
I expect these tools will quickly let people ramp up the complexity and line count of any software project by several orders of magnitude.
Then your 100 kloc JS Electron app will become a 10 Mloc JS Electron app running on a 500 Mloc browser runtime.
Repeat this across the stack for every software component, application, and library. If you think things are bloated now, just wait a few years: your notepad will be a 1 Mloc behemoth with the runtime performance of a glacier.
Creating these wrong things is just cheaper with LLMs. Since developers now spend less time and effort creating the wrong thing, they don't feel the need to validate or reflect on it as much.
The risk is not the tool itself, but the over-reliance on it and forgoing feedback loops that have made teams stronger, e.g. debugging, testing, and reasoning why something works a particular way.
I think of it differently. Speed is great because it means you can change direction very easily, and being wrong isn't as costly. As long as you're tracking where you're going, if you end up in the wrong place, but you got there quickly and noticed it, you can quickly move in a different direction to get to the right place.
Sometimes we take time mostly because it's expensive to be wrong. If being wrong doesn't cost anything, going fast and being wrong a lot may actually be better as it lets you explore lots of options. For this strategy to work, however, you need good judgment to recognize when you've reached a wrong position.
Btw the earendil.com website is gorgeous.
It really is, but it's come at the cost of actually being useful. It has a vague 'about' modal and that's your lot, which is confusing since they're encouraging people to join. I'm just not sure anybody's going to know what they're joining.
It's a surprisingly good filter and gets curious people to send mail in :)
This website does not resonate with the message I got earlier from the article. It does not give the impression of someone taking appropriate time to make it.
Yes, you cannot build years of community and trust in a weekend. But sometimes it's totally sufficient to plant a seed, give it small amounts of water, and leave it on its own to grow. Go ask my father, who has to deal with a huge maple tree that I planted 30 years ago and never cared for.
Open Source projects sometimes work like this. I've created a .NET library for Firebase Messaging in a weekend a few years ago… and it grew on its own with PRs flowing in. So if your weekend project generates enough interest and continues to grow a community without you, what’s the bad thing here? I don’t get it.
Sometimes a tree dies and an Open Source project wasn’t able to make it.
That said, I've just finished rewriting four libraries to fix long-standing issues that I haven't been able to fix for the past 10 years.
It's been great to use Gemini as a sparring partner to fix the API surface of these libraries, which had been problematic for the past 10 years. I was so quick to validate and invalidate ideas.
Once being one of the biggest LLM haters I have to say, that I immensely enjoy it right now.
Can't really say the same for vibecoding. You still need to do a lot of work that's ultimately putting lipstick on a pig. Maybe someone talented can make it pretty, but it has a quality ceiling, and most won't get anywhere close to that; people will just see a pig with lipstick on it.
What's funny is that I had many other projects attempted over the years, and many rose and died quickly, yet the one that has lasted the longest is also the one likely to last the longest from now on.
[0]: https://uxwizz.com
I feel this new world sucks. We have new technology that boosts the productivity of the individual engineer, and we could be doing MUCH better work, instead of just rushed slop to meet quotas.
I feel I'm just building my replacement, to bring the next level of profits to the c-suite. I just wish I wasn't burning out while doing so.
I don’t think it’s exclusive to startups or tech either, it seems more like a downstream consequence of the fact that there’s no real innovation anymore. Capitalism demands constant growth, and when there are real technological improvements you can achieve that growth through higher productivity. If there are none, you have to achieve that growth through other means like forcing employees to work longer or cutting costs. The alpha is all coming from squeezing the labor force right now.
This doesn't sound right to me. We are currently getting smacked upside the head by an enormous technological innovation. I believe that, even within the framework of capitalism, this problem has social and political roots. The "robber baron" period late 19th century America has strong similarities to what we are seeing today, and technological stagnation was not the cause.
And as of now, we are not seeing "technological innovation". We found a new jackhammer and are tearing up the entire house experimenting with it. Maybe when the "shiny new thing" effect wears off we'll get true innovation. But as of now people are just getting paid to show off jackhammers.
I started out building a full stack framework like Meteor framework (though I started before Meteor framework was created in 2012 and long before Next.js).
Then I ported it to Node.js because I saw an advantage to having the same language on the frontend and backend.
Then I noticed that developers like to mix and match different libraries/modules, and that this flexibility was a necessity. The whole idea of a cohesive full stack framework didn't make sense for most software. So I extracted the most essential part of it that people liked, and this became SocketCluster. It got a fair amount of traction in the early days.
At the time, some people might have thought SocketCluster was trying to be a more scalable copycat of Socket.io but actually I had been working on it for several years by that point. I just made the API similar when I extracted it for better compatibility with Socket.io but it had some additional features.
A few years ago, I ended up building a serverless low-code/no-code CRUD platform which removes the need for a custom backend and it can be used with LLMs directly (you can give them the API key to access the control panel). It can define the whole data schema for you. I've built some complex apps with it to fully prove the concept with advanced search functionality (including indexing with a million records).
I've made some technical decisions which will look insane to most developers but are crucial and based on 15 years of experience, carefully evaluating tradeoffs and actual testing with complex applications. For example my platform only has 3 data types. String, Number and Boolean. The string type supports some additional constraints to allow it to be used to store any kind of data like lists, binary files (as base64)... Having just 3 types greatly simplifies spam prevention and schema validation. Makes it much easier for the user (or LLM) to reason about and produce a working, stable, bug-free solution.
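To make the tradeoff concrete, here's a minimal sketch of what validation against such a three-type schema could look like. The type names and API below are hypothetical illustrations, not Saasufy's actual interface:

```go
package main

import "fmt"

// FieldType is one of the three primitive types described above.
type FieldType int

const (
	String FieldType = iota
	Number
	Boolean
)

// Schema maps field names to their primitive type.
type Schema map[string]FieldType

// Validate checks a record against the schema. With only three
// primitive types, the entire check is one switch per field.
func (s Schema) Validate(record map[string]any) error {
	for field, want := range s {
		v, ok := record[field]
		if !ok {
			return fmt.Errorf("missing field %q", field)
		}
		switch want {
		case String:
			if _, ok := v.(string); !ok {
				return fmt.Errorf("field %q: want string", field)
			}
		case Number:
			// JSON numbers decode to float64 in Go.
			if _, ok := v.(float64); !ok {
				return fmt.Errorf("field %q: want number", field)
			}
		case Boolean:
			if _, ok := v.(bool); !ok {
				return fmt.Errorf("field %q: want boolean", field)
			}
		}
	}
	return nil
}

func main() {
	s := Schema{"title": String, "count": Number, "done": Boolean}
	fmt.Println(s.Validate(map[string]any{"title": "a", "count": 2.0, "done": false})) // <nil>
	fmt.Println(s.Validate(map[string]any{"title": "a", "count": "2", "done": false}))
}
```

The point of the sketch is that the validation surface grows with the number of types; richer type systems need per-type constraint logic, coercion rules, and more error cases.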
That said I've been struggling to sell it because there are some popular well funded solutions on the market which look superficially similar or better. Of course they can't handle all the scenarios, they're more complex, less secure, don't scale, require far more LLM tokens, lead to constant regressions when used with AI. It's just impossible to communicate those benefits to people because they will value a one-shotted pretty UI over all these other aspects.
You can check out https://saasufy.com/ if interested.
Saasufy itself isn't open source. I'm planning to sell licenses of the code (a limited number of them to make it scarce). SocketCluster is a core component of Saasufy. The goal did evolve slightly; originally, it was to make it easier to build full stack applications. Now it actually lets you build entire full stack apps without code. That bigger goal has been achieved. I have some videos linked from the Docs page showing how it works.
But yes, I'm a bit paranoid about my situation. I do feel like my work is suppressed by algorithms. Things feel very different for me now than they did before in terms of finding users. It's really hard to find people to try my work. Difficult even to convince them to watch a 10 minute video. Though I guess many people are in the same boat right?
The desire to make life faster instead of better was identified as a neurosis of civilization in the 19th century.
What's slower now are threats to production - even minor regulations take years or decades, and often appear only when workarounds have surfaced.
So what changed in the last 40+ years are the many tools for businesses to shape the conditions of their business -the downstream market, upstream suppliers, and regulatory support/constraints. This is extremely patient work over generations of players, sometimes by individuals, but usually by coalitions of mutual corporate self-interest, where even the largest players couldn't refuse to participate.
It's evolution.
I do wonder if productivity with AI coding has really gone up, or if it just gives the illusion of that, and we take on more projects and burn ourselves out?
Here's the thing: we never had a remotely sane way to measure productivity of a software engineer for reasons that we all understand, and we don't have it now.
Even if we had it, it's not the sort of thing that management would even use: they decide how productive you are based on completely unrelated criteria, like willingness to work long hours and keeping your mouth shut when you disagree.
If you ask those types whether productivity has gone up with AI, they'll probably say something like "of course, we were able to let go a third of our programmers and nothing really seems to have changed"
"Productivity" became a poisoned word the moment that the suits realized what a useful weapon it was, and that it was impossible to challenge.
Not impossible to challenge. But most people don't have the legal funds to do so. Those that do tend to get a cushy severance bribe to stay quiet and they move on elsewhere.
That's also why it's a long process to "fire" someone but easy to "lay off" instead. Layoffs are never about productivity (so it doesn't matter anyway), and the US does almost nothing to protect against them, unlike most of the world.
ps: it's strange that YouTubers are talking about the same thing. People in different dev circles. Agentic coding feels like doomscrolling in an IDE.
It doesn’t matter how fast we can make our widgets and chatbots when what you need is to have a self sufficient workforce. We have outsourced everything material and valuable for society. Now we are left with industries of gambling, ad machines and pharmaceuticals with a government that is functionally bankrupt and politicians that have completely sold out
It definitely hasn't for me. I spent about an hour today trying to use AI to write something fairly simple and I'm still no further forward.
I don't understand what problem AI is supposed to solve in software development.
When Russians invaded Germany during WWII, some of them (who had never seen a toilet) thought that toilets were advanced potato washing machines, and were rightfully pissed when their potatoes were flushed away and didn't come back.
Sounds like you're feeling a similar frustration with your problem.
Why is AI supposed to be good?
I ended up having to type hundreds of lines of description to get thousands of lines of code that doesn't actually work, when the one I wrote myself is about two dozen lines of code and works perfectly.
It just seems such a slow and inefficient way to work.
I've also been underwhelmed with its ability to iterate, as it tends to pile on hacks. So another useful question is "did you try having it write again with what you/it learned?"
Shouldn’t have to baby step through the basics when the author is clearly not interested in learning himself
I'd rather assume good faith, because when I first started using LLMs I was incredibly confused what was going on, and all the tutorials were grating on me because the people making the tutorials were clearly overhyping it.
It was precisely the measured and detailed HN comments that I read that convinced me to finally try out Claude, so I do my best to pay it forward :)
Okay. Whip up your favorite model and report back to us with your prompts. I'm pretty anti-AI, but you're going to attract more bees with honey than smoke.
Trying to trace back the quality of the model to the "skills" of the person sounds extremely manipulative.
In my case I'd have to say "Don't know, whatever VS Code's bot uses", and "no idea what those are or why I have to care".
The reason I ask about the model is that I initially dismissed AI-generated code because I was not impressed with the models I was trying. I decided that if I was going to evaluate it fairly, though, I would need to try a paid product. I ended up using Claude Sonnet 4.5, which is much better than the quick-n-cheap models. I still don't use Claude for large stuff, but it's pretty good at one-off scripts and providing advice. Chances are VS Code is using a crappy model by default.
> no idea what those are or why I have to care
For the difference between chat mode and agent mode, chat mode is the online interface where you can ask it questions, but you have to copy the code back and forth. Agent mode is where it's running an interface layer on your computer, so the LLM can view files, run commands, save files, etc. I use Claude in agent mode via Claude Code, though I still check and approve every command it runs. It also won't change any files without your permission by default.
AGENTS.md and CLAUDE.md are pretty much files that the LLM agent reads every time it starts up. It's where you put your style guide, and also where you add corrections for things it consistently messes up on. It's not as important at the beginning, but it's helpful for me to have it be consistent about its style (well, as consistent as I can get it). Here's an example from a project I'm currently working on: https://github.com/smj-edison/zicl/blob/main/CLAUDE.md
I know there's lots of other things you can do, like create custom tools, things to run every time, subagents, plan mode, etc. I haven't ever really tried using them, because chances are a lot of them will be obsolete/not useful, and I'd rather get stuff done.
I'm still not convinced they speed up most tasks, but it's been really useful to have it track down memory leaks and silly bugs.
Okay. Get me a job and I'll pay for any model of your choosing. Until then, finances are very slim.
The problem is that I want something that listens on a TCP connection for GD92 packets, and when they arrive send appropriate handshaking to the other end and parse them into Go structs that can be stuffed into a channel to be dealt with elsewhere.
And, of course, something to encode them and send them again.
How would I do that with whatever AI you choose?
I'm pretty certain you can't solve this with AI because there is literally no published example of code to do it that it can copy from.
No idea what you're talking about, but if it has a spec then it doesn't matter whether the model was trained on it. Break the problem down into small enough chunks. Give it examples of expected input and output, and any LLM can reason about it. Use a planning mode and keep the context small and focused on each segment of the process.
You're describing a basic TCP exchange; learn more about the domain and how the packets are structured, and the problem will become easier by itself. LLMs struggle with large codebases which pollute the context, not straightforward apps like this.
Okay, so you're running into the same problem that LLMs are.
> Break the problem down into small enough chunks. Give it examples of expected input and output then any llm can reason about it.
So I have to do lots of grunt work?
> You’re describing a basic tcp exchange, learn more about the domain and how packets are structured and the problem will become easier by itself
I've written dozens of things that deal with TCP. I already have a fully-working example of what I want. The idea was to test if I could recreate it using LLMs.
How is it supposed to work? How does it put in the code I already know I want?
I can't tell if you are a troll or not, but you can't complain that nobody understands your intentionally vague and obtuse way to describe the problem at hand to pretend you're superior.
https://www.publiccontractsscotland.gov.uk/NoticeDownload/Do...
You have to rename the file ending to PDF. It's probably the wrong spec, because I'm basing this research on literally four letters that could mean anything since there is zero context given here. I've also found some German documents about chemistry.
If your argument is that LLMs and humans are stupid because they don't know what a "GD92" is, then yeah maybe it's a you problem.
Go and throw the spec into openai codex inside limactl (get it from GitHub) and use zed (the editor) and a SSH remote project to get inside the VM, don't forget to enable KVM for performance. The free tier for openai is fine, but make sure to use codex 5.2.
First ask questions on what the binary encoding is based on. It's probably X.400, then once you've asked enough questions, tell it to implement it. You probably won't have to read the spec at all yourself.
Remember, I've already written something that does this. I'm trying to understand how and why an LLM would help.
Which part of the job is the LLM supposed to do?
Consider the idea of trying to determine how quickly an unknown number of timers will go ping. It could be 10,000 timers that go ping when finished or 1,000,000 timers that go ping when finished. I don't know when they are going to go ping, just that all the timers are running at different speeds spread over some distribution.
After one time period, 5,000 pings have been detected. Should you conclude that timers are pinging fairly quickly?
You cannot tell the overall duration of the timers if you don't know how many timers are out there. Your only evidence that a timer exists is its ping, so you cannot tell whether a small population is pinging at high speed or a large population at a moderate speed. In both cases the data you receive are the fastest of the population.
In other words we haven't yet seen what the 10 year project made using these tools is like (or even if it exists/will exist), because they haven't been around for 10 years.
But anyhow, you can buy large-ish burlapped trees, but they aren't as healthy, often die, and are nothing close to a 100+ yr old estate oak tree or a decades-old rose garden. You just can't make it faster; transplanting plants that old will kill them.
Most of the trees do just fine, and these nurseries will typically provide a warranty.
And forget about 20k. If you find someone willing to sell their tree you're looking at at least 10x that for the logistics of moving a 20 ton root system.
Maybe the response is to slow down a bit—revisit what matters, and use it with some sense of proportion and coherence.
https://norvig.com/21-days.html
It's valued because it's more dense. It grew slowly. Now wood grows fast and it's less dense.
Like this article, that's fine for many things - you just need wood - but not always.
Some things truly just take time.
You may speed them up when you start, but eventually, you will likely get to pay back the time. That is not necessarily a bad thing; it is just what it is.
AI makes us move faster, but if one is not careful, they may only be moving faster in the wrong direction, and they will eventually spend time moving back.
This has been on my mind for a long time.
I used to be all about frictionlessness. Speed! Convenience! Make it smoother!
Then I found... actually, paper based systems work way better for me. Digital systems just turn into big piles of bloat. It's too easy to add stuff. So they grow until they collapse under their own weight.
(Take a look at your contacts list. How many should still be in there? How many did you add for a one time thing and then keep forever? Should there be a temporary folder? Shouldn't it be the default? That's how it works in nature!)
Ended up using paper as a temporary improvisation, then realized it solved all the problems I had with digital systems.
Friction is good.
With communication, it used to cost money to communicate. Now it's free, and we now have a sea of noise where most messages are "adding negative value", because they steal your time and energy.
Same with the app stores. Do any search and you find an ocean of slop! The gems drown in the sea of slop.
Friction is good.
That's a mantra I learned when getting into technology.
Asking questions about how things work, why it is a certain way, or why a shortcut was made often give you far better insights than anything else.
Slowing down and understanding is great. With AI this is even easier. But choose wisely, brains get full.
absolutely although i wonder how different 'trust' is in the culture of tomorrow? will it 'matter' as much, be as cherished, as earned over the fullness of time?
i suspect it is a pendulum - and we are back to oak trees at some point - but which way is the pendulum swinging right now?
And time takes money
Enough money to pay one’s bills while one tends the growing tree
And if we have a society that ensures everyone is given the dignity of time
We also get a society that reaps what some create with that time
But if we have a society that only rewards pushing money back up the hierarchy
Then we all lose our time and our nest eggs to those who have the most.
Imagine a world in which the promise of AI was that workers could keep their jobs, at the same compensation as before, but work fewer hours and days per week due to increased productivity.
What could you do with those extra hours and days? Sleep better. Exercise more. Prepare healthy meals. Spend more time with family and friends. The benefits to physical and mental well-being are priceless. Even if you happened to earn extra money for the same amount of work, your time can be infinitely more valuable than money.
Unfortunately, that's not this world. Which is why the "increased productivity" promise doesn't seem to benefit workers at all.
If you look at the technological utopias that people imagined 50, 60+ years ago, they involved lives of leisure. If you would have told them that advances in technology would not reduce our working hours at all, maybe they would have started smashing the machines back then. Now we're supposed to be happy with more "stuff", even if there's no more time to enjoy stuff.
At one point, it was this world[1]:
> Consider a typical working day in the medieval period. It stretched from dawn to dusk (sixteen hours in summer and eight in winter), but, as the Bishop Pilkington has noted, work was intermittent - called to a halt for breakfast, lunch, the customary afternoon nap, and dinner. Depending on time and place, there were also midmorning and midafternoon refreshment breaks. These rest periods were the traditional rights of laborers, which they enjoyed even during peak harvest times. During slack periods, which accounted for a large part of the year, adherence to regular working hours was not usual. According to Oxford Professor James E. Thorold Rogers[1], the medieval workday was not more than eight hours. The worker participating in the eight-hour movements of the late nineteenth century was "simply striving to recover what his ancestor worked by four or five centuries ago."
> The contrast between capitalist and precapitalist work patterns is most striking in respect to the working year. The medieval calendar was filled with holidays. Official -- that is, church -- holidays included not only long "vacations" at Christmas, Easter, and midsummer but also numerous saints' and rest days. These were spent both in sober churchgoing and in feasting, drinking and merrymaking. In addition to official celebrations, there were often weeks' worth of ales -- to mark important life events (bride ales or wake ales) as well as less momentous occasions (scot ale, lamb ale, and hock ale). All told, holiday leisure time in medieval England took up probably about one-third of the year. And the English were apparently working harder than their neighbors. The ancien régime in France is reported to have guaranteed fifty-two Sundays, ninety rest days, and thirty-eight holidays. In Spain, travelers noted that holidays totaled five months per year.[5]
> The peasant's free time extended beyond officially sanctioned holidays. There is considerable evidence of what economists call the backward-bending supply curve of labor -- the idea that when wages rise, workers supply less labor. During one period of unusually high wages (the late fourteenth century), many laborers refused to work "by the year or the half year or by any of the usual terms but only by the day." And they worked only as many days as were necessary to earn their customary income -- which in this case amounted to about 120 days a year, for a probable total of only 1,440 hours annually (this estimate assumes a 12-hour day because the days worked were probably during spring, summer and fall). A thirteenth-century estimate finds that whole peasant families did not put in more than 150 days per year on their land. Manorial records from fourteenth-century England indicate an extremely short working year -- 175 days -- for servile laborers. Later evidence for farmer-miners, a group with control over their worktime, indicates they worked only 180 days a year.
[1] https://groups.csail.mit.edu/mac/users/rauch/worktime/hours_...
Mass production of engineered structural lumber.[1]
[1] https://www.youtube.com/watch?v=RCYn3xQ0yS8
Anyway 2 areas I slightly disagree on.
Open source abandonware is fine. Sometimes people give up because they realize it is not a good idea. Or they get busy or sick.
And 10 years at a startup is great but that relies on it being a good startup. Entropy at companies means I have never made it to 10yrs even though I wanted to.
Things take the time they take.
Don't worry.
How many roads did St. Augustine follow
before he became St. Augustine?
Refactoring decent-sized components is an order of magnitude easier than it was, but the more important signal is still: why are you refactoring? What changed in your world or your world-view that caused this?
Good things still take time, and you can't slop-AI code your way to a great system. You still need domain expertise (as the EXCELLENT short story from the other day explained, Warranty Void if Regenerated (https://nearzero.software/p/warranty-void-if-regenerated) ). The decrease in friction does definitely allow for more slop, but it also allows for more excellence. It just doesn't guarantee excellence.
Oh, I thought it was because they're a way to show off about being rich.
> We require age minimums for driving, voting, and drinking because we believe maturity only comes through lived experience.
Even if she could reach the pedals, my 4yo doesn't have the attention span to drive. This isn't a "lived experience" thing, it's a physical brain development thing. IIRC there are effects with learning math, where starting earlier had limited impact on being able to move to certain more advanced topics earlier; i.e. there's more going on than just hours of experience.
The standard age for voting is also the age for being a legal adult. There are sound logical reasons that these ages should match.
The standard drinking age is due to pressure by activists, and AIUI is lower in other countries.
Maybe for some. I think these examples were carefully chosen. Hermès is made in France, "Swiss watch" doesn't automatically mean Rolex, though in that case Rolex does own most of their manufacturing (though there is a whole world of carefully made watches out there that don't cost 10K). As for old properties... there is a huge range there, but unless you are living in a castle, most people, at least in my city, are likely silently thinking: "I'm so sorry for them that they have to live in that old house."
You can't trust us with self-care. There's just too many shiny toys out there!
What AI allows us to do is those things we would not have been able to prioritize before: to "write" those extra tests, add that minor feature, or solve that decade-old bug. Things that we would never have been able to prioritize, we are now able to do. It's not perfect, it's sometimes sloppy, but at least it's getting shit done. It doesn't matter if you solve 10% of your problem perfectly if you never have time for the remaining 90%.
I do miss the coding, _a lot_, but productivity is a drug and I will take it.
This is a bad start. Louis XIV at Versailles and Marly famously made whole forests appear or disappear overnight, to the utter dismay of Saint-Simon, the memoirist, who thought this was an unacceptable waste of money and energy.
And this was before the industrial revolution. Today I'm sure many more miracles happen every day.
Vibe slop-ing at supersonic speeds and waiting years to grow aren't the only options, there's something in between where you have enough signal to keep going and enough speed to not waste years on the wrong thing.
I feel that today's VCs have completely disregarded the middle and are focused on getting as big as possible as fast as possible without regard to the effect it's having on the ecosystem.
no we don't want to miss genuine ways to speed things up to improve our productivity so we can do other or more things
They have spent the last decade building processes and guardrails for getting consistent average performance from people. But now, some talented people who worked at those companies are building their own new companies without the overhead and moving much, much more quickly.
I think what we assume is "vibe slop at inference speed" is not as simple as people make it out to be. From one perspective, I think it might generally be people trying to save jobs.
I'm seeing more slop come out of larger, older companies than the new ones (with experienced operators).
And the speed is somewhat scary. For a smaller team it doesn't take as much effort to build a deep, beautiful product anymore.
The bottleneck was never the ability of an engineer to code. It was the 16 layers between the customer and the programmer, which have vanished in smaller companies and are forcing larger ones to produce slop.
I'm reading Against The Machine by Paul Kingsnorth, and reading this blog piece now, it's hard not to make connections with the points of the book: the usage of the tree as a counter-argument to the machine's automation credo in the blog post very much aligns with what I've read so far.
Undoubtedly a lot of that comes down to production cost and safety. A plane is far more likely to kill people, and it costs a shitload more to produce than an app (though plenty of software is mission critical). But now in software we can move quickly enough up front that if we don't start applying some discipline, it's going to bite us in the ass in the long run.
But no one wants to go out of their house.
Social connections. Trust. Facetime. All matter more than ever.
Want a moatable software business? Know your customers on a personal level. Have a personal relationship. Know the people that sign the contracts, know their kids names, where they vacationed last winter, their favorite local restaurant.
Get out of the house.
Not true; we do this because 99% of the time it's true, but there are people who would be perfectly competent and responsible drivers before reaching the age of 16-18. Same with voting: there are humans who have a deep understanding of and intelligence about politics at a younger age than suffrage. Equally, there are people who will be reckless drivers at 40 and vote on a whim at 60.
We have these rules not because sophistication only comes through lived experience; we have them because it's strongly correlated and covers most error cases.
To take this to AI: run the model enough times with a high enough temperature, and perhaps it can solve your challenges with high enough quality - just a thought.
The reason we need to wait is that it takes time for some things to mature.