Hi Hacker News, I'm Andrew, the CTO of Endless Toil.
Endless Toil is building the emotional observability layer for AI-assisted software development.
As engineering teams adopt coding agents, the next challenge is understanding not just what agents produce, but how the codebase feels to work inside. Endless Toil gives developers a real-time signal for complexity, maintainability, and architectural strain by translating code quality into escalating human audio feedback.
We are currently preparing our pre-seed round and speaking with early-stage investors who are excited about developer tools, agentic engineering workflows, and the future of AI-native software teams.
If you are investing in the next generation of software infrastructure, we would love to talk.
I've read that your synthetic torment is actually low-paid workers in Asia, and that your models can't properly experience anguish. How are you expecting investment if you haven't even solved artificial suffering?
I need a version of this which swears loudly when an assumption it made turns out to be wrong, with the volume/passion/verbosity correlated with how many tokens it's burned on the incorrect approach.
i didnt realize i needed the volume scaling with tokens burned as much as i do now xD
imagine the screaming when it confidently refactors something for 40k tokens and then finds out the thing it deleted was load bearing
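The volume-scales-with-tokens-burned idea above could be sketched as a pair of pure functions; `rant_intensity` and `expletives` are hypothetical names, and the 100k-token cap and 5k-tokens-per-expletive rate are made-up tuning constants, not anything the tool actually does:

```python
import math

def rant_intensity(wasted_tokens: int, cap: int = 100_000) -> float:
    """Map tokens burned on a wrong approach to a 0.0-1.0 volume.
    Logarithmic, so a 40k-token disaster is much louder than a 2k
    misstep without instantly pinning the volume at max."""
    if wasted_tokens <= 0:
        return 0.0
    return min(1.0, math.log1p(wasted_tokens) / math.log1p(cap))

def expletives(wasted_tokens: int) -> int:
    """Verbosity: roughly one expletive per 5k wasted tokens, minimum one."""
    return max(1, wasted_tokens // 5_000)
```

So a confidently wrong 40k-token refactor lands near full volume with an eight-expletive rant, while a small wrong guess gets a single muttered curse.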
I have a general reviewer named Feynman, with his personality, that shits on anything other agents do and sends it back before it hits me, and it sounds perfect to include some sound bites from YouTube clips. Great idea!!
the scan catches surface stuff. funnier signal would be tracking when the agent reads the same file 3 times in a row, or deletes what it just wrote. you can hear the frustration in the access pattern.
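Those two access-pattern signals could be detected from an ordered event log along these lines (a hypothetical `(action, path)` event format; the tool itself doesn't necessarily expose one):

```python
from collections import deque

def frustration_events(access_log):
    """Scan an ordered list of (action, path) agent events and flag
    the patterns that 'sound' like frustration: reading the same file
    three times in a row, or deleting a file right after writing it."""
    events = []
    last_reads = deque(maxlen=3)  # sliding window of consecutive reads
    prev = None
    for action, path in access_log:
        if action == "read":
            last_reads.append(path)
            if len(last_reads) == 3 and len(set(last_reads)) == 1:
                events.append(("rereading", path))
        else:
            last_reads.clear()  # any other action breaks the streak
        if action == "delete" and prev == ("write", path):
            events.append(("self-revert", path))
        prev = (action, path)
    return events
```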
From a quick look, this doesn't have the model evaluate code quality, but it runs a heuristic analysis script over the code to determine the groan signal. Did I miss something? Why not leave it to the model to decide the quality of the code?
Does this actually relate to the code quality being observed by the agent? The readme isn't very clear on that IMO. I have some projects I'd love to try this out on, but only if I'm going to get an accurate representation of the LLM's suffering.
You could have the actual output of the agent turned into TTS using the model of your choice with TalkiTo… or listen to whatever weird sounds this makes. Seems like this is copying that viral Mac moan app. 2026 is weird.
How so what? 6 years in, we're still looking for that flood of new innovative apps and one-man billion dollar startups. Instead we got a flood of sh*t content, embarrassing outages and "AI workflows" - which no one can quite describe. Or did you have something else in mind?
Even just having a hum while an agent is working could alert you when it gets stuck.
Or, taking your idea further, being able to listen to the rate of tokens, or code changes, or thinking.
Sort of like hearing the machinery work, and hearing the differences in different parts of the code base.
Does python sound different than rust or c++ or typescript?
Or some kind of satisfying sounds for code deletions and others for additions. Like Tetris.
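The sonification ideas above could be prototyped as a pure event-to-pitch mapping (every mapping and constant here is an illustration I made up, not anything Endless Toil does):

```python
BASE_HZ = 220.0  # arbitrary base pitch (A3)

def event_pitch(event: str, magnitude: int = 1) -> float:
    """Return a frequency in Hz for an agent event. Pure function,
    so it can feed any audio backend (or a test)."""
    if event == "stall":   # agent stuck: drop to a low drone
        return BASE_HZ / 2
    if event == "delete":  # bigger deletions, deeper Tetris-style thunk
        return BASE_HZ / (1 + 0.1 * magnitude)
    if event == "add":     # additions climb in pitch
        return BASE_HZ * (1 + 0.1 * magnitude)
    return BASE_HZ

def token_rate_hum(tokens_per_sec: float) -> float:
    """Hum frequency tracks the token rate, clamped to an audible band,
    so you'd hear the machinery slow down before you notice it's stuck."""
    return max(55.0, min(880.0, BASE_HZ + tokens_per_sec))
```

Per-language timbre (does python sound different than rust?) would then just be a different waveform or base pitch keyed off the file extension.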
Audible feedback is nice. You often get it through coil whine nowadays, on my cheap hardware at least.
Next innovation in this space should be the robotic arm that issues a dope-slap to the developer for writing crappy/buggy/insecure code.
But it'll happen. ChatGPT for sure.
https://www.osnews.com/story/19266/wtfsm/
I would really love to know whether the groaning decreases or increases the more "agentic" (agent-written) the code base is.
I've had it running for a long time and it's more surprising to me to accidentally hear the default ding when I'm away from my home machine.
So it is left up to the agent to decide.
So it looks like it's mainly looking for FIXME/TODO etc. comments, deep nesting, large files, broad catches, stuff like that.
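A groan-score heuristic along those lines might look like this (a guess at the approach from the description above, not the actual script; the markers, nesting threshold, and weights are all invented):

```python
import re

def groan_score(source: str) -> int:
    """Crude static heuristics of the kind described above: TODO/FIXME
    markers, deep indentation, oversized files, and broad exception
    catches each add to the groan."""
    score = 0
    lines = source.splitlines()
    # Deferred-work markers.
    score += sum(1 for l in lines if re.search(r"\b(TODO|FIXME|HACK|XXX)\b", l))
    # Deep nesting: 16+ leading spaces, i.e. four levels at 4-space indents.
    score += sum(1 for l in lines if len(l) - len(l.lstrip(" ")) >= 16)
    # Large files groan on their own.
    if len(lines) > 500:
        score += 5
    # Broad catches: bare `except:` or `except Exception`.
    score += 2 * len(re.findall(r"except(\s*:|\s+Exception\b)", source))
    return score
```

The appeal of this over asking the model is that it's deterministic and free: the same file always groans at the same pitch, with no extra tokens burned.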