I've made a few attempts at manually doing this with MCP and took a brief look at "claude swarm" https://github.com/parruda/claude-swarm - but in the short time I spent on it I wasn't having much success. Admittedly I probably went a little too far into the "build an entire org chart of agents" territory
the main problem I have is that the agents just aren't used
For example, I set up a code reviewer agent today and then asked claude to review code, and it went off and did it by itself without using the agent
in one of anthropic's own examples they are specifically telling claude which agents to use which is exactly what I don't want to have to do:
> First use the code-analyzer sub agent to find performance issues, then use the optimizer sub agent to fix them
My working theory is that while Claude has been extensively trained on tool use and is often eager to use whatever tools are available, agents are just different enough that they don't quite fit - maybe asking another agent to do something "feels" very close to asking the user to do something, which is counter to their training
but maybe I just haven't spent enough time trying it out and tweaking the descriptions
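For what it's worth, the lever that seems to matter here is the `description` field in the agent file - Claude decides whether to delegate based on that text, so wording it aggressively can help. A sketch of a code-reviewer agent file (the file name and wording are just illustrative, not a guaranteed fix):

```markdown
---
name: code-reviewer
description: Expert code review specialist. Use PROACTIVELY whenever the user
  asks for a review, critique, or feedback on code, or immediately after
  writing or modifying code. MUST BE USED for all review tasks.
tools: Read, Grep, Glob
---

You are a senior code reviewer. For each file you are given, check
correctness, security, and readability, and report findings grouped by
severity.
```

This would typically live at `.claude/agents/code-reviewer.md` in the project; whether "Use PROACTIVELY" / "MUST BE USED" actually moves the needle is exactly what the parent comment is questioning.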
Roo Code does this really well with its orchestration mode; there's probably a way to get a CLAUDE.md to do this as well. The only issue with Roo is that it's "single-threaded", but you do get the specific loaded context and rules for a specific task, which is really nice.
The same problem exists with MCP, as well as with CLAUDE.md: most of the time they aren't used when it would be appropriate. What's the point of these agents and standards if you can't get your model to use them reliably?
People speculate somewhat seriously that Claude (especially given its French name) picked up at some point that you aren't supposed to work as hard in July and August.
To be frank, psychiatrists, being MDs, would likely prescribe medication, and I'm not sure how that would help. As a licensed psychologist I have ideas on how to debug AI, though.
I don’t know about stupider, but definitely less reliable/available
A couple days ago I was getting so many api errors/timeouts I decided to upgrade from the $20 to the $100 plan (as I was also regularly hitting rate limits as well)
It seemed to fix the issue immediately. But today, the errors came back for about half an hour
Insert something to the tune of: "never read files in slices. Instead, whenever accessing a file, you must read the file in its entirety [..]" at the beginning of every conversation, or whenever you're down to burn more credits for better results.
A great deal of Claude's stupidity is due to context engineering, specifically the fact that it tries its hardest to pick out just the slice of code it needs to fulfill the task.
A lot of the annoying "you're absolutely right!" moments come from CC incrementally discovering that you have more than 10 lines of code in that file that pertain to your task.
I don't believe conspiracies about dumbed-down models. It's all context pruning.
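If you buy that explanation, the counter-instruction quoted a couple of comments up can live in a project CLAUDE.md so it's injected into every conversation automatically instead of pasted by hand. A sketch (wording is illustrative):

```markdown
## File reading

- Never read files in slices or partial ranges.
- Whenever you access a file, read it in its entirety before editing it.
- Prefer re-reading a whole file over guessing at code outside the slice
  you last saw.
```

The trade-off is exactly the one the comment names: whole-file reads burn more tokens per turn in exchange for fewer "discovering your codebase one slice at a time" surprises.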
One nice realization I had when using a similar feature in roo:
You don't need a full agent library to write LLM workflows.
Rather: A general purpose agent with a custom addition to the system prompt can be instructed to call other such agents.
(Of course, explicitly managing everything may be the better choice depending on your business case. But I think it would almost always be cheaper to at least build a prototype using this method.)
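As a concrete sketch of that idea: a short section appended to the system prompt (or CLAUDE.md) that tells one general-purpose agent when to hand off, rather than wiring up a full agent framework. The agent names here are hypothetical:

```markdown
## Delegation rules

You have access to sub-agents. Before doing a task yourself, check whether
a sub-agent's description matches it; if one does, delegate to it instead
of doing the work inline.

- code-reviewer: any request to review, critique, or audit code
- optimizer: any request to improve performance of existing code

If no sub-agent matches, do the task yourself.
```

This is the whole prototype: one extra prompt section, no orchestration code, which is why it tends to be the cheap thing to try first.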
I wonder if this is also a good way to create experts for specific tasks/features of a codebase.
For example, a sub-agent for adding a new stat to an RPG. It could know how to integrate with various systems like items, character stats component, metrics, and so on without having to do as much research into the codebase patterns.
The model feels like it has gotten stupid when you hit a cold streak after a hot hand.
(Just to be clear, I have no idea what on this thread to take seriously, or who's being serious. I'm joking, at least.)
Pretty rare to get a 529 outside of that time window in my personal experience, at least during the USA day.
Hopefully they work out whatever issue is going on.
https://status.anthropic.com/
¹ https://github.com/ruvnet/claude-flow
> [...]
> # 2. Activate Claude Code with permissions
> claude --dangerously-skip-permissions
Bypassing all permissions and connecting to MCPs... can't wait for the "Claude Flow deleted all my files and leaked my CI credentials" blog post
I use the .devcontainer¹ from the claude-code repository. It works great with VS Code and lets you work in your Docker container without any issues. And as long as you use some sort of version control (git) you can't really lose anything.
¹ https://github.com/anthropics/claude-code/tree/main/.devcont...
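If you'd rather adapt it than copy it wholesale, the core of a devcontainer.json is small. A minimal sketch (the real one in the claude-code repo is more elaborate, and the extension ID here is illustrative):

```json
{
  "name": "claude-code-sandbox",
  "build": { "dockerfile": "Dockerfile" },
  "remoteUser": "node",
  "customizations": {
    "vscode": {
      "extensions": ["anthropic.claude-code"]
    }
  },
  "postCreateCommand": "npm install -g @anthropic-ai/claude-code"
}
```

The point of running inside the container is that even `--dangerously-skip-permissions` can only touch the mounted workspace, which is what makes the "can't really lose anything" claim hold (assuming your credentials aren't mounted in too).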
I’ve set it up bespoke but the auth flow gets broken.
Bro…