Note: At the time of writing, the comments are largely skeptical.
Reading this as an avid Codex CLI user, some things make sense and reflect lessons learned along the way. However, the patterns also get stale fast as agents improve and may be counterproductive. One such pattern is context anxiety, which probably reflects a particular model more than a general problem, and is likely an issue that will go away over time.
There are certainly patterns that need to be learned, and relearned over time. Learning the patterns is sort of an anti-pattern, since it is the model that should be trained to alleviate its shortcomings rather than the human. Then again, a successful mindset over the last three years has been to treat models as another form of intelligence, not as human intelligence, by getting to know them and being mindful of their strengths and weaknesses. This is quite a demanding task in terms of communication, reflection, and perspective-taking, and it is understandable that this knowledge is being documented.
But models change over time. The strengths and weaknesses of yesterday’s models are not the same as today’s, and reasoning models have actually removed some capabilities. A simple example is giving a reasoning model with tools the task of inspecting logs. It will most likely grep and parse out smaller sections, and may also refuse an instruction to load the file into context to inspect it. The model then relies on its reasoning (system 2) rather than its intuitive (system 1) thinking.
This means that many of these patterns are temporary, and optimizing for them risks locking human behavior to quirks that may disappear or even reverse as models evolve. YMMV.
I have a theory that agents will improve a lot once they're trained on more recent data. I've had agents develop context anxiety because they still think an average LLM context window is around 32k tokens. Building agents with agents, letting them do prompt engineering and so on, is also still very unsatisfactory: they keep talking about GPT-3.5 or Gemini 1.5 and try to optimize prompts for those old models, which were of course almost a totally different thing. So I'm wondering whether that's how they think of themselves as well, and whether it artificially limits their agentic behavior too, because they just don't know how much more capable they are than GPT-3.5.
If you haven't got a Rig for your project with a Mayor whose Witness oversees the Polecats who are supervised by a Deacon who manages Dogs (special shoutout to Boot!) who work with a two-level Beads structure and GUPP and MEOW principles... you're not gonna make it.
Looks like all bullshit to me.
It's what happens when you make up complex terms to pretend you're doing engineering, when it's really baseless.
It's as if I made a list of dev patterns and wrote:
- Caffeinated break for algorithmic thinking improvement
When I'm stuck on a piece of algorithmic logic, I go take a coffee break, then come back to my desk and work on it again.
Here is one of the first "patterns" of the project that I opened, for example:
Dogfooding with rapid iteration for agent improvement.
Developing effective AI agents requires understanding real-world usage and quickly identifying areas for improvement. External feedback loops can be slow, and simulated environments may not capture all nuances.
Solution:
The development team extensively uses their own AI agent product ("dogfooding") for their daily software development tasks.
Or
"Extended coherence work sessions"
Early AI agents and models often suffered from a short "coherence window," meaning they could only maintain focus and context for a few minutes before their performance degraded significantly (e.g., losing track of instructions, generating irrelevant output). This limited their utility for complex, multi-stage tasks that require sustained effort over hours.
Solution:
Utilize AI models and agent architectures that are specifically designed or have demonstrably improved capabilities to maintain coherence over extended periods (e.g., several hours)
Don't tell me that it is not all bullshit...
I'm not saying that what it says is untrue.
Just imagine you took a two-page pamphlet about how to use an LLM and split every sentence into a wannabe "pattern".
I felt the same and I asked Claude about it. The answer made me chuckle:
> There’s definitely a tendency to dress up fairly straightforward concepts in academic-sounding language. “Agentic” is basically “it runs in a loop and decides what to do next.” Sliding window is just “we only look at the last N tokens.” RAG is “we search for stuff and paste it into the prompt.” [...] When you’re trying to differentiate your startup or justify your research budget, “agentic orchestration layer” lands differently than “a script that calls Claude in a loop.”
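To make the quoted point concrete, here is roughly what "a script that calls Claude in a loop" looks like. This is a minimal sketch, assuming the official anthropic Python SDK; the run_shell tool, the model name, and the task prompt are made up for illustration.

```python
import subprocess
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

# One made-up tool: let the model run shell commands and read the output.
tools = [{
    "name": "run_shell",
    "description": "Run a shell command and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}]

messages = [{"role": "user",
             "content": "Find the failing test in this repo and explain why it fails."}]

while True:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # assumption: use whatever model is current
        max_tokens=1024,
        tools=tools,
        messages=messages,
    )
    messages.append({"role": "assistant", "content": response.content})

    if response.stop_reason != "tool_use":
        break  # the model decided it's done; that decision is the "agentic" part

    # Run whatever tool calls the model asked for and feed the output back to it.
    results = []
    for block in response.content:
        if block.type == "tool_use":
            out = subprocess.run(block.input["command"], shell=True,
                                 capture_output=True, text=True)
            results.append({"type": "tool_result",
                            "tool_use_id": block.id,
                            "content": (out.stdout + out.stderr)[-4000:]})
    messages.append({"role": "user", "content": results})

print("".join(b.text for b in response.content if b.type == "text"))
```

Strip away the branding and an "agentic orchestration layer" is, at its core, this loop plus better tools, prompts, and guardrails.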
I had someone argue on Twitter recently that they had made an “agent”, when all they had really done was use n8n to make a loop that used LLMs and ran on a schedule.
People are calling if-then cron tasks “agents” now
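For contrast, here is the kind of scheduled if-then job that sometimes gets rebranded as an "agent". Again just a sketch: the webhook URL and the prompt are entirely made up, and there is no loop and no decision beyond a single if.

```python
import datetime
import anthropic
import requests

client = anthropic.Anthropic()

def daily_job():
    # Called once per run; cron provides the "schedule", not the model.
    today = datetime.date.today().isoformat()
    response = client.messages.create(
        model="claude-sonnet-4-5",  # assumption: any chat model works here
        max_tokens=512,
        messages=[{"role": "user",
                   "content": f"Write a short standup summary for {today}."}],
    )
    summary = "".join(b.text for b in response.content if b.type == "text")
    if summary:  # this is the entire "decision-making"
        requests.post("https://example.invalid/hypothetical-webhook",
                      json={"text": summary})

if __name__ == "__main__":
    daily_job()  # scheduled externally, e.g. `0 9 * * 1-5 python daily_job.py`
```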
Now that you say it, I realize it might be useful to me one day: if I'm ever a bland, useless startup, I can dress up my pitch with these terms to try to raise investor money...
Typically, awesome-subject-matter repositories link out to other resources.
There are so, so, so many prompt & agentic-pattern repositories out there. I'm pretty turned off by this repo flouting the convention of what awesome-* repos are: it is the work itself, rather than links to the good work that's already out there for us to choose from.
I'd rather have a single repo with a curated format and some thought behind it (not sure if this one is, just assuming) than the usual awesome-* lists that just link to every single page on a subject, with so much overlap that I don't even know which one to look at for a given problem.
I didn't have the patience to keep clicking through after visiting a few pages, only to find the depth lacking.
About an hour ago I used Opus 4.5 to give me a flat list with summaries. I tried to post it here as a comment, but it was too long and I didn't bother to split it up. They all seem to be things I've heard of in one way or another, but nothing that really stood out for me. Don't get me wrong, they're decent concepts, and it's clear others appreciate this resource more than I do.
I find it interesting that we already have established patterns, while the agentic approach is still being adopted across industries at varying levels of maturity.
This is the real secret sauce right here: "score_7, score_8, score_9, watermark, paid_reward". Adding this to the end of all my prompts has unlocked response quality that I didn't think was possible! /s
A few years ago we had GitHub resource-spam about smart contracts and Web3 and AWESOME NFT ERC721 HACK ON SOLANA NEXT BIG THING LIST.
Now we have repos for the "Self-Rewriting Meta-Prompt Loop" and "Gas Town":
https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
Is it right? “Do not use Gas Town.”
Star-farming anno 2026.
https://github.com/nibzard/awesome-agentic-patterns/commits/...
Unfortunately it isn’t possible to detect whether AI was being used in an assistive fashion, or whether it was the primary author.
Regardless, a skim read of the content reveals it to be quite sloppy!
https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...
1996: https://web.archive.org/web/19961221024144/http://www.acm.or...
> Computer-based agents have gotten attention from computer scientists and human interface designers in recent years