So, to dodge this, I created a detailed prompt asking Claude Sonnet 4 to do it for me. I gave it very clear instructions (a ~2 minute writeup) plus the plaintext terraform plan. In about 1-2 minutes it generated all the terraform state mv commands I needed, which I could pipe through xargs and boom, done! High-value LLM query: relatively few input tokens, few output tokens, and orders of magnitude of time saved (which, as we all know, is money).
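For anyone wanting to copy the trick, here's a minimal sketch of the xargs step. It assumes the model's output is saved as old/new address pairs; the file name and resource addresses below are invented for illustration, the real ones came from the model and my plan.

    # moves.txt holds one "OLD NEW" address pair per line, e.g.:
    #   aws_instance.web   module.compute.aws_instance.web
    #   aws_s3_bucket.assets   module.storage.aws_s3_bucket.assets
    # Run `terraform state mv OLD NEW` once per pair:
    xargs -n 2 terraform state mv < moves.txt

If the model hands back full terraform state mv commands instead of bare address pairs, piping the file through sh works just as well. Either way, review the generated moves before running them.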
Please share your high-value LLM queries!
I also upload some common templates for things like my weekly reviews and tell it to use a template if applicable.
I'm sure if I drew diagrams and told it it could use Mermaid, it'd do a good job too. I'd like to try when I get the chance.
It saves _so_ much time getting handwritten notes into text. Writing things out helps me plan, but I much prefer to have the content digital for syncing, backups, and searching.
This is all in a Claude Project for reuse, but I've found most LLMs do a solid job, even the cheap ones like Gemini 2.5 Flash (or whatever the low-cost current Gemini model is).
it one-shotted 600 lines of code which did the job perfectly. it understood from context the center of the body, how to calculate the body normal, and how to rotate each point around that, all while handling edge cases to avoid errors. would've taken me hours if not days to tweak it manually to get it working.
I merely selected the entire json file and gave a two-word prompt: "generate schema"
it one-shotted a 600-line json schema, plus a 200-line typescript schema, a 150-line python dataclass model and a README!!! completely unsolicited!
(Cursor agent mode)
The last time I did that from an OpenAPI file rather than transforming it (I was lazy), it hallucinated a bunch of properties and left a load off too.
That was a year or so ago.
Any time I’m trying to think through something, want an “opinion” about design choices, or wonder whether I’m missing something, I type those two words in so it will be critical.
My next favorite prompt: “I’m having an issue with $X and having a hard time tracking it down. Help me work backwards. Don’t assume anything. Ask me clarifying questions as needed.” It’s great for rubber ducking.
For AWS troubleshooting, I ask it to give me AWS CLI commands to help it help me debug, and to always append “ | pbcopy” so I can just paste the output back.
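To make that concrete, here's the shape of it; the cluster and service names are made up, and the exact command depends on what's being debugged:

    # e.g. the model suggests checking a service's deployment status:
    aws ecs describe-services --cluster prod --services api | pbcopy
    # the JSON output lands on the clipboard (macOS), ready to paste straight back into the chat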
"In 4 sentences, how would you do x".
"In 2 paragraphs summerise the pros and cons of y".
Not really specific coding tasks, but I ask these types of questions a lot, because often I'm not trying to become an expert or deeply understand something, just to get a feel for the consensus view.
LLMs tend to be verbose by default.
In terms of coding I often ask, "Don't make changes, but how would you improve this piece of code?" or "Don't make changes, but what's wrong with this test?"
I find that Cursor, at least, loves to make changes when I didn't really want it to; I was just asking for some thoughts / opinions.