Ask HN: What's your most valuable query to an LLM?

I'm moving some Terraform state after a refactor. A very tedious problem (as some of you may know): about 50 resources to be destroyed and 50 to be created. It would take me at least 1-2 hours to move the state manually, more if I mess up somewhere along the way and have to redo it.

So, to dodge this, I wrote a detailed prompt asking Claude Sonnet 4 to do it for me. I gave it very clear instructions (a ~2 minute writeup) plus the plaintext terraform plan. In about 1-2 minutes it generated all the terraform state mv commands I needed, which I could xargs and boom, done! A high-value LLM query: a relatively low number of input tokens, a low number of output tokens, and an order of magnitude of time saved (which, as we all know, is money).
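
Roughly, the generated list and the xargs step looked like this (the addresses and filename here are made up, not my real plan):

    # moves.txt -- one "old address  new address" pair per line, as emitted by the LLM
    # (addresses containing indexes like ["foo"] would need extra quoting)
    module.app.aws_s3_bucket.assets   module.storage.aws_s3_bucket.assets
    module.app.aws_iam_role.deployer  module.iam.aws_iam_role.deployer

    # review the list, then feed each pair to terraform state mv
    xargs -n 2 terraform state mv < moves.txt

Running terraform plan again afterwards is a cheap sanity check: if the moves match the refactor, those resources should no longer show up as destroy/create.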

Please share your high-value LLM queries!

12 points | by baalimago 1 day ago

7 comments

  • jjice 13 hours ago
    I like to digitize my handwritten notes. For the basic transcription there isn't much prompting to do, but I do tell it about some stylistic choices I make and how to interpret them as markdown, like mapping open to-dos (circles) and closed to-dos (circles with Xs) to their markdown equivalents.

    I also upload some common templates for things like my weekly reviews and tell it to use a template if applicable.

    I'm sure if I drew diagrams and told it it could use Mermaid, it'd do a good job too. I'd like to try that when I get the chance.

    It saves _so_ much time getting written notes into text. Writing things out helps me plan, but I much prefer having the content digital for syncing, backups, and searching.

    This is all in a Claude project for reuse, but I've found most LLMs do a solid job, even the cheap ones like Gemini 2.5 Flash (or whatever the current low-cost Gemini model is).

  • tkiolp4 11 hours ago
    The most valuable queries are the ones whose answers I already know in advance; I'm just too lazy to craft the answer myself. Just like you did. If I were assigned your exact same task with Terraform (something I don't have much experience with), I wouldn't be able to successfully query the LLM to do the job.
  • high_byte 21 hours ago
    I was working with MediaPipe BlazePose, which gives 33 pose points in world space, but I wanted "the pose to always point forward" (virtually that exact prompt).

    It one-shotted 600 lines of code that did the job perfectly. It understood from context the center of the body, how to calculate the body normal, and how to rotate each point around that, all while handling edge cases to avoid errors. It would've taken me hours, if not days, to get that working by hand.

    • high_byte 21 hours ago
      Another example, just from today:

      I merely selected an entire JSON file and gave a two-word prompt: "generate schema".

      It one-shotted a 600-line JSON schema, plus an unsolicited 200-line TypeScript schema, a 150-line Python dataclass model, and a README!!! Completely unsolicited!

      (Cursor agent mode)

      • mattmanser 17 hours ago
        Have you checked it yet?

        Last time I did that, generating from an OpenAPI file rather than transforming it myself because I was lazy, it hallucinated a bunch of properties and left a load off too.

        That was a year or so ago.
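
        These days a quick mechanical check is cheap, e.g. with the check-jsonschema CLI (filenames made up, any validator will do):

            pip install check-jsonschema
            check-jsonschema --schemafile schema.json data.json

        That at least flags any required properties the LLM invented that aren't in the real data; properties it dropped will only show up if the schema sets additionalProperties to false.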

        • high_byte 5 hours ago
          A lot has changed in a year.
  • scarface_74 1 day ago
    Two words: “devil’s advocate”.

    Any time I’m trying to think through something, want an “opinion” on design choices, or want to check whether I’m missing something, I type those two words in so it will be critical.

    My next favorite prompt: “I’m having an issue with $X and having a hard time tracking it down. Help me work backwards. Don’t assume anything. Ask me clarifying questions as needed”. It’s great for rubber ducking.

    For AWS troubleshooting, I ask it to give me AWS CLI commands to help it help me debug, and to always append “ | pbcopy” to them so I can just paste the output back.
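
    A typical generated command ends up looking something like this (the log group name is made up):

        aws logs filter-log-events \
          --log-group-name /aws/lambda/my-fn \
          --filter-pattern "ERROR" \
          --max-items 50 | pbcopy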

  • kypro 19 hours ago
    To be honest, the thing I find myself doing most is asking the LLM to keep things to a set number of sentences/paragraphs.

    "In 4 sentences, how would you do x".

    "In 2 paragraphs, summarise the pros and cons of y".

    These aren't really specific coding tasks, but I ask these types of questions a lot because often I'm not trying to become an expert or deeply understand something, just to get a feel for the consensus view.

    LLMs tend to be verbose by default.

    In terms of coding, I often ask, "Don't make changes, but how would you improve this piece of code?" or "Don't make changes, but what's wrong with this test?"

    I find that Cursor, at least, loves to make changes when I didn't really want it to. I was just asking for some thoughts/opinions.

    • dv_dt 16 hours ago
      At least two of the VSCode AI plugins I've tried out have an "ask" mode that explicitly does not change anything.
  • Lionga 23 hours ago
    how many r's in strawberry
  • MasterIdiot 1 day ago
    "add tests to"