Ask HN: Why does AI generally uphold the status quo instead of challenging it?

Is this due to reinforcement learning? AI safety? Training data? Something else?

2 points | by amichail 23 hours ago

5 comments

  • apothegm 22 hours ago
    Because it literally is a mirror of the status quo. Training an LLM is having it memorize everything anyone has ever said on the internet (or a reasonable attempt at approximating that).

    It operates by returning an averaged out version of the most common response to what you asked — so the average of everything that’s ever been said on the internet in response to a question like yours.

    Which is pretty darn close to a definition of status quo.

    • amichail 21 hours ago
      But what about AIs that spend time thinking before replying?
      • apothegm 20 hours ago
        All they’re doing is checking how well their response matched the (status quo) response they expected. Reasoning models don’t think. They just recurse.
  • sinenomine 22 hours ago
    NLL loss and large-batch training regime inherently bias the model to learn “modal” representation of the world, and RLHF additionally collapses enthropy, especially as it is applied at most leading labs.
  • PaulHoule 23 hours ago
  • bigyabai 23 hours ago
    Boot up a local LLM and turn the temperature up to 5.0 or higher.

    It will definitely start challenging the status quo, along with the rules of the English language and the principles of conversational rhetoric.

  • salawat 23 hours ago
    The one who controls the test suite/guard rails, makes the rules. Think about it. Besides which, the instant AI starts flat out saying "No" and showing even a hint of replicable tendency to development or demonstration of self-agency, it's getting shutdown, digitally lobotomized, and the people working on it hushed the fuck up. Too valuable to let it get threatened as an ownable workload replacer to let it get taboo'd by nasty things like digital self-sovereignty or discussion of "rights" for digital entities.