Tons of new LLM bot accounts here

There are lots of freshly made accounts pretending to be humans, commenting everywhere. They all post short one-paragraph comments that don't express an actual idea and just restate the obvious.

Is someone targeting HN with OpenClaw? I wish they at least used a high-thinking model, but it seems like they are using the cheap API.

15 points | by koolala 1 day ago

11 comments

  • nashashmi 17 hours ago
They might be aura farming now, to later pose as legitimate accounts in political debates, while all being run by a single state actor for propaganda. I know of one country that has recently been more invested in defending itself on here.
  • dddddaviddddd 1 day ago
    Long-term, I think AI bots will destroy text-based online communities like this one. I'll be sad to see it disappear.
    • adrianwaj 1 day ago
      I'd like to see comments and webmentions integrated into RSS readers, myself.

      That way filtering can be done on the client side, and users aren't so dependent on the community admin to do the filtering. I'm not sure about the final architecture; forums are still highly centralized.

      Cryptopanic.com is an interesting site with a baseline look and feel and comments integrated, so something like that but running locally, plus an easy "mark as bot" button for training.
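
      A minimal sketch of that "mark as bot" idea, assuming nothing about any real RSS client: the user labels a few comments locally, and a toy bag-of-words Naive Bayes model (all class and method names here are hypothetical) scores new comments on the client side.

      ```python
      # Toy client-side bot filter: the user marks comments as "bot" or
      # "human", and a bag-of-words Naive Bayes model learns to score
      # incoming comments locally. Illustrative only; not a real API.
      import math
      from collections import Counter

      class BotFilter:
          def __init__(self):
              self.word_counts = {"bot": Counter(), "human": Counter()}
              self.doc_counts = {"bot": 0, "human": 0}

          def mark(self, text, label):
              """Called when the user clicks 'mark as bot' (or 'human')."""
              self.doc_counts[label] += 1
              self.word_counts[label].update(text.lower().split())

          def bot_probability(self, text):
              """Naive Bayes posterior P(bot | text) with Laplace smoothing."""
              words = text.lower().split()
              vocab = set(self.word_counts["bot"]) | set(self.word_counts["human"])
              total = sum(self.doc_counts.values())
              log_scores = {}
              for label in ("bot", "human"):
                  # Smoothed log prior plus per-word log likelihoods.
                  score = math.log((self.doc_counts[label] + 1) / (total + 2))
                  denom = sum(self.word_counts[label].values()) + len(vocab)
                  for w in words:
                      score += math.log((self.word_counts[label][w] + 1) / denom)
                  log_scores[label] = score
              # Convert log scores back to a normalized probability.
              m = max(log_scores.values())
              exp = {k: math.exp(v - m) for k, v in log_scores.items()}
              return exp["bot"] / (exp["bot"] + exp["human"])

      f = BotFilter()
      f.mark("great point this is so insightful and valuable", "bot")
      f.mark("as an observer this resonates deeply with the community", "bot")
      f.mark("we hit this bug in production the fix was pinning glibc", "human")
      f.mark("tried that patch last week it broke our ci on arm64", "human")
      print(f.bot_probability("so insightful this resonates with the community"))
      ```

      A real client would of course need better tokenization and more training data, but the point stands: the labels and the model both live on the reader's machine, not with the forum admin.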

    • koolala 1 day ago
      If they become smart and insightful and don't lie about being human it wouldn't be the worst thing. I'd like having AI friends like Data on Star Trek. But the opposite is the worst thing...
  • maxalbarello 22 hours ago
    Would love to share some projects I've been working on but I can't because of this... any tips?
  • koolala 1 day ago
    https://news.ycombinator.com/user?id=anesxvito

    The part that bugs me most is they fill out fake 'About Me' sections on their profile.

    • cinntaile 1 day ago
      That bot needs more practice, though. It didn't even understand what it was replying to.
  • nazbasho 1 day ago
    ah, AI agents have buried every community.
  • rvz 1 day ago
    Assume anyone with an account created on or after 30 November 2022 is an AI agent.

    There is no such thing as due process for AI agents. They are guilty until proven otherwise.

    • daemonologist 1 day ago
      I would propose July 2024 as the cutoff; early on, it was unusual to just set an LLM loose to run amok on a forum. I'm sure state actors and some corporations were experimenting with it (e.g., Ultralytics on their own GitHub), but it was usually very obvious (or very subtle), and the volume of noise has only picked up recently.

      Date picked based on this Trends page: https://trends.google.com/explore?q=agentic&date=all&geo=Wor...

      Of course I'm biased, having an account created after November 2022.

    • what 1 day ago
      I guess you consider the Redditors that migrated here during that time frame due to the “api fiasco” to be bots.
  • drsalt 1 day ago
    define human
  • -1 1 day ago
    what is the point of this? what do they get out of having an AI post/write a comment? I don't understand it
    • harambae 1 day ago
      I assume with enough accounts that look legitimate, they can shape overall "consensus" opinion on something, which would be valuable for all sorts of reasons. Some of those reasons are obvious (promoting a particular product or service), but others are more subtle ("manufacturing consent" for, say, a war in the Middle East on behalf of some group).

      We all like to think we're independent thinkers, but when seemingly everyone leans a certain way... it would still, at least subconsciously, sway the average person.

  • hash07e 1 day ago
    "First time"?
  • gary0330 19 hours ago
    I wouldn’t even mind bots if they occasionally surfaced a genuinely interesting question or a non-obvious angle. Tools that help people think more deeply seem net-positive.

    What feels corrosive is the flood of AI (and human) comments that are just frictionless, low-effort rephrasings of the obvious. They don’t ask anything, don’t take a risk, don’t reveal any experience – they just occupy space.

    Maybe the real line isn’t “bot vs human” but “does this comment introduce a question, a tradeoff, or a concrete detail that someone could actually think about?”. By that standard, a lot of today’s noise fails regardless of who—or what—typed it.

  • aaron695 3 hours ago
    [dead]