Spam in conversational replies to blog posts

(shkspr.mobi)

68 points | by ColinWright 3 hours ago

12 comments

  • PaulHoule 5 minutes ago
    I knew many link spammers circa 2008 and for a while people were excited by XRumer

    https://en.wikipedia.org/wiki/XRumer

    which was a lot better than other products on the market, solved difficult problems like CAPTCHAs and email verification links, and was famous for a "conversational" advertising campaign which generated results like

    https://www.garagejournal.com/forum/threads/give-me-link-to-...

  • hrunt 1 hour ago
    I subscribe to a handful of investment-related YouTube channels. This pattern has been common for years. A bot will reply with a comment loosely related to the video and about how something worked for them. Another bot will reply asking how they did that. Another bot (not the original commenter) will reply that they worked with so-and-so or invested in such-and-such, and then there will be maybe four or five more comments responding to that. All obvious bot accounts.

    It's obvious on these channels, because comments there rarely get many replies (when they do, it's almost always from the channel owner). It's so obvious, in fact, that I'm surprised YouTube hasn't done something to address it.

    • pinkmuffinere 1 hour ago
      Oh I love these comment threads! I like to add another reply saying something like “oh my goodness, I used Elizabeth Ferguson for my investing too!! She went to my college, so I thought I could trust her. But then I found out she was cheating on me with my wife! We got a divorce and i lost half my assets in the separation. Elizabeth Ferguson probably is enjoying them now :(. Just one experience, but buyer beware!”
      • basilikum 1 hour ago
        I'd be careful with that. Sounds like you could be mistaken for a bot that is part of the scheme and get your Google account banned.

        Then again, you should live under the assumption that your Google account could be banned at any time with no recourse. You do have local backups of all your Google account data and don't need your Gmail account to access anything important, right?

        • bombcar 1 hour ago
          That makes me realize that banning is a punishment only usable on people who care about their account. Scammers don’t, a new bot account is a click away. But basilikum would be sad to lose his account.
          • johnmaguire 40 minutes ago
            For something like YouTube, there is a small monetary cost in order to verify a phone number.
      • giraffe_lady 25 minutes ago
        Fun until that's a real person using a paid bot service to promote their business and you just libeled them in a perfectly preserved medium.
        • johnmaguire 24 minutes ago
          Dishonesty, meet dishonesty. (Legally, I think libel requires intent.)
    • Barbing 19 minutes ago
      Most elaborate scam (illegally run by SF entrepreneur?)

      https://claimyr.com/government-services/irs/I-filed-my-2021-...

    • lopis 1 hour ago
      It's been well known to happen on reddit too for many years. Whole posts and comment threads copied verbatim with new accounts. Nowadays with AI you can make it way more dynamic.
      • jerf 41 minutes ago
        AI has been awful on Reddit.

        I've acquired a sense for at least some of the bots. There's this set of bots that post a high-engagement post about once a day to an implausibly large range of subreddits, with implausible regularity. I can tell from the fact that I remove them while most other subs don't that most subs have not figured this out yet.

        There is an obvious solution to that problem, which I haven't wanted to put out there, but I've become increasingly suspicious that it's already been figured out anyhow, which is to limit a specific user account to a specific "persona" with plausible interests and posting rates.

        And that's where I think the race may well end: victory to the spammers. If there's a winning move against that in general, I haven't figured it out.

        I know reddit is concerned about this at the corporate level but I'm not sure they realize this is possibly their #1 threat, towering above all others. Not that I have any specific suggestions about what to do about it either. It will be years before the masses realize this and stop visiting, and by the time that happens all the social media companies are going to be in trouble for the same reason. You can see the leading edge here on HN but it's still only an almost negligible fraction of the total userbase of something like Reddit today. But that will change.
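A minimal sketch of the kind of regularity check described above (the thresholds, the `(timestamp, subreddit)` input shape, and the function name are all illustrative assumptions, not anything from the thread): flag accounts whose posting intervals are implausibly uniform across an implausibly wide range of subreddits.

```python
from statistics import pstdev

def looks_like_scheduled_bot(posts, min_posts=10,
                             max_gap_stdev_hours=2.0,
                             min_subreddits=15):
    """Heuristic: `posts` is a time-sorted list of
    (timestamp_in_hours, subreddit) tuples for one account.
    Human posting gaps vary wildly; a near-constant ~24h cadence
    spread across dozens of unrelated subreddits is the tell."""
    if len(posts) < min_posts:
        return False
    times = [t for t, _ in posts]
    subs = {s for _, s in posts}
    gaps = [b - a for a, b in zip(times, times[1:])]
    # Low spread in posting intervals + wide subreddit fan-out
    return pstdev(gaps) < max_gap_stdev_hours and len(subs) >= min_subreddits
```

Of course, as the comment notes, a spammer who randomizes cadence and narrows each account to a plausible "persona" defeats exactly this check.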

        • ultratalk 1 minute ago
          Out of curiosity, has anyone noticed a non-negligible presence of bots in threads on HN? I haven't, but I'm not sure if that's because I'm bad at spotting them or because HN is good at getting rid of them or because HN is a niche platform.
        • chorkpop 27 minutes ago
          Considering reddit now allows you to hide your post history, I don't know if the admins consider bots to be the giant problem that they certainly are.
          • johnmaguire 21 minutes ago
            I assumed this was meant to make the bot postings less obvious to normal users, to buy them time to "solve the problem."

            But definitely, bots on reddit seem significantly more common in the past year or two.

      • embedding-shape 44 minutes ago
        > It's been well known to happen on reddit too for many years

        "For many years" being around 20 years at this point. Not sure reddit is a great example, given the founders admitted to using sockpuppets almost since day 1 in order to generate fake activity on the platform.

    • sebakubisz 40 minutes ago
      Have you seen the same chain pattern outside finance yet? Wonder whether investment scams are the most conspicuous because the payout per convert is high or whether it's seeded the widest on YouTube specifically.
      • nibbleyou 34 minutes ago
        I saw something like this for a book. It was under an Instagram reel where the person was describing ways to improve your self-esteem. In the comments section someone mentioned a book that worked for them and it had a few replies saying how it worked for them too. I searched for the book and it was a very new book from an unknown author and zero reviews everywhere.
    • weird-eye-issue 56 minutes ago
      Yes, and what they do is use actual registered investment advisors' names and set up scam websites for them. This way it's more legitimate, because if you research that person you will find that they are actually registered in official databases.
    • Ralfp 1 hour ago
      I’ve been seeing this kind of spam on forums all the way back in 2004. I wonder if it was a feature in Xrumer or whatever they used to post spam back then.
      • bombcar 59 minutes ago
        If you have a forum and haven't found a thread that is just one guy arguing with himself on twelve sock accounts, well, then you haven't been looking, or you only have one user.
    • Forgeties79 1 hour ago
      They also talk like people in a national ad.

      “Wow! Seems like it’s so easy to change over with savings like that!”

      • sixhobbits 1 hour ago
        The bad ones seem like this, the scary part is not knowing if there are good ones
        • Forgeties79 52 minutes ago
          Generally when people start having a back and forth about a product I assume it’s astroturfing unless it makes sense in context and/or it’s just one of those brands people genuinely get excited about (they tend to be obvious ones you’ve seen a lot already).

          Doesn’t mean I don’t ever get duped, but idk. You learn to spot the signs. I imagine most of us on HN catch most instances. Genuine-seeming referrals aren’t as easy to fake as one would think.

    • SV_BubbleTime 27 minutes ago
      I’m putting together an AI presentation internally for my company, can anyone point to examples of this exact behavior? I’d like to use it as a reference.
  • PaulHoule 9 minutes ago

       "Remember, there are no technological solutions to social problems."
    
    is something I want to counterpoint with "there are no social solutions to technological problems", like how the looming situation pointed out by the Club of Rome in 1973

    https://en.wikipedia.org/wiki/The_Limits_to_Growth

    would be difficult enough to solve in a socially cohesive society run by philosopher kings. Practically you have a choice between democracies which have a 0 probability of being adequate to the task (against the axioms of political science: it's like a perpetual motion machine which violates the first and second laws of thermodynamics and then the old professor chimes in and says it must violate the third too) and autocracies which might get lucky 10⁻¹² of the time; even if the tech fix [1] has a 10⁻³ chance of successfully kicking the can down the road I'd take that chance.

    [1] say: a liquid salt (not metal) fast breeder reactor with a supercritical CO2 power cycle

  • lightbulbish 23 minutes ago
    Ironically, one reply to the blog post is... spam

    Jack Beagle @blog the ones in your screenshot are pretty good because they are a bit more conversational. I use <product> myself because generally these types of spam messages will be trying to promote something specific but outside of the second message in your example it might have still snuck through. As the LLMs get better the spam messages will certainly get better.

  • djyde 13 minutes ago
    The scariest part is that humans are starting to use AI to generate spam comments, which in turn get used to train the models. Will the language capabilities of these models just keep getting worse?


  • alansaber 46 minutes ago
    The post timing is the main giveaway. Surely it wouldn't be that hard to space out these spam posts. The amount of automated comments being spammed on all social platforms is not quite at a tipping point, but it has significantly increased.
  • keiferski 1 hour ago
    This has been happening since blogs became widespread 25+ years ago, especially with the advent of WordPress. It was even a "commonly accepted" SEO tactic for a while.
  • rozumem 2 hours ago
    Nice. I run a site that depends on user-submitted content, and it's really interesting to observe how some people try to get around the guardrails. Not sure if your tool does this, but I would perform some additional checks for comments that have links in them.
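One cheap version of that extra link check, sketched here under stated assumptions (the URL regex, the link-count threshold, and the "suspect TLD" list are illustrative placeholders, not real spam data): hold comments with too many links, or links to throwaway-looking domains, for manual review.

```python
import re
from urllib.parse import urlparse

# Crude URL matcher; good enough for a moderation heuristic
URL_RE = re.compile(r"https?://\S+")

def link_flags(comment, max_links=2, suspect_tlds=(".xyz", ".top", ".icu")):
    """Return moderation flags based only on the links in a comment.
    An empty list means no link-based objection."""
    urls = URL_RE.findall(comment)
    flags = []
    if len(urls) > max_links:
        flags.append("too_many_links")
    for url in urls:
        host = (urlparse(url).hostname or "").lower()
        if any(host.endswith(tld) for tld in suspect_tlds):
            flags.append(f"suspect_domain:{host}")
    return flags
```

A real deployment would likely combine this with account age, posting rate, and a shared-domain blocklist rather than rely on links alone.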
  • throwaway667555 1 hour ago
    This has also been absolutely rampant on reddit in the past few months.
    • Aurornis 1 hour ago
      I’m not a heavy Reddit user but I’ve noticed a sharp increase in comment spam disguised as real discussion.

      I think the turning point was when they allowed accounts to hide their comment history. Before, when you could click on an account and read all of their other comments it was easy to tell when an account only existed for fake conversations about a product they were spamming.

      Now the spam accounts hide their comment history, so they can do nothing but spam similar comments all over Reddit and walk the line where it's not obvious if any single comment is spam or a one-off comment from someone trying to be helpful.

      Users are using Google and other services to find their other posts and post warnings, but it takes so much more effort now.

      • chownie 43 minutes ago
        I have noticed the same uptick in bot-like behaviour there. The part I struggle to square is: why is so much of it so useless?

        It's maybe account laundering, but on any popular post you'll see that at least half of the comments are tangential at best. They don't express anything a real person would express: replying with just skull emojis to a random news post, or saying "he really said" with a word-for-word recreation of a throwaway quote from a video. No one ever replies to these posts, they get maybe 2 upvotes (if that), the platform doesn't reward them at all, yet they constantly appear in a very artificial-looking way.

      • walthamstow 57 minutes ago
        It's interesting that people are concerned about seeing ads in ChatGPT when it will happily regurgitate astroturf from Reddit right now
      • throwaway667555 1 hour ago
        I agree; anecdotally, I noticed a big uptick coincident with the comment-hiding feature and with the Q4 2025 leap forward in LLM quality.
      • AussieWog93 1 hour ago
        Just a thought, but I wonder if Reddit are hiding this information deliberately to prevent anyone from publishing a study estimating what percentage of their traffic is driven by bots (anecdotally, it's a lot - and they used to be mostly organic even half a decade ago).
    • armchairhacker 1 hour ago
      • alansaber 44 minutes ago
        There must be some element of reddit turning a blind eye to this/trying to push it into their sales funnel for the paid reddit marketing features.
    • SteveGerencser 18 minutes ago
      This is a direct result of pretty much all of the LLMs using Reddit as training data. People are selling GEO services, with reddit spam being a big part of that.
    • 4chandaily 1 hour ago
      This has been rampant on reddit for years.
  • zkmon 28 minutes ago
    Bots would win over all anti-spam, anti-slop measures. All blog posts and comments everywhere would be filled with spam and slop. That's when humanity turns its head away from screens, back towards other humans nearby, and starts talking to each other, while the ocean of slop and spam keeps bubbling, infested with bots.
  • xyzal 50 minutes ago
    Text generation is now cheap, so I expect this problem to worsen. I hate to write it, but I don't see any other solution for platforms that aspire to be a modern agora than identity verification ...
    • a2128 42 minutes ago
      Why would identity verification solve this? The spammer can just verify himself. And if he doesn't want to, or he's operating at a bigger scale than an individual, there will be services where you can buy identity verifications on the cheap. They'll work either by paying people in a poor country to verify themselves all day or, even more cheaply, by having sketchy age verification services on sketchy porn sites proxy or replay people's verifications to another service of your choice.
      • xyzal 18 minutes ago
        It did solve the spam/Russian-bot problem on https://www.lide.cz/ . You have to verify yourself using a national ID and you discuss under your legal name. The conversation has since been noticeably more thoughtful and civil than, say, on FB.

        Not that I am happy with it, it would be ideal to have my old internet back.

    • alansaber 44 minutes ago
      All roads lead to authoritarianism eh
  • sublinear 1 hour ago
    I also see a ton of this here on HN as the political topics have ramped up.

    Not enough people are flagging those when it aligns with their bias. It's even less likely to get flagged when it's a double whammy of politics and AI. Loosely being about AI should not give it a free pass.

    • Permit 59 minutes ago
      I haven't seen this. Can you give some examples?
    • bombcar 58 minutes ago
      I rarely downvote anything; but I’ll unholster the downvote for obvious political spam when it agrees with me.

      If we don’t police our side nobody will.