The first 40 months of the AI era

(lzon.ca)

75 points | by jpmitchell 4 hours ago

10 comments

  • OrangePilled 1 minute ago
    This is a sound personal assessment.

    The section about being "glazed" into action resonates. Hidden within this concept I think is something profound about human motivation, innuendo and all.

    > AI generated prose is at best boring, and at worst genuinely unappealing. I’m continually tempted, because in theory it should work well. The AI has perfect spelling and grammar, has more than enough context to produce article-length content, and can do in seconds what takes me hours.

    I have a thesis in mind...that there is something fundamental to the human spirit that relishes a sort of friction that LLMs cannot observe or reproduce on their own.

  • tkgally 8 minutes ago
    Nice observation about AI-generated content:

    > I’ve had the idea that from a social perspective it’d be regarded like plastic surgery, in that it only looks weird when it’s over-done, or done badly.

  • aidos 39 minutes ago
    > To what degree did I expand scope because I knew I could do more using the AI?

    Someone at work recently termed this “Claude Creep”. The ease of generating things pushes you towards going further, but the reality is that you’re setting yourself up for more and more work to get them over the line.

    • ares623 38 minutes ago
      And just like that, a new term has been coined.
  • H8crilA 36 minutes ago
    Do you regularly find text content that you know is AI written (but is not marked as such)? Because honestly I don't, and it must exist in decent quantity by now. Or perhaps it's still sparse?
    • bonoboTP 0 minutes ago
      Yes, often, including here on HN and on Substack. If I point it out, it doesn't lead to anything good: many don't recognize it, many do, the author gets defensive, etc.

      This article doesn't have the tells, it looks human written.

    • jeffreyrogers 9 minutes ago
      Yes, here, reddit, X, at work in people's emails and status reports.
    • etherus 25 minutes ago
      Have a look here [1] and here [2] - I think they are good resources, though fallible in the long run. So yes, I do, often confirmed by communication with people I know (i.e. I suspect they have used AI to make something -> I ask). This falls victim to confirmation bias, though. I suspect a nontrivial amount of writing I read is AI generated without me realising, and I'm wary also of falsely flagging content that is actually from humans.

      [1] https://en.wikipedia.org/wiki/Wikipedia%3AAI_or_not_quiz [2] https://en.wikipedia.org/wiki/Wikipedia%3ASigns_of_AI_writin...

      • H8crilA 8 minutes ago
        Okay, but the answers in [1] look something like:

        AI generated. Some of the clues include:

        - Most obviously, a failed ISBN checksum

        - Other source-to-text integrity issues; for example, the WWF source says very little about Malaysia specifically, only mentions Sunda tigers (Panthera tigris sondaica), and does not mention tapirs at all

        - Very short yet consistent paragraph length

        - Generic "see also" links, one of which is redlinked

        This is not the sort of thing that I pay attention to unless I'm doing detailed research. And even then I'd probably have a bot check these for me, ironically, since it's such a mechanical job. At the very least detecting AI like this requires conscious effort.
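        For what it's worth, the ISBN part really is as mechanical as it sounds. A minimal sketch of an ISBN-13 checksum validator (function name is illustrative, not from any particular tool):

        ```python
        def isbn13_valid(isbn: str) -> bool:
            """Check the ISBN-13 checksum: digits weighted 1,3,1,3,...
            must sum to a multiple of 10."""
            # Ignore hyphens/spaces; keep only the digits.
            digits = [int(c) for c in isbn if c.isdigit()]
            if len(digits) != 13:
                return False
            total = sum(d * (1 if i % 2 == 0 else 3)
                        for i, d in enumerate(digits))
            return total % 10 == 0

        # A well-formed ISBN passes; flipping one digit fails the checksum.
        print(isbn13_valid("978-0-306-40615-7"))  # True
        print(isbn13_valid("978-0-306-40615-8"))  # False
        ```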

    • insin 13 minutes ago
      Literally every day from green accounts on Hacker News, and in many, many TFAs.
    • htnthrow11220 26 minutes ago
      I see it all the time in basically every form of text communication. What makes you think you are not seeing it?
  • insin 16 minutes ago
    *LLM
  • sudo_man 54 minutes ago
    Bro, but... now you have a business that is planned by a paid chatbot; they can shut it down anytime or make it more expensive. Also it is impossible to get something new, you are copying from somewhere else, and maybe what Claude is copying has copyrights on it, like leaked code etc. Also your brain will slowly shut down from thinking about 'business', so you will heavily rely on Claude in the future :)

    My friend is trying to do the same, the Docker stack he made for his SaaS is really amazing, it is following the standards from the ancient age.

    • fnord77 47 minutes ago
      > now you have a business that is planned by a paid chatbot; they can shut it down anytime or make it more expensive

      Local models are about 25 months behind the current SOTA. If that holds, businesses won't need the paid models for many things.

  • ivanjermakov 17 minutes ago
    > 40 months

    Not counting from 1971's DARPA? Sorry, I'm allergic to LLMs being called AI, as if nothing existed before them.

    • KellyCriterion 10 minutes ago
      Could the "LLM" of 1971 DARPA produce working code that it translated from a legacy codebase to Java and this within a short timeframe? ;-)
    • dr_dshiv 10 minutes ago
      Doesn’t it all look like child’s play though?