Discourse Is Not Going Closed Source

(blog.discourse.org)

63 points | by sams99 2 hours ago

6 comments

  • dhruv3006 1 hour ago
    > Open source creates a useful urgency: when your code is public, you assume it will be examined closely, so you invest earlier and more aggressively in finding and fixing issues before attackers do.

    This should be the mentality of every company doing open source. Great points made.

    • necovek 1 hour ago
      This should be a mentality of every company building products :)
      • dhruv3006 1 hour ago
        I guess open source makes you more accountable.
  • chrismorgan 1 hour ago
    > I want to be fair to Cal.com here, because I don’t think they’re acting in bad faith. I just think the security argument is a convenient frame for decisions that are actually about something else. […] Framing a business decision as a security imperative does a disservice to the open-source ecosystem that helped Cal.com get to where they are.

    That sure sounds like bad faith to me.

    • LoganDark 1 hour ago
      Bad faith requires you to intend it badly, though, not just for it to be bad.
      • chrismorgan 1 hour ago
        Framing a business decision as a security imperative sure sounds like intent to mislead to me.
        • LoganDark 34 minutes ago
          Misdirection is normal business practice. For example, Quadpay/Zipco recently made a change where, instead of appraising your credit independently for each of their plans, they calculate a total amount you're allowed to have in flight at any given time and share that across everything. Their FAQ has an entry for "Is my purchasing power going down?" and the answer is some bullshit like "Your purchasing power is unified for a simpler and more streamlined experience bla bla", which doesn't actually answer the question. It's meant to defuse questioners without revealing that yes, total purchasing power did go down when they decreased the number of buckets that multiplied their appraisal. You're no longer allowed to pay a larger sum of money over a longer period of time; you get one amount that you're allowed over any term, and that amount is lower than what you could've been approved for before.

          Regardless of whether that's a good or bad decision (good for people with bad impulse control, for example), they are dishonest about it through lawyerspeak, which is the most standard business practice there is. You could argue that plenty of standard business practices are bad faith, but I would say the capitalist idea of private corporations in the first place is bad faith.
      • Gigachad 1 hour ago
        The above statement is claiming it likely is intended as something bad, though: a convenient cover-up.
        • LoganDark 1 hour ago
          Covering something up is not bad faith. PR firms do it all the time (though plenty more do things in bad faith too). If what you're covering up is an explicitly user-hostile decision then maybe that's bad faith if what you're trying to do is trick people. But if you're just lying for brownie points then that's not always bad faith, just dumb.
          • pseudalopex 1 hour ago
            Hiding something to manipulate public perception is bad faith.
          • saghm 31 minutes ago
            I don't agree with your definition here. Good faith means trying to be correct but potentially not being by accident. Intentionally lying is bad faith and by definition trying to trick people; you know the truth is one thing, but you're saying something else to try to get them to believe it.
            • LoganDark 23 minutes ago
              What I'm saying is that even lying is only bad faith depending on the intent of the lie. That doesn't mean others can't be upset regardless of the lie's intent, but I wouldn't say all lies are bad faith.
              • pseudalopex 10 minutes ago
                > I wouldn't say all lies are bad faith.

                No one said this.

      • croes 1 hour ago
        > dishonest or unacceptable behaviour:

        https://dictionary.cambridge.org/dictionary/english/bad-fait...

        > I just think the security argument is a convenient frame for decisions that are actually about something else.

        That would mean they think it’s bad faith. Claiming to do something because of A while really doing it because of B is dishonest.

  • shevy-java 19 minutes ago
    "over a decade ago, the repository has been licensed under GPLv2. And that’s not changing"

    Well - people can continue the GPLv2 fork anyway. So ultimately what Cal.com would do here does not matter; that's the beauty of GPL in general. It is a strict licence. I think GPLv2 was the better decision for the Linux kernel than, say, BSD/MIT.

    > That code is exposed to constant scrutiny from attackers, defenders, researchers, cloud vendors, and maintainers across the globe. It is attacked relentlessly, but it is also hardened relentlessly.

    It is clear that there is a business decision behind Cal.com's move, but the claim that open source is automatically better than closed source when it comes to security is also strange. Remember the xz-utils backdoor? People noticed it eventually, OK. But how many planted trojans exist that people are unaware of? Perhaps there are more sophisticated backdoors; perhaps AI is also used to help disguise them. I don't think that merely being open source automatically makes something good, or better with regard to security.

    Can you trust software? In California there are recent censorship bills to restrict 3D printing further, allegedly to curb plastic guns (but in reality sponsored by industry lobbyists). Can a 3D printer print out a 3D printer that is not restricted? Is the state sniffing after people via laws not also a restriction? I guess it is possible to ensure a clean open-hardware and open-software system acting in tandem, but you kind of have to show that this is the case. See this old discussion about trust on reddit: https://old.reddit.com/r/programming/comments/1m4mwn/a_simpl...

  • chrismorgan 1 hour ago
    > Large parts of it are delivered straight into the user’s browser on every request: JavaScript, …

    Ooh, now I want to try convincing people to return from JS-heavy single-page apps to multi-page apps using normal HTML forms and minimal JS only to enhance what already works without it—in the name of security.

    (C’mon, let a bloke dream.)

    • ironmagma 1 hour ago
      There are a lot of things to hate in the Web3 world. Lack of back button form resubmission or redirect loops is a strange thing to dislike though.
    • kelsey98765431 1 hour ago
      The web has grown so hostile lately that JavaScript is honestly not safe or useful anymore. The only thing it's used for is serving ads, trackers, and paywalls. If I can't read a website with no scripts enabled, it's not meant for me and I'm just not reading it.
      • bruce511 1 hour ago
        I concur that most web sites could use less JavaScript. And a lot of (but not all) cosmetic uses for JavaScript can be done in CSS.

        Of course for web apps (as distinct from web sites) most of what we do would be impossible without JavaScript. Infinite scrolling, maps (moving and zooming), field validation on entry, asynchronous page updates, web sockets, all require JavaScript.

        Of course JavaScript is abused. But it's clearly safe and useful when used well.
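        The "enhance what already works" approach mentioned upthread can be sketched briefly. This is a hypothetical example, not from any real site: the `data-enhance` attribute, the `.error` element, and the field names are my own illustrative assumptions. The form posts normally with JavaScript disabled; when a DOM is available, a pure validator runs before submit to save a round trip.

```javascript
// Sketch only: a standalone field validator plus optional browser wiring.
// The data-enhance attribute and .error element are illustrative assumptions.
function validateEmail(value) {
  // loose shape check: something@something.tld
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

// Only enhance when a DOM is actually present; otherwise the form
// still posts normally and the server re-renders any errors.
if (typeof document !== "undefined") {
  const form = document.querySelector("form[data-enhance]");
  if (form) {
    form.addEventListener("submit", (event) => {
      const email = form.querySelector("input[name=email]").value;
      if (!validateEmail(email)) {
        event.preventDefault(); // skip the round trip, show the error inline
        form.querySelector(".error").textContent = "Please enter a valid email";
      }
    });
  }
}
```

        Because the validator is pure and the DOM wiring is guarded, the same script is harmless on the server and additive in the browser, which is the whole point of the pattern.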

  • LoganDark 1 hour ago
    This article raises a lot of good points that strengthen the argument against keeping models locked away just because they're "too powerful". I remain disappointed to see AI corporations gloating about how powerful the private models are that they won't provide to anyone outside a special whitelist. That's more likely to give attackers a way in without any possibility of defense, not the other way around.
    • NitpickLawyer 1 hour ago
      I think the "too powerful" is a convenient half-truth that also helps with marketing, and more importantly keeps the model from being distilled in the short term. They'll release it "to the masses" after KYC or after they already have the next gen for "trusted partners".
      • LoganDark 1 hour ago
        I feel bad for Anthropic because they thought Persona was an acceptable KYC provider. It probably was a genuine mistake. I might have to leave them over that, if they think it's fun to ask me to give Peter Thiel my ID to persist indefinitely on Persona's servers!!!
  • jonahs197 51 minutes ago
    Never used it as it asks me to burn an email address to post.