Six Harmless Bugs Lead to Remote Code Execution

(mehmetince.net)

93 points | by ozirus 3 days ago

5 comments

  • kichik 14 hours ago
    Nice chain and write-up. I don't know that I would call eval() on user input, hard-coded secrets, and leaked credentials small or harmless. All of those are scary on their own.
    • arcfour 10 hours ago
      Yeah...and the fact that they evidently had no responsible disclosure process and ghosted the reporter...for a security product?!

      Big yikes.

  • x0x0 10 hours ago
    This write-up is great, particularly the discussion of how Mehmet worked through understanding the system.

    That said, Logpoint sell a SIEM product without a vulnerability intake process, and they can't manage to rapidly patch pre-auth RCE holes. There's nothing to say besides: Logpoint are not serious people and nobody should use their nonsense. Given the number of bugs found and how shallow they sat, security wasn't even an afterthought; it was not thought about at all.

  • rob_c 2 hours ago
    1) routing (mis-)configuration - the key to the remote exploit. Routing is something people should always double-check when they don't understand how it works.

    2) hard-coded secrets - flatly against best practice. Don't do this, _ever_. There's a reason secret stores and secure enclaves exist; not working one into your workflow is only excusable when you're stuck with black-box proprietary tools. (A sketch of the alternative follows the list.)

    3) hidden user - again against best practice, inviting feature creep via permission creep. If you really need a privileged, hidden, remotely accessible account, at least restrict where it can be used from and log _everything_.

    4) SSRF - bad, but if the service is properly isolated it's much less of an issue. Technically against best practice again, yet widely shipped in production. (A mitigation sketch follows the list.)

    5) Python eval() in production - no, no, no, no. Never, _ever_ do this. It's just asking for trouble in anything tied to remote agents, unless the whole point of the tool is shell replication. (See the stdlib alternative after the list.)

    6) static AES keys / blindly treating successful decryption as proof of trusted origin - see bug 2. Don't use encryption as origin verification when any client holding the key can do _bad_ things. (The last sketch below shows why.)
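
    On 2): a minimal sketch of the alternative, assuming a Python service; APP_SIGNING_KEY is a made-up variable name for illustration:

        import os

        def load_signing_key() -> bytes:
            # Read the secret from the environment (populated by a
            # secret manager or deploy tooling) rather than baking it
            # into the source tree.
            raw = os.environ.get("APP_SIGNING_KEY")
            if not raw:
                raise RuntimeError("APP_SIGNING_KEY not set; refusing to start")
            return raw.encode()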
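
    On 4): a rough sketch of one common SSRF mitigation - resolve the target and refuse anything non-public before fetching. It deliberately ignores DNS rebinding (the name re-resolving differently for the real request), which a production fix also has to handle:

        import ipaddress
        import socket
        from urllib.parse import urlparse

        def is_fetchable(url: str) -> bool:
            # Allow only plain http(s) to hosts that resolve to
            # public addresses.
            parsed = urlparse(url)
            if parsed.scheme not in ("http", "https") or not parsed.hostname:
                return False
            try:
                infos = socket.getaddrinfo(parsed.hostname, parsed.port or 80)
            except socket.gaierror:
                return False
            for _family, _type, _proto, _name, sockaddr in infos:
                if not ipaddress.ip_address(sockaddr[0]).is_global:
                    return False
            return True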
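
    On 5): the stdlib escape hatch for the common "parse this value" case is ast.literal_eval, which accepts only Python literals and raises on everything else:

        import ast

        # eval() will happily execute code smuggled into a "data" string:
        #   eval("__import__('os').system('id')")   # attacker wins
        # ast.literal_eval() parses literals without handing the input
        # an interpreter.
        def parse_untrusted(raw: str):
            try:
                return ast.literal_eval(raw)
            except (ValueError, SyntaxError) as exc:
                raise ValueError(f"refusing to evaluate: {raw!r}") from exc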
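
    On 6) (and 2)): a toy demo of why "it decrypted, so it's ours" fails once the key ships with the product. It uses the cryptography package's Fernet; generate_key() here stands in for a key an attacker has already extracted from the install:

        from cryptography.fernet import Fernet

        # Stand-in for the static key baked into every install; the
        # attacker has pulled this same key out of the product.
        STATIC_KEY = Fernet.generate_key()

        server = Fernet(STATIC_KEY)
        attacker = Fernet(STATIC_KEY)

        forged = attacker.encrypt(b'{"cmd": "run", "arg": "id"}')
        # Decryption succeeds, so a server treating "decrypts fine" as
        # "came from a trusted peer" will act on the forged message.
        print(server.decrypt(forged))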

    Parsing all of that was... well... yeah, I can see why it turned into a mess. The main thing missing from the write-up is a clear high-level picture of the situation, as opposed to a teardown of individual bugs plus a brain dump.

  • AlienLang 7 hours ago
    This is a great example of how small assumptions compound. None of these bugs look critical in isolation, but together they completely change the threat model.
    • boston_clone 7 hours ago
      Thanks, ChatGPT.
      • patates 7 hours ago
        For fun (not because I'm trying to defend the capabilities of LLMs, but really just out of boredom), I tried to make this better with this prompt:

        > make this sound less AI-generated and more like a proper hacker news comment with some SUBSTANCE: (insert the OP comment from above)

        and Gemini 3 Flash generated this:

        > This is exactly why CVSS scores can be misleading in practice. You see this constantly at Pwn2Own: it’s rarely one massive architectural failure, but a daisy-chain of three or four "low severity" quirks—an info leak here, a minor logic error there.

        > To a product manager, these look like edge cases or wontfix tickets. But to an attacker, they aren't bugs; they’re primitives. You use the info leak to bypass ASLR, the logic error to groom the heap, and suddenly a nuisance crash becomes full RCE. We tend to model threats based on distinct barriers, ignoring that the complexity of the system guarantees these gadgets will eventually align.

        It feels fun when you play with it yourself, but it's really boring to read content others have generated (so I'm sorry for adding to the pile - I just wanted to see whether the "HN style" was already baked into the LLMs, and to share the result: yes, it is).

        • jacquesm 7 hours ago
          I wish there were a 9th bit we could use to tag AI-generated content with.
          • patates 6 hours ago
            The 9th bit is the Colour:

            > I think it's time for computer people to take Colour more seriously

            Source: https://ansuz.sooke.bc.ca/entry/23, "What Colour are your bits?"

            • jacquesm 5 hours ago
              Yes, that's what I had in mind.
          • josefx 1 hour ago
            There is the evil bit RFC (RFC 3514) for IPv4.
          • amelius 5 hours ago
            Unicode can maybe invent an escape code.
            • jacquesm 5 hours ago
              That is one law I could actually get behind: an absolute requirement to label any and all AI output using a duplicate of all of Unicode that looks the same and feels the same but lives in a different code-point space.

              And then browsers and text editors could render this according to the user's settings.

              • amelius 5 hours ago
                Yes, it would help if they started with just whitespace and punctuation. That alone would give a big clue as to what is AI-generated.

                In fact, using a different scheme, we can start now:

                    U+200B — ZERO WIDTH SPACE
                
                Require that any space in AI output is followed by this zero-width character. If this is not acceptable, then maybe apply a similar rule to the period character (so the number of "odd" characters drops to one per sentence).
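
                A toy version of the scheme, to make the rule concrete (the 90% threshold is an arbitrary choice for illustration):

                    ZWSP = "\u200b"  # U+200B ZERO WIDTH SPACE

                    def tag_ai_text(text: str) -> str:
                        # Follow every ordinary space with the marker.
                        return text.replace(" ", " " + ZWSP)

                    def looks_tagged(text: str) -> bool:
                        # AI-tagged if nearly all spaces carry the marker.
                        spaces = text.count(" ")
                        return spaces > 0 and text.count(" " + ZWSP) / spaces >= 0.9

                    print(looks_tagged(tag_ai_text("written by a machine")))  # True
                    print(looks_tagged("written by a person"))                # False
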
                • tgv 4 hours ago
                  Unfortunately, people here know their way around tools to take out the markers. Probably someone will vibe up a browser plugin for it.
                  • patates 4 hours ago
                    I sometimes use AI to fix my English (especially when I'm trying to say something that pushes my grammar to its limits), and people like me could use a marker like that to inform others. Bad actors will always do weird stuff; this is more about people like me who want to be honest but for whom signing every comment with "(generated/edited with AI)" is too much noise.
                    • tgv 2 hours ago
                      A bit of advice: don't copy and paste the LLM's output; actively read and memorize it (phrase by phrase), then edit your own text. It helps develop your competence. Not a lot, and it takes time, but consciously improving your own writing helps.
                      • patates 2 hours ago
                        Thank you for the advice, I'll try next time!
                    • amelius 4 hours ago
                      Yes, and I think the big AI companies will want to have AI-generated data tagged, because otherwise it would spoil their training data in the long run.
                      • jacquesm 3 hours ago
                        I would not be at all surprised if they already watermark their output but just didn't bother to tell us about it.
        • Zephilinox 6 hours ago
          Both of those responses still clearly sound like AI, though.
          • patates 5 hours ago
            Totally! And even if they didn't, I'm still for labelling AI-generated content.

            It's just that when someone's going to generate something, they should at least put a little more thought into the prompt.