Investigating Split Locks on x86-64

(chipsandcheese.com)

46 points | by ingve 3 days ago

2 comments

  • anematode 4 hours ago
    Cool investigation. This part perplexes me, though:

    > Games have apparently been using split locks for quite a while, and have not created issues even on AMD’s Zen 2 and Zen 5.

    For the life of me I don't understand why you'd ever want to do an atomic operation that's not naturally aligned, let alone one split across cache lines....

    • toast0 3 hours ago
      > For the life of me I don't understand why you'd ever want to do an atomic operation that's not naturally aligned, let alone one split across cache lines....

      I assume they force-packed their structures and they’re poorly aligned. x86 doesn’t fault on unaligned access, and Windows doesn’t detect and punish split locks, so while you’d probably get better performance with proper alignment, it might not be a meaningful improvement on the majority of the machines running the program.

      • anematode 3 hours ago
        Ah, that's a great hypothesis. I wonder, then, how it works with x86 emulation on ARM. IIRC, atomic ops on ARM fault if the address isn't naturally aligned... but I guess the runtime could intercept that and handle it slowly.
        • omcnoe 2 hours ago
          ARM Macs apparently have some kind of specific handling in place for this when a process is running with x86_64 compatibility, but it’s not publicly documented anywhere that I can see.
        • BobbyTables2 3 hours ago
          An emulated x86 atomic instruction wouldn’t need to use atomic instructions on ARM.
          • dooglius 3 hours ago
            Why not?
            • MBCook 2 hours ago
              They don’t have to match.

              As an example, take a divide instruction. A machine without an FPU can emulate a machine that has one; it will legitimately have to run hundreds or thousands of host instructions to emulate a single divide, and it will certainly take longer.

              That’s OK. It just means the emulation is slower there than for something like add, where the host has a native instruction. In ‘emulator time’ you still only ran one instruction, so that world is still consistent.

              • anematode 1 hour ago
                ? That’s not how Windows on ARM emulation works. It uses dynamic JIT translation from x86 to ARM. When the translator sees, e.g., lock add [mem], reg, presumably it’ll emit an ldadd, but that will have different semantics if the operand is misaligned.
  • strstr 1 hour ago
    Split locks are weird. It’s never been obvious to me why you’d want to do them unless you’re on a small-core-count system. When split lock detection rolled out for Linux, it massacred perf for some games (which were probably min-maxing single-core perf and didn’t care about noisy-neighbor effects).

    Frankly, I’m surprised split lock detection is enabled anywhere outside of multi-tenant clouds.
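For reference, the Linux behavior in question is controlled by the `split_lock_detect=` kernel boot parameter. The value descriptions below are from my recollection of the kernel docs; verify against Documentation/admin-guide/kernel-parameters.txt for your kernel version.

```
split_lock_detect=off    # do not detect split locks
split_lock_detect=warn   # log the offending task and slow it down (default on capable CPUs)
split_lock_detect=fatal  # send SIGBUS to the offending task
```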