A case study in PDF forensics: The Epstein PDFs

(pdfa.org)

112 points | by DuffJohnson 2 hours ago

10 comments

  • ted_bunny 34 minutes ago
    Has anyone analysed JE's writing style and looked for matches in archived 4chan posts or content from similar platforms? Same with Ghislaine; there should be enough data to identify them at this point, right? I don't buy the MaxwellHill claims, for various reasons, but that doesn't mean there's nothing to find.
    • Der_Einzige 2 minutes ago
      Stylometry is extremely effective even with simple n-gram analysis. There's a demo that can pick out who you are on HN from just a few paragraphs of your own writing, using nothing more than n-gram features.

      https://news.ycombinator.com/item?id=33755016

      You can also unironically spot most types of AI writing this way. The approaches that train another transformer to detect "AI generated" content are the wrong way to go about it.
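
      For a sense of how little machinery this takes, here's a minimal sketch of character n-gram attribution in Python (an illustration of the general technique, not the linked demo's actual code; all names here are made up):

        import re
        from collections import Counter
        from math import sqrt

        def char_ngrams(text, n=3):
            # Profile a text as a bag of character trigrams.
            text = re.sub(r"\s+", " ", text.lower())
            return Counter(text[i:i + n] for i in range(len(text) - n + 1))

        def cosine(a, b):
            # Cosine similarity between two n-gram frequency profiles.
            dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
            norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        def rank_authors(unknown_text, candidate_texts):
            # Rank {name: sample} candidates by similarity to the unknown text.
            profile = char_ngrams(unknown_text)
            return sorted(((cosine(profile, char_ngrams(t)), name)
                           for name, t in candidate_texts.items()), reverse=True)

      Even something this crude can separate writers once you have a few paragraphs per candidate, which is the point.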

    • kmeisthax 27 minutes ago
      I'm pretty sure Epstein tried to meet with moot at least once: https://www.jmail.world/search?q=chris+poole
      • acessoproibido 4 minutes ago
        That is a crazy amount of emails from/about moot...
      • nubg 5 minutes ago
        He met with moot ("he is sensitive, be gentile" [sic]; search on jmail), and within a few days the /pol/ board was created, starting a culture war in the US that led to Trump being elected president. Absolutely nuts.
        • acessoproibido 2 minutes ago
          I always wondered how much cultural influence 4chan actually had (has?) - so much of the mindset and vernacular that was popular there 10+ years ago is now completely mainstream.
        • GaryBluto 2 minutes ago
          /pol/ in no way started the American culture war. It was brewing for a while.
  • waynenilsen 1 hour ago
    > Information leakage may also be occurring via PDF comments or orphaned objects inside compressed object streams, as I discovered above.

    hopefully someone is independently archiving all documents

    my understanding is that some are being removed

    • some_random 1 hour ago
      Are they being removed or replaced with more heavily redacted documents? There were definitely some victim names that slipped through the cracks that have since been redacted.
    • embedding-shape 1 hour ago
      Initially, under "Epstein Files Transparency Act (H.R.4405)" on https://www.justice.gov/epstein/doj-disclosures, all datasets had .zip links. I first saw that page when all but dataset 11 (or 10) had a .zip link. At one point this morning all the .zip links were removed; now it seems like most are back again.
    • littlecorner 53 minutes ago
      I think some of the released documents included images of victims, which were redacted. So the removals aren't necessarily malicious.
      • dylan604 11 minutes ago
        That's my understanding too, so archiving the unredacted images could mean holding CSAM.
  • originalvichy 1 hour ago
    Any guesses why some of the newest files seem to have random "=" characters in the text? My first thought was OCR, but the characters don't seem linked to glyphs like "E" that an OCR tool could misread. My second guess is that it's meant to make reliable text searches harder, but probably 90% of HN readers could build a search tool that doesn't fall apart when it hits a "=" (although making this work for long search queries would slow the search down).
    • torh 1 hour ago
      Was on the frontpage yesterday: https://news.ycombinator.com/item?id=46868759
    • ripe 1 hour ago
      The equals characters are due to poor handling of quoted-printable encoding in the emails.

      The author of gnus, Lars Ingebrigtsen, wrote a blog post explaining this. His post was on the HN front page today.
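
      Undoing it after the fact is trivial; a minimal sketch with Python's standard library (the sample string is made up for illustration):

        import quopri

        # Quoted-printable escapes non-ASCII bytes as =XX hex pairs and wraps long
        # lines with a trailing "=" (a soft line break); decoding reverses both.
        raw = b"This line was wrapped mid-wo=\r\nrd, and =E2=80=9Cquotes=E2=80=9D were escaped."
        print(quopri.decodestring(raw).decode("utf-8"))
        # -> This line was wrapped mid-word, and “quotes” were escaped.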

      • originalvichy 55 minutes ago
        He explained the newline thing that confused me. Good read!
  • embedding-shape 1 hour ago
    Re the OCR: I'm currently running allenai/olmocr-2-7b against all the PDFs with text in them and comparing with the OCR the DOJ provided. A lot of it doesn't match, and olmocr-2-7b is surprisingly good at this. However, after extracting the pages from the PDFs, I'm sitting on ~500K images to OCR, so it's taking quite a while to run through.
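
    For the curious, the extract-and-compare part can be sketched like this (illustrative only, not my exact setup; PyMuPDF for rendering, difflib for a crude agreement score):

      import difflib
      from pathlib import Path

      import fitz  # PyMuPDF

      def extract_pages(pdf_path, out_dir, dpi=200):
          # Render each PDF page to a PNG for the OCR model to consume.
          out = Path(out_dir)
          out.mkdir(parents=True, exist_ok=True)
          with fitz.open(pdf_path) as doc:
              for i, page in enumerate(doc):
                  page.get_pixmap(dpi=dpi).save(str(out / f"page_{i:04d}.png"))

      def agreement(model_text, doj_text):
          # Crude similarity score between model OCR and the DOJ-provided text.
          return difflib.SequenceMatcher(None, model_text, doj_text).ratio()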
    • originalvichy 1 hour ago
      Did you take any steps to downscale the images, in case that improves performance? I haven't tried this, as I haven't performed an OCR task like this with an LLM. I'd be interested to know at what size the VLM can no longer make out the text reliably.
      • embedding-shape 1 hour ago
        The performance is OK; it takes a couple of seconds at most on my GPU. It's just the number of documents to get through that takes time, even with parallelism. The dimensions seem fine as they are, as far as I can tell.
    • helterskelter 1 hour ago
      [flagged]
      • embedding-shape 1 hour ago
        Haven't seen anything particular about that, but lots of the documents with half-redacted names contain OCR'd text that is completely garbled, while olmocr-2-7b seems to handle them just fine. Unsure if they just had sucky processes or if there's something else going on.
        • helterskelter 1 hour ago
          Might be a good fit for uploading to a git repo and crowdsourcing
          • embedding-shape 15 minutes ago
            Was my first impulse too, but I'm not sure I'd trust that unless I could gather a bunch of people I trust, which would mean I'd no longer be anonymous. Kind of a catch-22.
  • _def 46 minutes ago
    I can't even download the archive; the transfer always terminates just before it's finished. Spooky.
  • bugeats 1 hour ago
    Somebody ought to train an LLM exclusively on this text, just for funsies.
    • pc86 55 minutes ago
      DeepSeek-V4-JEE
  • corygarms 1 hour ago
    These folks must really have their hands full with the 3M+ pages that were recently released. Hoping for an update once they expand this work to those new files.
  • nkozyra 1 hour ago
    > DoJ explicitly avoids JPEG images in the PDFs probably because they appreciate that JPEGs often contain identifiable information, such as EXIF, IPTC, or XMP metadata

    Maybe I'm underestimating the full scope of the issue, but isn't this a very lightweight problem to solve? Is converting the images to lower-DPI formats/versions really any easier than just stripping the metadata? Surely the DOJ and similar justice agencies have been aware of this, and doing it, for decades at this point, right?
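
    For what it's worth, the stripping half really is lightweight; a minimal sketch with Pillow (it rebuilds the JPEG from pixel data alone, dropping EXIF/IPTC/XMP, though it does nothing about pixel-level fingerprints like the sensor noise mentioned downthread):

      from PIL import Image

      def strip_metadata(src, dst):
          # Rebuild the image from raw pixel data; EXIF/IPTC/XMP live in
          # separate JPEG segments and simply never get copied over.
          with Image.open(src) as im:
              pixels_only = Image.frombytes(im.mode, im.size, im.tobytes())
              pixels_only.save(dst, "JPEG", quality=95)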

    • originalvichy 1 hour ago
      Maybe they know more than we do. It may be possible to tamper with files at a deeper level. I wonder if it's also possible to use some sort of tampered compression algorithm that could watermark images, much like printers do with paper.

      Another guess is that this step is part of a multi-step sanitization process, and the last step(s) perform the bitmap operation.

      • normalaccess 53 minutes ago
        I'm not sure about computer-generated images, but you can (relatively) easily fingerprint images from digital cameras via sensor defects (PRNU). I'll bet there's a similar problem with PC image generation, where even without the EXIF data there's probably still too much side-channel data leakage.
  • meidan_y 2 hours ago
    (2025) - just follow the HN guidelines. Impressive vote ring, though.
    • alain94040 2 hours ago
      We're in early February 2025 [edit:2026] and the article was written on Dec 23, 2025, which makes it less than two months old. I think it's ok not to include a year in the submission title in that case.

      I personally understand a year in the submission as a warning that the article may not be up to date.

      • petepete 2 hours ago
        We're in Feb 2026.

        I'm not used to typing it yet, either.

      • embedding-shape 1 hour ago
        It's less about the age, and more about avoiding confusion over what they're analyzing, given the files that were just released like a week ago.
      • michaelmcdonald 2 hours ago
        "We're in early February ~2025~ *2026*"
      • GlitchRider47 2 hours ago
        Generally, I'd agree with you. However, the recent Epstein file dump was in 2026, not 2025, so I would say it is relevant in this case.
  • tibbon 2 hours ago
    That's a lot of PeDoFiles!

    (But seriously, great work here!)