6 comments

  • sync 2 hours ago
    This is essentially a (vibe-coded?) wrapper around PaddleOCR: https://github.com/PaddlePaddle/PaddleOCR

    The "guts" are here: https://github.com/majcheradam/ocrbase/blob/7706ef79493c47e8...

    • M4R5H4LL 12 minutes ago
      Most production software is wrappers around existing libraries. The relevant question is whether this wrapper adds operational or usability value, not whether it reimplements OCR. If there are architectural or reliability concerns, it’d be more useful to call those out directly.
    • Oras 1 hour ago
      Claude is included in the contributors, so the OP didn’t hide it
    • Tiberium 59 minutes ago
      At this point it feels like HN is becoming more like Reddit: most people upvote before actually checking the repo.
  • v3ss0n 3 hours ago
    How is this better than Surya/Marker or kreuzberg? https://github.com/kreuzberg-dev/kreuzberg
  • hersko 4 hours ago
    I have a flow where I extract text from a PDF with pdf-parse and then feed that to an AI for data extraction. If that fails, I convert it to a PNG and send the image for data extraction. This works very well and would presumably be far cheaper, as I'm generally sending text to the model instead of relying on images. Isn't just sending the images for OCR significantly more expensive?
    • unrahul 24 minutes ago
      I have seen this flow in what people at some startups call "Agentic OCR": it's essentially a coded control flow that tries pdf-parse (or a similarly inexpensive approach) first, and if the result falls below a quality threshold, falls back to screenshot-to-text extraction.
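The tiered flow described in this subthread can be sketched roughly as follows. This is an illustration, not anyone's actual code: `text_extractor` and `image_extractor` are hypothetical stand-ins for the pdf-parse call and the screenshot-plus-model step, and the `min_chars` threshold is an arbitrary example of a quality gate.

```python
def extract(pdf_bytes, text_extractor, image_extractor, min_chars=200):
    """Try cheap text extraction first; fall back to image OCR if the
    extracted text is too short to be trusted (e.g. a scanned PDF)."""
    text = text_extractor(pdf_bytes)
    if text and len(text.strip()) >= min_chars:
        return {"source": "text", "content": text}
    # Below threshold: likely a scan or image-only PDF, so render + OCR.
    return {"source": "image", "content": image_extractor(pdf_bytes)}
```

Real versions typically also gate on garbled-character ratio or model confidence, not just length.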
    • saaaaaam 3 hours ago
      There was an interesting discussion on here a couple of months back about images vs text, driven by this article: https://www.seangoedecke.com/text-tokens-as-image-tokens/

      Discussion is here: https://news.ycombinator.com/item?id=45652952

    • trollbridge 3 hours ago
      I always render the PDF to an image and OCR that, so I don't get odd problems from invisible text, and it also avoids being affected by any text inserted for SEO.
    • mimim1mi 3 hours ago
      By definition, OCR means optical character recognition. What kind of extraction methodology can work depends on the contents of the PDF: often the available PDFs are just scans of printed documents or handwritten notes. If machine-readable text is available, your approach is great.
  • sgc 3 hours ago
    How does this compare to dots.ocr? I got fantastic results when I tested dots.

    https://github.com/rednote-hilab/dots.ocr

    • mjrpes 3 hours ago
      Ocrbase is CUDA-only, while dots.ocr runs on vLLM, so it should support ROCm/AMD cards?
  • constantinum 2 hours ago
    What matters most is how well OCR and structured data extraction tools handle documents with high variation at production scale. In real workflows like accounting, every invoice, purchase order, or contract can look different. The extraction system must still work reliably across these variations with minimal ongoing tweaks.

    Equally important is how easily you can build a human-in-the-loop review layer on top of the tool. This is needed not only to improve accuracy, but also for compliance—especially in regulated industries like insurance.

    Other tools in this space:

    LLMWhisperer/Unstract (AGPL)

    Reducto

    Extend AI

    LlamaParse

    Docling

  • mechazawa 4 hours ago
    Is only Bun supported, or also regular Node?