12 comments

  • alexlitz 49 minutes ago
    I made a blogpost on my submission (currently the top handwritten one at 36 parameters) https://alexlitzenberger.com/blog/building_a_minimal_transfo...
    • sowbug 21 minutes ago
      I ask this question as someone who can't do much more than confirm that your blog post is written in English by someone who knows math.

      Does this result suggest that if we had N clever humans manually building an LLM, they might come up with something as smart as a frontier model, but potentially 45 times smaller? (1644 / 36 ~= 45, N = very large, time not specified)

      • alexlitz 19 minutes ago
        I imagine getting things to be polysemantic in a way that doesn't interfere would lead to sublinear scaling. Also, there are smaller trained ones, so it would be more like 311/36 ~= 8.6.
        • sowbug 12 minutes ago
          Thanks!

          (I see the Trained Weights results now, thanks.)

  • delta_p_delta_x 23 minutes ago
    Very cool, but can I suggest the `add` CPU instruction instead? It supports 64-bit numbers, it's implemented directly in hardware, and there's no need to cross a PCIe interface into a beefy, power-hungry GPU and back again. And chances are it's cross-platform, because basically every ISA since the very first has had `add`.
  • amelius 3 hours ago
    > In short: if you can swap in a different set of weights and use the exact same inference code for a different task, your setup is legitimate. If the inference code is inseparable from the algorithm, it's not.

    I wonder why they don't just write the code themselves, so by design the focus can be on the model.

  • E-Reverance 2 hours ago
    Not sure how much this fits into the rules but I saw on twitter someone claimed 28 params : https://gist.github.com/SeuperHakkerJa/da3050739bea97aabd86e...
  • medi8r 2 hours ago
    You can do that in a single matmul of course.
    • hyperhello 2 hours ago
      So can you take an arbitrary transformer and somehow turn it into a compact set of low-power fast gates by some algorithm?
      • measurablefunc 2 hours ago
        I think you're misunderstanding the joke.
        • medi8r 1 hour ago
          Yes joke is:

              [A B]
          
          times

              [1]
              [1]
          
          is

              [A+B]
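          The joke, spelled out as a minimal NumPy sketch (illustrative only; any dot product with a ones vector sums its inputs):

              import numpy as np

              # A 1x2 input row [A B] times a 2x1 column of ones is [A + B]:
              a, b = 1234567890, 9876543210
              x = np.array([[a, b]], dtype=np.int64)   # [A B]
              w = np.array([[1], [1]], dtype=np.int64)  # the "weights": a column of ones
              result = x @ w                            # single matmul
              # int(result[0, 0]) equals a + b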
          • hyperhello 1 hour ago
            From context, then, I infer that a transformer is not composed of matrix multiplications, because otherwise it would simply be one that adds two 10-digit numbers.
            • medi8r 1 hour ago
              A transformer tokenizes its input, then does a bunch of matmuls and ReLUs set up in a certain way. It doesn't get to see the raw number (just like you don't when you look at 1+1; you need your visual cortex etc. first).
              • Lerc 2 minutes ago
                Notably, the difference is that ten digits is not the same thing as a number. One might say that turning them into a number might be the first step, but neural nets being what they are, they are liable to produce the correct result without bothering with a representation any purer than a list of digits.

                I guess the analogy there is that a 74ls283 never really has a number either and just manipulates a series of logic levels.
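                A quick sketch of the "digits, not numbers" point (hypothetical digit-level tokenizer, illustrative only):

                    # A transformer sees "1234+5678" as a token sequence, not as integers.
                    def tokenize(s: str) -> list[int]:
                        vocab = {ch: i for i, ch in enumerate("0123456789+=")}
                        return [vocab[ch] for ch in s]

                    tokens = tokenize("1234+5678")
                    # "1234" exists here only as four separate digit tokens;
                    # any notion of the number 1234 has to be reconstructed
                    # (or not) by the network itself.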

  • i000 1 hour ago
    Would it make sense to embed such single-purpose network with fixed weights within a LLM before pre-training?
  • Sophira 42 minutes ago
    I get that this is technically interesting, for certain, but the sheer amount of energy (and associated global-warming risk) needed to do something with >=99% accuracy that we've been able to do easily for decades with guaranteed 100% accuracy seems wasteful in the extreme.
    • Lerc 12 minutes ago
      What would be an acceptable amount of energy to spend on something that someone has done in a different manner before? Would you rather we stuck with all of the currently known ways to do things?

      Does this boil down to a condemnation of all scientific endeavours if they use resources?

      Would it change things if the people who did it enjoyed themselves? Would they have spent more energy playing a first person shooter to get the same degree of enjoyment?

      How do you make the calculation of the worth of a human endeavour? Perhaps the greater question is why are you making a calculation of the worth of a human endeavour.

    • coolsunglasses 39 minutes ago
      >Hacker News

      not any more, eh?

    • thereisnospork 39 minutes ago
      You need to recalibrate your sense of scale if you think that this is a geologically relevant usage of energy.
    • nradov 31 minutes ago
      Wait until they see the quantum computer it takes to factor the integer 15.
  • ks2048 2 hours ago
    So, hand-coded weights can do it with 36 params and 311 for trained weights - did anyone try the former architecture, but starting with random weights and learning?
    • alexlitz 44 minutes ago
      For one, the specific 36-parameter version is impossible without float64, so you might guess the corollary: it is not exactly amenable to being found by gradient descent. I think the question of how you can structure transformers (and neural nets in general) so that they can both very parsimoniously represent things like this and remain amenable to learning by gradient descent is an interesting one.
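      A quick way to see the float64 point: float32 can't even represent most 10-digit integers exactly, so any weight scheme that packs a 10-digit value into a single float needs the wider type. Illustrative sketch:

          import numpy as np

          n = 9999999999  # a 10-digit number
          as_f32 = np.float32(n)
          as_f64 = np.float64(n)
          # float32 has a 24-bit significand (~7 decimal digits), so the
          # 10-digit value gets rounded; float64's 53-bit significand
          # represents every integer up to 2**53 exactly.
          print(int(as_f32) == n)  # False: rounded to 10000000000
          print(int(as_f64) == n)  # True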
    • bitwize 21 minutes ago
      "Minksy, why did you close your eyes?"

      "So that the room will be empty."

  • 1over137 1 hour ago
    Now wrap it all in an Electron app!
  • munro 1 hour ago
    >=99% accuracy wtf?!?

    I was initially excited until I saw that, because it would reveal some sort of required local-minimum capacity; the further revelation that this was all vibe-coded with no arXiv paper makes me feel I should save my attn for another article.

  • MarcLore 1 hour ago
    The gap between 36 hand-coded params and 311 trained params is fascinating and honestly underappreciated. It mirrors something we see repeatedly in ML: gradient descent finds solutions in a fundamentally different region of parameter space than a human engineer would design.

    When you hand-code the weights, you're essentially implementing a known algorithm (carry-propagation) directly into the network topology. But trained networks often discover distributed representations that spread the computation across more parameters in ways that are harder to interpret but more robust to input distribution shifts.

    I'd be curious whether the 311-param trained model generalizes better to bases other than 10, or to addition with different digit counts than it was trained on. In my experience, the 'messier' learned solutions sometimes capture more structural regularity than the clean engineered ones, precisely because they aren't locked into a single algorithmic strategy.
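    The hand-coded algorithm referred to above is ordinary schoolbook carry propagation. A plain-Python sketch of the target behavior (the algorithm the weights implement, not the network itself):

        def add_digits(a: list[int], b: list[int]) -> list[int]:
            """Schoolbook addition on equal-length, little-endian digit lists, base 10."""
            out, carry = [], 0
            for da, db in zip(a, b):
                s = da + db + carry
                out.append(s % 10)   # digit of the sum
                carry = s // 10      # carry into the next column
            if carry:
                out.append(carry)
            return out

        # 1234 + 8766 = 10000, digits least-significant first:
        # add_digits([4, 3, 2, 1], [6, 6, 7, 8]) -> [0, 0, 0, 0, 1]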
