When using low-precision formats like float8 you usually have to upcast the activations to BF16 before normalising, so the normalisation layers take up a proportionally larger share of the compute as you move to lower precision. Replacing these layers would help reduce that compute cost significantly.
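A rough sketch of what that looks like in practice (assuming a PyTorch recent enough to have the float8_e4m3fn dtype; the hand-rolled RMSNorm here is just for illustration, not anyone's production code):

    import torch

    def fp8_rmsnorm(x_fp8: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
        # FP8 tensors support almost no arithmetic, so the activations are
        # upcast to BF16 just for the normalization step...
        x = x_fp8.to(torch.bfloat16)
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
        y = x * rms * weight
        # ...and cast back down before the next FP8 matmul.
        return y.to(torch.float8_e4m3fn)

    x = torch.randn(4, 4096, dtype=torch.bfloat16).to(torch.float8_e4m3fn)
    w = torch.ones(4096, dtype=torch.bfloat16)
    out = fp8_rmsnorm(x, w)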
If true, this is a very nice incremental improvement. It doesn't look like it meaningfully improves the capabilities of the model, but it is cheaper to compute than RMSNorm (which essentially all current state-of-the-art LLMs use), which means faster/cheaper training.
RMSNorm is pretty insignificant in terms of the overall compute in a transformer though -- usually the reduction work can be fused with earlier or later operations.
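Some back-of-the-envelope numbers for a single token (the hidden size and operation counts below are illustrative assumptions, not measurements):

    # Rough per-token operation counts for one transformer block at hidden size d.
    d = 4096

    # RMSNorm: square, mean, rsqrt, scale -> a handful of ops per element.
    rmsnorm_ops = 4 * d

    # Surrounding matmuls: Q/K/V/O projections (4 * 2*d*d) plus a 4d MLP
    # up- and down-projection (2 * 2*d*4d) -> about 24*d^2.
    matmul_ops = 4 * 2 * d * d + 2 * 2 * d * (4 * d)

    print(rmsnorm_ops / matmul_ops)  # ~4e-5: the normalization math itself is a rounding error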
RMSNorm acts like a barrier: no compute on the next network layer can start before all compute in the previous layer is done.
When splitting networks across multiple GPUs, this means you must wait for the slowest node and the longest latency.
As soon as you can remove most of these barriers, compute over non-latency-guaranteed networks becomes more practical, as does non-homogeneous compute (i.e. mixing different GPU models).
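To make the barrier concrete: with the hidden dimension sharded across devices (tensor parallelism), the norm's reduction needs a collective before any output element exists, while an elementwise op like tanh needs none. A hedged sketch, assuming an already-initialized torch.distributed process group (the function names are made up for illustration):

    import torch
    import torch.distributed as dist

    def sharded_rmsnorm(x_shard, weight_shard, full_dim, eps=1e-6):
        # Each rank holds only a slice of the hidden dimension, so the sum of
        # squares has to be combined across all ranks before ANY output element
        # can be produced -- this all_reduce is the synchronization barrier.
        sumsq = x_shard.pow(2).sum(dim=-1, keepdim=True)
        dist.all_reduce(sumsq, op=dist.ReduceOp.SUM)
        rms = torch.rsqrt(sumsq / full_dim + eps)
        return x_shard * rms * weight_shard

    def sharded_dyt(x_shard, alpha, weight_shard, bias_shard):
        # Purely elementwise: every rank proceeds independently, no collective,
        # no waiting on the slowest device or link.
        return weight_shard * torch.tanh(alpha * x_shard) + bias_shard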
Need to read the details, but removing the norm can be big. It’s always a pain to make sure that your network is normalized properly when trying new architectures. The tanh will likely still have other implications, since the norm is sometimes solving a conditioning problem, but IMO more alternatives are welcome.
Is it just me, or have they provided graphs of LN input against LN output when the tanh(a*x) is also followed by a weight and bias?
Surely you would want to compare the output of the LayerNorm without the weight and bias to get an impression of their similarity.
I guess it doesn't matter if the final result works, but I feel like looking at the bit that they are changing in isolation might provide better insight into what is happening.
Exactly, and that's what happens in LayerNorm too. So I figured the best basis for comparison would have been to leave that bit out when looking at their difference or similarity, because obviously the bits that share the same implementation will be the same.
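A quick way to run that comparison yourself (purely illustrative; the alpha here is picked by a crude grid search rather than taken from the paper):

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    d = 512
    x = torch.randn(10_000, d) * 3.0          # toy "activations"

    ln = F.layer_norm(x, (d,))                # LayerNorm WITHOUT weight/bias

    # Grid-search a single scale so tanh(alpha * x) best matches the bare norm,
    # then check how similar the two actually are.
    alphas = torch.linspace(0.05, 2.0, 200)
    errs = torch.stack([(torch.tanh(a * x) - ln).pow(2).mean() for a in alphas])
    best = alphas[errs.argmin()]
    approx = torch.tanh(best * x)

    corr = torch.corrcoef(torch.stack([ln.flatten(), approx.flatten()]))[0, 1]
    print(f"alpha={best.item():.3f}  correlation={corr.item():.3f}")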
Proper initialization of layers keeps gradient magnitudes from vanishing/exploding in deep networks. If you make sure the output of each layer has mean 0, std 1, the gradients will be reasonable as well, for example.
I recommend e.g. the original ResNet paper and its follow-up from Kaiming He et al. For a modern take on RNNs, read https://arxiv.org/abs/2303.06349 by DeepMind.
The point there is essentially that the largest eigenvalue (spectral radius) needs to be around 1, so that repeated application of a linear transformation doesn't cause the activations to grow or shrink.
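A small sketch of that point: repeatedly applying the same linear map makes activations explode or vanish unless its spectral radius is close to 1 (toy sizes, illustrative only):

    import torch

    torch.manual_seed(0)
    d, depth = 256, 50
    x = torch.randn(1, d)

    for target_radius in (0.5, 1.0, 2.0):
        W = torch.randn(d, d) / d ** 0.5
        radius = torch.linalg.eigvals(W).abs().max().item()  # spectral radius
        W = W * (target_radius / radius)                      # rescale it to the target
        h = x.clone()
        for _ in range(depth):
            h = h @ W                                         # 50 stacked linear maps, no norm
        print(f"spectral radius {target_radius:.1f} -> activation norm {h.norm().item():.1e}")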
Good question. That was an issue with tanh as the activation function, and before residual connections and normalization layers. Tanh as a normalization, but with other activations and residuals present, apparently is OK.
Batch norm and the other normalization layers are important for faster convergence because they force the model to focus on creating second- and higher-order nonlinearities: a simple shift in mean/std is normalized out, so the gradient does not point in a direction that would only change those properties of the output distribution.
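A tiny illustration of the "shift is normalized out" point (illustrative, using BatchNorm1d in training mode without its affine parameters): the output is unchanged when the whole batch is shifted or rescaled, so the gradient has no component pushing in those directions.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    bn = nn.BatchNorm1d(16, affine=False)   # just the normalization, no learnable affine
    bn.train()                              # use batch statistics

    x = torch.randn(64, 16)
    print(torch.allclose(bn(x), bn(x + 5.0), atol=1e-4))        # True: mean shift removed
    print(torch.allclose(bn(x), bn(3.0 * x + 5.0), atol=1e-4))  # True: scale removed too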
By incorporating DyT, Transformers without normalization can match or exceed the performance of their normalized counterparts, mostly without hyperparameter tuning.
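For reference, the DyT layer itself is only a few lines. A sketch of the drop-in replacement (parameter shapes mirror LayerNorm's affine; treat the default alpha init here as an assumption rather than gospel):

    import torch
    import torch.nn as nn

    class DyT(nn.Module):
        """Dynamic Tanh: an elementwise stand-in for LayerNorm/RMSNorm."""
        def __init__(self, dim: int, alpha_init: float = 0.5):
            super().__init__()
            self.alpha = nn.Parameter(torch.full((1,), alpha_init))  # learnable scalar
            self.weight = nn.Parameter(torch.ones(dim))              # like LN's gamma
            self.bias = nn.Parameter(torch.zeros(dim))               # like LN's beta

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # No reduction over the feature dimension -- purely elementwise.
            return self.weight * torch.tanh(self.alpha * x) + self.bias

    # Swap in wherever a norm layer would sit, e.g. DyT(4096) instead of nn.LayerNorm(4096).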