Show HN: Resonate – real-time high temporal resolution spectral analysis

(alexandrefrancois.org)

67 points | by arjf 18 hours ago

9 comments

  • drmikeando 5 hours ago
    You can view this result as the convolution of the signal with an exponentially decaying sine and cosine.

    That is, `y(t') = integral_0^inf e^(k t) x(t' - t) dt`, where k is complex with negative real part.

    If you discretize that using simple integration and t' = i dt, t = j dt you get

        y_i = dt sum_j e^(k j dt) x_{i - j}
        y_{i+1} = dt sum_j e^(k j dt) x_{i+1 - j}
                = e^(k dt) (dt sum_j' e^(k j' dt) x_{i - j'}) + dt x_{i+1}
                = e^(k dt) y_i + dt x_{i+1}
    
    If we then scale this by some constant A, such that A y_i = z_i, we can write this as

        z_{i+1} = e^(k dt) z_i + A dt x_{i+1}

    Here `e^(k dt)` plays a similar role to (1-alpha), and `A dt` is similar to P alpha - the difference being that P changes over time, while A dt is constant.

    We can write `z_i = e^{w dt i} r_i`, where w is the imaginary part of k (kept as a purely imaginary number, so k - w is real and e^{w dt i} is a pure rotation):

       e^{w dt (i+1)} r_{i+1} = e^(k dt) e^{w dt i} r_i + A dt x_{i+1}
                 r_{i+1} = e^((k - w) dt) r_i + e^{-w dt (i+1)} A dt x_{i+1}
                         = (1-alpha) r_i + p_{i+1} x_{i+1}

    where p_{i+1} = e^{-w dt (i+1)} A dt = e^{-w dt} p_i, which is exactly the result from the Resonate web page.
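
    A quick numerical sanity check of that recursion (a Python sketch; the sample rate, frequency and decay below are made up for illustration, not taken from the Resonate code):

        import cmath, math, random

        sr = 48000.0                                # assumed sample rate
        dt = 1.0 / sr
        k = complex(-50.0, 2 * math.pi * 440.0)     # decay + oscillation, Re(k) < 0
        x = [random.uniform(-1.0, 1.0) for _ in range(2000)]

        # recursive form: z_i = e^(k dt) z_{i-1} + dt x_i
        a = cmath.exp(k * dt)
        z = 0j
        for xi in x:
            z = a * z + dt * xi

        # direct form: z_n = dt sum_j e^(k j dt) x_{n - j}
        n = len(x) - 1
        direct = dt * sum(cmath.exp(k * j * dt) * x[n - j] for j in range(len(x)))

        print(abs(z - direct))                      # agrees to rounding error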

    The neat thing about recognising this as a convolution integral is that we can use window shapes other than exponential decay - we can implement a box filter using only two states, or a triangular filter (a bit trickier, and it takes more states). While they're tricky to derive, they tend to run really quickly.
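
    For example, here is a rough sketch of the box-window case as a sliding-DFT style update - one complex accumulator plus access to the sample that falls out of the window (not necessarily the exact two-state formulation above, but the same idea):

        import cmath, math

        def sliding_dft_bin(x, N, k):
            # DFT bin k over the most recent N samples, updated one sample at a time
            w = cmath.exp(2j * math.pi * k / N)     # per-sample phase rotation
            acc = 0j
            out = []
            for n, xn in enumerate(x):
                leaving = x[n - N] if n >= N else 0.0
                acc = w * (acc + xn - leaving)
                out.append(acc)
            return out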

  • colanderman 14 hours ago
    Nice! I've used a homegrown CQT-based visualizer for a while for audio analysis. It's far superior to the STFT-based view you get from e.g. Audacity, since it is multiresolution, which is a better match to how we actually experience sound. I have for a while wanted to switch my tool to a gammatone-filter-based method [1] but I didn't know how to make it efficient.

    Actually I wonder if this technique can be adapted to use gammatone filters specifically, rather than simple bandpass filters.

    [1] https://en.wikipedia.org/wiki/Gammatone_filter
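
    For what it's worth, the gamma-shaped envelope t^(n-1) e^(-bt) of a gammatone filter can be approximated by cascading a few identical complex one-pole resonators of the kind discussed in this thread, so an adaptation seems plausible. A rough Python sketch (the parameter names and gain normalisation are mine, with no attempt at proper bandwidth calibration):

        import cmath, math

        def gammatone_like_bin(x, sr, freq, decay, order=4):
            # Cascade of `order` identical complex one-pole stages; the
            # impulse-response envelope is ~ t^(order-1) * exp(-decay * t),
            # i.e. gamma-shaped, as in a gammatone filter.
            dt = 1.0 / sr
            a = cmath.exp((-decay + 2j * math.pi * freq) * dt)  # shared pole
            g = 1.0 - abs(a)            # unit gain at the centre frequency, per stage
            state = [0j] * order
            out = []
            for xn in x:
                v = complex(xn)
                for s in range(order):
                    state[s] = a * state[s] + g * v
                    v = state[s]
                out.append(v)           # abs(v) ~ band envelope
            return out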

    • mofeien 11 hours ago
      If you already have the implementation for the CQT, wouldn't you just be able to replace the Morlet wavelet used in the CQT with the gammatone wavelet without much of an efficiency hit? I'm just learning about the gammatone filter, and it sounds interesting since it apparently better models human hearing.
  • james_a_craig 17 hours ago
    For some reason the value of Pi given in the C++ code is wrong!

    It's given in the source as 3.14159274101257324219 when the right value to the same number of digits is 3.14159265358979323846. Very weird. I noticed when I went to look at the C++ to see how this algorithm was actually implemented.

    https://github.com/alexandrefrancois/noFFT/blob/main/src/Res... line 31.

    • 2YwaZHXV 17 hours ago
      seems since it's a float it's only 32-bits, and the representation of both 3.14159274101257324219 and 3.14159265358979323846 is the same in IEEE-754: 0x40490fdb

      though I agree that it is odd to see, and not sure I see a reason why they wouldn't use 3.14159265358979323846
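
      A quick check (Python here, but any language with 32-bit floats gives the same bytes):

          import struct
          a = struct.pack('>f', 3.14159274101257324219)
          b = struct.pack('>f', 3.14159265358979323846)
          print(a.hex(), b.hex(), a == b)   # 40490fdb 40490fdb True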

      • james_a_craig 17 hours ago
        Yeah, it’s as if they wrote a program to calculate pi in a float and saved the output. Very strange choice given how many places the value of pi can be found.
        • arjf 17 hours ago
          Indeed... I honestly don't remember where or how I sourced the value, and why I did not use the "correct" one - I will correct it in the next release of the package. Thanks for pointing it out!
    • pvg 17 hours ago
      That is a very 'childhood exposure to 8 digit calculators' thing to notice.
      • james_a_craig 17 hours ago
        Childhood exposure to pi generation algorithms; the correct version above was from memory.
        • pvg 17 hours ago
          Close enough! The wrong 7 jumped out at me instantly although I didn't remember more than a few after.
  • dr_dshiv 14 hours ago
    Thanks for your contribution! Reminds me of Helmholtz resonators.

    I wrote this cross-disciplinary paper about resonance a few years ago. You may find it useful or at least interesting.

    https://www.frontiersin.org/journals/neurorobotics/articles/...

  • zevv 15 hours ago
    I might be mistaken, but I don't see how this is novel. As far as I know, this has been a proven DSP technique for ages, although it is usually only applied when a small number of distinct frequencies need to be detected - for example DTMF.

    When the number of frequencies/bins grows, it is computationally much cheaper to use the well-known FFT algorithm instead, at the price of needing to handle input data in blocks instead of "streaming".
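
    For reference, the classic recursive per-bin technique for a handful of tones such as DTMF is the Goertzel algorithm - a minimal Python sketch (block-based rather than streaming):

        import math

        def goertzel_power(samples, sr, freq):
            # Squared magnitude of one frequency bin over the block
            w = 2.0 * math.pi * freq / sr
            coeff = 2.0 * math.cos(w)
            s1 = s2 = 0.0
            for x in samples:
                s1, s2 = x + coeff * s1 - s2, s1
            return s1 * s1 + s2 * s2 - coeff * s1 * s2

    Scanning the eight DTMF frequencies then takes eight such filters per block.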

    • colanderman 14 hours ago
      The difference from FFT is this is a multiresolution technique, like the constant-Q transform. And, unlike CQT (which is noncausal), this provides a better match to the actual behavior of our ears (by being causal). It's also "fast" in the sense of FFT (which CQT is not).
      • zipy124 10 hours ago
        There is a multiresolution FFT, and other FFT variants based around sliding windows/SFFT techniques. The CQT can also be implemented extremely quickly, utilising FFTs and kernels or other methods, like in the librosa library (dubbed pseudo-CQT).

        I'm also not sure how this is causal? It has a time-weighted window (biasing the more recent sound), which is fairly novel, but I wouldn't call that causal.

        This is not to say I don't think this is cool; it certainly looks better than existing techniques like synchrosqueezing for pushing the limit of the Heisenberg uncertainty principle (technically, given ideal conditions, synchrosqueezing can outperform the principle, but only for a specific subset of signals).

  • phkahler 16 hours ago
    This is very much like doing a Fourier Transform without using recursion and the butterflies to reduce the computation. It would be even closer to that if a "moving average" of the right length was used instead of an IIR low-pass filter. This is something I've considered superior for decades but it does take a lot more computation. I guess we're there now ;-)
  • vessenes 11 hours ago
    Nice! Can any signals/AI folks comment on whether using this would improve vocoder outputs? The visuals look much higher res, which makes me think a vocoder using them would have more nuance. But, I'm a hobbyist.
  • waffletower 15 hours ago
    Curious if there is math available to show the gain scaling properties of this technique across the spectrum -- in other words, its frequency response. The system doesn't appear to be LTI, so I don't believe we can use the Z-transform to do this. Phase response would be important as well.
  • Mn7cB_3kL 14 hours ago
    [flagged]
    • zipy124 10 hours ago
      Is this an AI comment? Where in this work does it talk about 3D anything?