43 comments

  • cluckindan 0 minutes ago
    "Without further knowledge, the calculator cannot know that a negative number is impossible (in other words, you can't have -5 civilizations, for example)."

    Not true. If there are no negative terms, the equation cannot have negative values.

  • elia_42 0 minutes ago
    Interesting. I like the notation and the histogram that comes out with the output. I also like the practical examples you gave (e.g. the application of the calculator to business and marketing cases). I will try it out with simple estimates in my marketing campaigns.
  • roughly 9 hours ago
    I like this!

    In the grand HN tradition of being triggered by a word in the post and going off on a not-quite-but-basically-totally-tangential rant:

    There are (at least) three areas here that are footguns with these kinds of calculations:

    1) 95% is usually a lot wider than people think - people take 95% as “I’m pretty sure it’s this,” whereas it’s really closer to “it’d be really surprising if it were not this” - by and large people keep their mental error bars too close.

    2) probabilities are rarely truly uncorrelated - call this the “Mortgage Derivatives” maxim. In the family example, rent is very likely to be correlated with food costs - so, if rent is high, food costs are also likely to be high. This skews the distribution - modeling the quantities as independent will lead to you being surprised at how improbable the actual outcome was (see the sketch after this list).

    3) In general, normal distributions are rarer than people think - they tend to require some kind of constraining factor to enforce them. We see them a bunch in nature because there are negative feedback loops all over the place, but once you leave the relatively tidy garden of Mother Nature for the chaos of human affairs, normal distributions get pretty abnormal.
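    On point 2, here's a quick numerical illustration (a sketch with made-up rent/food numbers): summing the same two quantities once independently and once correlated makes the correlated tail noticeably wider.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      rent = rng.normal(2000, 200, n)
      food_indep = rng.normal(800, 150, n)                           # uncorrelated
      food_corr = 800 + 0.6 * (rent - 2000) + rng.normal(0, 120, n)  # tracks rent
      for food in (food_indep, food_corr):
          total = rent + food
          print(round(np.percentile(total, 97.5)))  # correlated case is wider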

    I like this as a tool, and I like the implementation, I’ve just seen a lot of people pick up statistics for the first time and lose a finger.

    • jrowen 4 hours ago
      This jibes with my general reaction to the post, which was that the added complexity and difficulty of reasoning about the ranges actually made me feel less confident in the result of their example calculation. I liked the $50 result; you can tack on a plus-or-minus range but generally feel like you're about break-even. On the other hand, "95% sure the real balance will fall into the -$60 to +$220 range" feels like it's creating a false sense of having more concrete information, when you've really just added compounding uncertainties at every step (if we don't know that each one is definitely 95%, or the true min/max, we're just adding more guesses to be potentially wrong about). That's why I don't like the Drake equation: every step just compounds wild-ass guesses. Is it really producing a useful number?
      • kqr 4 hours ago
        It is producing a useful number. As more truly independent terms are added, the error grows with the square root while the point estimate grows linearly. In the aggregate, the error makes up less of the point estimate.

        This is the reason Fermi estimation works. You can test people on it, and almost universally they get more accurate with this method.

        If you got less certain of the result in the example, that's probably a good thing. People are default overconfident with their estimated error bars.
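        Here is a quick simulation of that square-root effect (a sketch with arbitrary uniform terms, nothing from the article):

          import numpy as np

          rng = np.random.default_rng(0)
          for n in [1, 4, 16, 64]:
              # n independent estimates, each uniform on 80..120 (point estimate 100)
              totals = rng.uniform(80, 120, size=(100_000, n)).sum(axis=1)
              print(n, round(totals.std() / totals.mean(), 4))  # shrinks ~ 1/sqrt(n)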

        • pests 2 hours ago
          > People are default overconfident with their estimated error bars.

          You say this, yet roughly in a top-level comment mentions that people keep their error bars too close.

          • bigfudge 1 hour ago
            They mean the same thing. The original comment pointed out that people's qualitative description and mental model of the 95% interval means they are overconfident: they think 95% means 'pretty sure I'm right' rather than 'it would be surprising to be wrong'.
    • btilly 8 hours ago
      I strongly agree with this, and particularly point 1. If you ask people to provide estimated ranges for answers that they are 90% confident in, people on average produce roughly 30% confidence intervals instead. Over 90% of people don't even get to 70% confidence intervals.

      You can test yourself at https://blog.codinghorror.com/how-good-an-estimator-are-you/.

      • Nevermark 6 hours ago
        From link:

        > Heaviest blue whale ever recorded

        I don't think estimation errors regarding things outside of someone's area of familiarity say much.

        You could ask a much "easier" question from the same topic area and still get terrible answers: "What percentage of blue whales are blue?" Or just "Are blue whales blue?"

        Estimating something often encountered but uncounted seems like a better test. Like how many cars pass in front of my house every day. I could apply arithmetic, soft logic and intuition to that. But that would be a difficult question to grade, given it has no universal answer.

        • kqr 4 hours ago
          I have no familiarity with blue whales but I would guess they're 1--5 times the mass of lorries, which I guess weigh like 10--20 cars which I in turn estimate at 1.2--2 tonnes, so primitively 12--200 tonnes for a normal blue whale. This also aligns with it being at least twice as large as an elephant, something I estimate at 5 tonnes.

          The question asks for the heaviest, which I think cannot be more than three times the normal weight, and probably no less than 1.3 times. That lands me at 15--600 tonnes using primitive arithmetic. The calculator in OP suggests 40--320.

          The real value is apparently 170, but that doesn't really matter. The process of arriving at an interval that is as wide as necessary but no wider is the point.

          Estimation is a skill that can be trained. It is a generic skill that does not rely on domain knowledge beyond some common sense.

        • yen223 6 hours ago
          I guess people didn't realise they are allowed to, and in fact are expected to, put very wide ranges for things they are not certain about.
    • pertdist 6 hours ago
      I did a project with non-technical stakeholders modeling likely completion dates for a big Gantt chart. Business stakeholders wanted probabilistic task completion times because some of the tasks were new and impractical to quantify with fixed times.

      Stakeholders really liked specifying work times as t_i ~ PERT(min, mode, max) because it mimics their thinking and handles typical real-world asymmetrical distributions.

      [Background: PERT is just a re-parameterized beta distribution that's more user-friendly and intuitive https://rpubs.com/Kraj86186/985700]
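      For the curious, sampling t_i ~ PERT(min, mode, max) is only a few lines of numpy (a sketch of the standard lambda=4 beta re-parameterization from the link above, not our exact model):

        import numpy as np

        def pert(minimum, mode, maximum, lam=4.0, size=10_000):
            # PERT: a beta distribution stretched onto [minimum, maximum];
            # lam=4 is the classic weighting of the mode
            a = 1 + lam * (mode - minimum) / (maximum - minimum)
            b = 1 + lam * (maximum - mode) / (maximum - minimum)
            return minimum + (maximum - minimum) * np.random.beta(a, b, size)

        days = pert(3, 5, 14)  # optimistic, most likely, pessimistic duration
        print(np.percentile(days, [5, 50, 95]))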

      • baq 48 minutes ago
        Arguably this is how it should always be done; fixed durations for any task are little more than wishful thinking.
    • youainti 9 hours ago
      > I’ve just seen a lot of people pick up statistics for the first time and lose a finger.

      I love this. I've never thought of statistics like a power tool or firearm, but the analogy fits really well.

  • NunoSempere 9 hours ago
    I have written similar tools

    - for command line, fermi: https://git.nunosempere.com/NunoSempere/fermi

    - for android, a distribution calculator: https://f-droid.org/en/packages/com.nunosempere.distribution...

    People might also be interested in https://www.squiggle-language.com/, which is a more complex version (or possibly <https://git.nunosempere.com/personal/squiggle.c>, which is a faster but much more verbose version in C)

    • NunoSempere 9 hours ago
      Fermi in particular has the following syntax

      ```
      5M 12M # number of people living in Chicago
      beta 1 200 # fraction of people that have a piano
      30 180 # minutes it takes to tune a piano, including travel time
      / 48 52 # weeks a year that piano tuners work for
      / 5 6 # days a week in which piano tuners work
      / 6 8 # hours a day in which piano tuners work
      / 60 # minutes to an hour
      ```

      Multiplication is implied as the default operation; fits are lognormal.

      • NunoSempere 9 hours ago
        Here is a thread with some fun Fermi estimates made with that tool, e.g., number of calories NK gets from Russia: https://x.com/NunoSempere/status/1857135650404966456

          900K 1.5M # tonnes of rice per year NK gets from Russia
          * 1K # kg in a tonne
          * 1.2K 1.4K # calories per kg of rice
          / 1.9K 2.5K # daily caloric intake
          / 25M 28M # population of NK
          / 365 # years of food this buys
          / 1% # as a percentage

      • kqr 3 hours ago
        Oh, this is very similar to what I have with Precel, just with less syntax. Thanks for sharing!
    • NunoSempere 8 hours ago
      Another tool in this spirit is <https://carlo.app/>, which allows you to do this kind of calculation on google sheets.
    • notpushkin 6 hours ago
      Would be a nice touch if Squiggle supported the `a~b` syntax :^)
    • antman 8 hours ago
      I tried the unsure calc and the android app and they seem to produce different results?
      • NunoSempere 8 hours ago
        The android app fits lognormals, and uses 90% rather than 95% confidence intervals. I think lognormals are a more parsimonious distribution for these kinds of estimates. One hint: per the central limit theorem, sums of independent variables tend to normals, which means products of positive variables tend to lognormals; and in the quick decompositions where these estimates are most useful, multiplications are more common.
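        That log-space version of the central limit theorem is easy to check numerically (a quick sketch with arbitrary factors):

          import numpy as np

          rng = np.random.default_rng(0)
          # multiply 30 independent positive factors together
          products = rng.uniform(0.5, 2.0, size=(100_000, 30)).prod(axis=1)
          # the log of a product is a sum of logs, so by the CLT it looks normal,
          # i.e. the product itself looks lognormal
          print(np.log(products).mean(), np.log(products).std())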
  • usgroup 11 minutes ago
    I think the SWI Prolog clpBNR package is the most complete interval arithmetic system. It also supports arbitrary constraints.

    https://github.com/ridgeworks/clpBNR

  • 97-109-107 27 minutes ago
    The histogram is great, nice work!

    I want to ask about adjacent projects - user interface libraries that provide input elements for providing ranges and approximate values. I'm starting my search around https://www.inkandswitch.com/ and https://malleable.systems/catalog/ but I think our collective memory has seen more examples.

  • dmos62 42 minutes ago
    Love it! I too have been toying with reasoning about uncertainty. I took a much less creative approach though and just ran a bunch of geometric Brownian motion simulations for my personal finances [0]. My approach has some similarity to yours, though much less general. It displays the (un)certainty over time (using percentile curves), which was my main interest. Also, man, the UI, presentation, explanations: you did a great job, pretty inspiring.

    [0] https://dmos62.github.io/personal-financial-growth-simulator...

  • usgroup 1 hour ago
    Interval/affine arithmetic are alternatives which do not make use of probabilities for these kinds of calculations.

    https://en.wikipedia.org/wiki/Interval_arithmetic

    I think arbitrary distribution choice is dangerous. You're bound to end up using lots of quantities that are integers, or positive only (for example). "Confidence" will be very difficult to interpret.

    Does it support constraints on solutions? E.g. A = 3~10, B = 4 - A, B > 0

  • kqr 4 hours ago
    I have made a similar tool but for the command line[1] with similar but slightly more ambitious motivation[2].

    I really like that more people are thinking in these terms. Reasoning about sources of variation is a capability not all people are trained in or develop, but it is increasingly important.[3]

    [1]: https://git.sr.ht/~kqr/precel

    [2]: https://entropicthoughts.com/precel-like-excel-for-uncertain...

    [3]: https://entropicthoughts.com/statistical-literacy

  • ttoinou 10 hours ago
    Would be nice to retransform the output into an interval / gaussian distribution

       Note: If you're curious why there is a negative number (-5) in the histogram, that's just an inevitable downside of the simplicity of the Unsure Calculator. Without further knowledge, the calculator cannot know that a negative number is impossible
    
    The Drake equation, or any equation multiplying probabilities, can also be seen in log space, where the uncertainty is on the scale of each log probability, and the final probability is the exponential of the sum of the log probabilities. Then we wouldn't have this negative issue.
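    Concretely, a sketch of that log-space approach (made-up factors, numpy for illustration):

      import numpy as np

      rng = np.random.default_rng(0)
      # each factor is lognormal: a normal placed on its log scale
      log_factors = [rng.normal(np.log(c), s, 100_000)
                     for c, s in [(2.0, 0.3), (0.1, 0.5), (50.0, 0.2)]]
      product = np.exp(np.sum(log_factors, axis=0))
      print(product.min() > 0)                    # always positive
      print(np.percentile(product, [2.5, 97.5]))  # a sensible 95% interval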
    • hatthew 9 hours ago
      The default example `100 / 4~6` gives the output `17~25`
      • ttoinou 25 minutes ago
        Amazing, thank you!
  • gregschlom 9 hours ago
    The ASCII art (well technically ANSI art) histogram is neat. Cool hack to get something done quickly. I'd have spent 5x the time trying various chart libraries and giving up.
    • smartmic 2 hours ago
      Here [1] is a nice implementation written in Awk. A bit rough around the edges, but could be easily extended.

      [1] https://github.com/stefanhengl/histogram

    • Retr0id 9 hours ago
      On a similar note, I like the crude hand-drawn illustrations a lot. Fits the "napkin" theme.
  • krick 10 hours ago
    It sounds like a gimmick at first, but looks surprisingly useful. I'd surely install it if it was available as an app to use alongside my usual calculator, and while I cannot quite recall a situation when I needed it, it seems very plausible that I'll start finding use cases once I have it bound to some hotkey on my keyboard.
  • thih9 13 hours ago
    Feature request: allow specifying the probability distribution. E.g.: ‘~’: normal, ‘_’: uniform, etc.
    • pyfon 7 hours ago
      Not having this feature is a feature—they mention this.
      • thih9 2 hours ago
        Not really, or at least not permanently; uniform distribution is mentioned in a github changelog, perhaps it’s an upcoming feature:

        > 0.4.0

        > BREAKING: x~y (read: range from x to y) now means "flat distribution from x to y". Every value between x and y is as likely to be emitted.

        > For normal distribution, you can now use x+-d, which puts the mean at x, and the 95% (2 sigma) bounds at distance d from x.

        https://github.com/filiph/unsure/blob/master/CHANGELOG.md#04...

  • OisinMoran 8 hours ago
    This is neat! If you enjoy the write-up, you might be interested in the paper “Dissolving the Fermi Paradox”, which goes even more in-depth into actually multiplying the probability density functions instead of using the common point estimates. It has the somewhat surprising result that we may just be alone.

    https://arxiv.org/abs/1806.02404

    • drewvlaz 7 hours ago
      This was quite a fun read, thanks!
  • Aachen 7 hours ago
    https://qalculate.github.io has been able to do this for as long as I've used it (only a couple of years, to be fair). I've got it on my phone, my laptop, even my server with apt install qalc. Super convenient; it supports everything from unit conversion to uncertainty tracking.

    The histogram is neat, I don't think qalc has that. On the other hand, it took 8 seconds to calculate the default (exceedingly trivial) example. Is that JavaScript, or is the server currently very busy?

  • djoldman 13 hours ago
    I perused the codebase but I'm unfamiliar with Dart:

    https://github.com/filiph/unsure/blob/master/lib/src/calcula...

    I assume this is a Monte Carlo approach? (Not to start a flamewar, at least for us data scientists :) ).

    • kccqzy 13 hours ago
      Yes it is.
      • porridgeraisin 12 hours ago
        Can you explain how? I'm an (aspiring)
        • kccqzy 12 hours ago
          I didn't peruse the source code. I just read the linked article in its entirety and it says

          > The computation is quite slow. In order to stay as flexible as possible, I'm using the Monte Carlo method. Which means the calculator is running about 250K AST-based computations for every calculation you put forth.

          Therefore I conclude Monte Carlo is being used.

        • constantcrying 11 hours ago
          Lines 19 to 21 should be the Monte Carlo sampling algorithm. The implementation is maybe a bit unintuitive, but apparently he creates a function from the expression in the calculator; calling that function gives a random value drawn from that expression's distribution.
        • hawthorns 9 hours ago
          It's dead simple. Here is a simplified version that returns the quantiles for '100 / 2 ~ 4'.

            import numpy as np

            def monte_carlo(formula, iterations=100000):
                # evaluate the formula many times, then summarize with quantiles
                res = [formula() for _ in range(iterations)]
                return np.percentile(res, [0, 2.5, *range(10, 100, 10), 97.5, 100])

            def uncertain_division():
                # '2 ~ 4' modeled as a uniform draw between 2 and 4
                return 100 / np.random.uniform(2, 4)

            monte_carlo(uncertain_division, iterations=100000)
  • marcodiego 9 hours ago
    I put "1 / (-1~1)" and expected something around - to + infinty. It instead gave me -35~35.

    I really don't known how good it is.

    • NunoSempere 8 hours ago
      I'm guessing this is not an error. If you divide 1/normal(0,1), the full distribution would range from -inf to inf, but the 95% output doesn't have to.
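      A quick check (my sketch, assuming x~y maps to a normal with mean (x+y)/2 and sigma (y-x)/4):

        import numpy as np

        # -1~1 read as a normal with mean 0 and sigma 0.5
        samples = 1 / np.random.normal(0, 0.5, 250_000)
        print(np.percentile(samples, [2.5, 97.5]))  # roughly [-32, 32], finite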
      • SamBam 7 hours ago
        I don't quite understand, probably because my math isn't good enough.

        If you're treating -1~1 as a normal distribution, then it's centered on 0. If you're working out the answer using a Monte Carlo simulation, then you're going to be testing out different values from that distribution, right? And aren't you going to be more likely to test values closer to 0? So surely the most likely outputs should be far from 0, right?

        When I look at the histogram it creates, it varies by run, but the most common output seems generally closest to zero (and sometimes is exactly zero). Wouldn't that mean that it's most frequently picking values closest to -1 or 1 in the denominator?

        • pyfon 7 hours ago
          Only 1 percent of values would end up being 100+ on a uniform distribution.

          For normal it is higher but maybe not much more so.

          • etbebl 5 hours ago
            OK, but do we necessarily just care about the central 95% range of the output? This calculation has the weird property that values in the tails of the input correspond to values in the middle of the output, and vice versa. If you follow the intuition that the range you specify in the input corresponds to the values you expect to see, the corresponding outputs would really include -inf and inf.

            Now I'm realizing that this doesn't actually work, and even in more typical calculations the input values that produce the central 95% of the output are not necessarily drawn from the 95% CIs of the inputs. Which is fine and makes sense, but this example makes it very obvious how arbitrary it is to just drop the lowermost and uppermost 2.5%s rather than choosing any other 95/5 partition of the probability mass.

          • lswainemoore 5 hours ago
            That may be true, but if you look at the distribution it puts out for this, it definitely smells funny. It looks like a very steep normal distribution, centered at 0 (ish). Seems like it should have two peaks? But maybe those are just getting compressed into one because of the resolution of the buckets?
  • chacha21 2 hours ago
    Chalk also supports uncertainty: https://chachatelier.fr/chalk/chalk-features.php (combined with arbitrarily long numbers and interval arithmetic)
  • pvg 13 hours ago
    Smol Show HN thread a few years ago https://news.ycombinator.com/item?id=22630600
  • danpalmer 3 hours ago
    This is awesome. I used Causal years ago to do something similar, with perhaps slightly more complex modelling, and it was great. Unfortunately the product was targeted at high paying enterprise customers and seems to have pivoted into finance now, I've been looking for something similar ever since. This probably solves at least, err... 40~60% of my needs ;)
  • nritchie 8 hours ago
    Here (https://uncertainty.nist.gov/) is another similar Monte Carlo-style calculator designed by the statisticians at NIST. It is intended for propagating uncertainties in measurements and can handle various different assumed input distributions.
  • your_challenger 3 hours ago
    Very cool. This can also be used for LLM cost estimation. Basically any cost estimation I suppose. I use cloudflare workers a lot and have a few workers running for a variable amount of time. This could be useful to calculate a ball park figure of my infra cost. Thank you!
  • omoikane 9 hours ago
    If I am reading this right, a range is expressed as a distance between the minimum and maximum values, and in the Monte Carlo part a number is generated from a uniform distribution within that range[1].

    But if I just ask the calculator "1~2" (i.e. just a range without any operators), the histogram shows what looks like a normal distribution centered around 1.5[2].

    Shouldn't the histogram be flat if the distribution is uniform?

    [1] https://github.com/filiph/unsure/blob/123712482b7053974cbef9...

    [2] https://filiph.github.io/unsure/#f=1~2

    • hatthew 9 hours ago
      Under the "Limitations" section:

      > Range is always a normal distribution, with the lower number being two standard deviations below the mean, and the upper number two standard deviations above. Nothing fancier is possible, in terms of input probability distributions.
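      In other words (a minimal sketch of that mapping, not the calculator's actual code):

        import numpy as np

        def sample_range(low, high, n=250_000):
            # x~y: mean midway between the bounds, bounds at 2 sigma
            mean = (low + high) / 2
            sigma = (high - low) / 4
            return np.random.normal(mean, sigma, n)

        print(np.percentile(sample_range(1, 2), [2.5, 50, 97.5]))  # ~[1.0, 1.5, 2.0]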

  • kccqzy 13 hours ago
    I actually stumbled upon this a while ago from social media and the web version has a somewhat annoying latency, so I wrote my own version in Python. It uses numpy so it's faster. https://gist.github.com/kccqzy/d3fa7cdb064e03b16acfbefb76645... Thank you filiph for this brilliant idea!
  • NotAnOtter 2 hours ago
    This is super cool.

    It seems to break for ranges including 0 though

    100 / -1~1 = -3550~3500

    I think the most correct answer here is -inf~inf

  • nkron 4 hours ago
    Really cool! On iOS there's a noticeable delay when clicking the buttons and clicking the backspace button quickly zooms the page so it's very hard to use. Would love it in mobile friendly form!
  • constantcrying 11 hours ago
    An alternative approach is using fuzzy numbers. If evaluated with interval arithmetic, you can do very long calculations involving uncertain numbers very fast and with strong mathematical guarantees.

    It would especially outperform the Monte Carlo approach drastically.
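    A toy sketch of plain interval arithmetic in Python (not any particular fuzzy-number library):

      from dataclasses import dataclass

      @dataclass
      class Interval:
          lo: float
          hi: float

          def __mul__(self, other):
              # a product's bounds are among the endpoint products
              p = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
              return Interval(min(p), max(p))

          def __truediv__(self, other):
              # division is undefined when the divisor interval straddles zero
              if other.lo <= 0 <= other.hi:
                  raise ZeroDivisionError("divisor interval contains 0")
              return self * Interval(1 / other.hi, 1 / other.lo)

      print(Interval(100, 100) / Interval(4, 6))  # Interval(lo=16.66..., hi=25.0)

    Note how it refuses to divide by an interval containing zero, which is exactly the 1/(-1~1) case debated elsewhere in this thread.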

    • sixo 10 hours ago
      This assumes the inputs are uniform distributions, or perhaps normals depending on what exactly fuzzy numbers mean. M-C is not so limited.
      • constantcrying 9 hours ago
        No. It assumes the numbers aren't random at all.

        Although fuzzy-number can be used to model many different kinds of uncertainties.

  • explosion-s 6 hours ago
    I made one that's much faster because it modifies the normal distribution directly instead of drawing thousands of samples: https://gistpreview.github.io/?757869a716cfa1560d6ea0286ee1b...
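    For sums of independent normals that trick is exact; products already need an approximation, which also hints at why some operations are hard to support this way (a sketch of the idea, not the linked tool's code):

      import math

      def add(m1, s1, m2, s2):
          # sum of independent normals is exactly normal: means add, variances add
          return m1 + m2, math.hypot(s1, s2)

      def mul(m1, s1, m2, s2):
          # a product is NOT normal; first-order (delta method) approximation
          return m1 * m2, math.hypot(m1 * s2, m2 * s1)

      print(add(5.0, 0.5, 3.0, 0.25))  # (8.0, ~0.559)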
    • etbebl 5 hours ago
      This is more limited. I just tested and for one example, exponentiation seems not to be supported.
  • vortico 9 hours ago
    Cool! Some random requests to consider: Could the range x~y be uniform instead of 2 std dev normal (95.4%ile)? Sometimes the range of quantities is known. 95%ile is probably fine as a default though. Also, could a symbolic JS package be used instead of Monte-Carlo? This would improve speed and precision, especially for many variables (high dimensions). Could the result be shown in a line plot instead of ASCII bar chart?
  • timothylaurent 13 hours ago
    This reminds me of https://www.getguesstimate.com/ , a probabilistic spreadsheet.
  • shubhamintech 1 hour ago
    Love it! Gonna use this instead of calculating my own extremes now.
  • ashu1461 8 hours ago
    So is it 250k calculations for every approximation window? I guess it will only be able to calculate up to 3-4 approximations comfortably?

    Any reason why it was kept at 250k and not a lower number like 10k?

  • lorenzowood 5 hours ago
    See also Guesstimate https://getguesstimate.com. Strengths include treating label and data as a unit, a space for examining the reasoning for a result, and the ability to replace an estimated distribution with sample data => you can build a model and then refine it over time. I'm amazed Excel and Google Sheets still haven't incorporated these things, years later.
    • montag 5 hours ago
      Thank you, I would have mentioned this myself, but forgot the name of it.
  • alexmolas 11 hours ago
    Is this the same as error propagation? I used to do a lot of that during my physics degree.
    • constantcrying 10 hours ago
      It doesn't propagate uncertainty through the computation, but rather treats the expression as a single random variable.
  • alex-moon 9 hours ago
    > The UI is ugly, to say the least.

    I actually quite like it. Really clean, easy to see all the important elements. Lovely clear legible monospace serif font.

  • po1nt 1 hour ago
    I love it! Now I need it in every calculator
  • throwanem 12 hours ago
    I love this! As a tool for helping folks with a good base in arithmetic develop statistical intuition, I can't think offhand of what I've seen that's better.
  • rao-v 13 hours ago
    This is terrific and it’s tempting to turn into a little Python package. +1 for a notation like ~20,2 to mean 18~22.
  • croisillon 11 hours ago
    I like it and I skimmed the post, but I don't understand why the default example 100 / 4~6 has a median of 20? There is no way of knowing why the range is between 4 and 6.
    • constantcrying 10 hours ago
      The chance of 4~6 being less than 5 is 50%, and the chance of it being greater is also 50%. Since division by a positive number preserves order, the median of 100/4~6 has to be 100/5 = 20.

      >there is no way of knowing why the range is between 4 and 6

      ??? There is. It is the ~ symbol.

    • perching_aix 10 hours ago
      How do you mean?
  • vessenes 11 hours ago
    Cool! Are all ranges considered Poisson distributions?
    • re 10 hours ago
      No:

      > Range is always a normal distribution, with the lower number being two standard deviations below the mean, and the upper number two standard deviations above. Nothing fancier is possible, in terms of input probability distributions.

  • chris_wot 9 hours ago
    There's an amazing scene in "This is Spinal Tap": Nigel Tufnel has been brainstorming a set piece where Stonehenge would be lowered from above onto the stage during their performance, and he does some back-of-the-envelope calculations which he gives to the set designer. Unfortunately, he mixes up the symbol for feet with the symbol for inches, leading to the following:

    https://www.youtube.com/watch?v=Pyh1Va_mYWI

  • rogueptr 21 hours ago
    Brilliant work, polished UI. Although it sometimes gives wrong ranges for equations like 100/1~(200~2000).
    • thih9 13 hours ago
      Can you elaborate? What is the answer you’re getting and what answer would you expect?
    • BrandoElFollito 13 hours ago
      How do you process this equation? 100 divided by something from one to...?
      • notfed 10 hours ago
        > 100 / 4~6

        Means "100 divided by some number between 4 and 6"

        • BrandoElFollito 2 hours ago
          Yes, but this is not what OP has. Their formula is 100 / 1~(200~2000), with a double tilde.
        • throwanem 9 hours ago
          "...some number with a 95% probability of falling between 4.0 and 6.0 inclusive," I believe.
  • BOOSTERHIDROGEN 5 hours ago
    awesome