A common approach to getting around noise in the least significant digits of some computation is to use tolerances. I often have a difficult time picking a reasonable one for my code, but of course no single number will ever be an ideal solution. If I pick `1e-64`, for example, then that is quite a big tolerance when I'm working near zero, but it may well be a very small (or even non-existent) tolerance far away from zero. At some point the distance between two adjacent doubles is more than `1e-64`, and the whole exercise becomes pointless.

Does anyone have any experience/advice about switching instead to a tolerance system which 'scales' with the numbers involved? Say, given any floating-point number x, the tolerance would extend from x0 to x1, where x0 is the tenth neighbour of x to the left and x1 the tenth neighbour to the right?
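For what it's worth, the idea described above is essentially a "within N ULPs" comparison, and it can be sketched by stepping through adjacent doubles with `math.nextafter` (available in Python 3.9+). The function name `within_n_ulps` and the default of 10 neighbours are just illustrations of the scheme in the question, not an established API:

```python
import math

def within_n_ulps(x, y, n=10):
    """Return True if y lies within n adjacent doubles of x on either side.

    The tolerance interval [lo, hi] is built by walking n representable
    doubles down and up from x, so its width automatically scales with
    the magnitude of x (it is wider far from zero, narrower near zero).
    """
    lo = hi = x
    for _ in range(n):
        lo = math.nextafter(lo, -math.inf)  # step to the next double below
        hi = math.nextafter(hi, math.inf)   # step to the next double above
    return lo <= y <= hi
```

One consequence of this design is that the comparison behaves very differently near zero: the ten neighbours of `0.0` are subnormals, so two tiny values that differ only through rounding noise may still fail the test. Implementations based on ULP distance often add a small absolute tolerance as a fallback for that region.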