The transformation likely makes the Brep invalid, and since invalid breps are a crash-risk they are removed. The problem is figuring out a way to *not* get invalid breps in the first place.

So yes, we’ll need those breps and the transforms you’re applying to figure out what’s going wrong where, and whether or not it’s solvable. It might not be. Numbers very far away from zero have way fewer decimal places available than numbers close to zero. If you have two small numbers which are definitely different and you add some very big number to both of them, they may either become identical or their difference may balloon, depending on rounding.
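You can see both failure modes with ordinary 64-bit doubles. A quick Python sketch (the specific values are just illustrative; at a magnitude of 10^{16} consecutive doubles are 2.0 apart):

```python
# Two small numbers that are definitely different.
a, b = 0.25, 0.5
big = 1e16  # at this magnitude, consecutive doubles are 2.0 apart

# Both collapse onto the same double: the difference vanishes.
print((a + big) - (b + big))   # 0.0, even though b - a == 0.25

# With a different pair, the difference balloons instead of vanishing.
c, d = 0.9, 1.1
print((d + big) - (c + big))   # 2.0, even though d - c is only ~0.2
```

Which way it goes depends entirely on where the exact sums land relative to the representable doubles, i.e. on rounding.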

So any time you move something far away you lose a whole bunch of accuracy in your coordinates. This effect may compound if you do it repeatedly (it depends on the steps in between the large transforms).
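A minimal sketch of a single round trip, assuming a translation of a billion units (the distance is made up, but the effect is real for any large offset):

```python
# A coordinate with lots of significant digits near the origin.
x = 1.2345678901234567

# "Move" it a billion units away and back again.
moved_out = x + 1e9
moved_back = moved_out - 1e9

print(moved_back == x)   # False: the round trip was not lossless
print(abs(moved_back - x))   # tiny, but nonzero: digits were rounded away
```

The error enters when `x + 1e9` is rounded to the coarser grid of doubles near 10^9; subtracting 10^9 afterwards can't bring the lost digits back.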

Consider this as a parallel:

Assume that numbers in the computer are written in the form 0.xxxx \cdot 10^{\pm ee}, where you only get to pick the x and e digits. The smallest possible positive number is thus 0.0001 \cdot 10^{-99}, and the *next* smallest positive number is 0.0002 \cdot 10^{-99}. The distance between these numbers is absolutely tiny. Like, we can fit more of those distances into a Planck length than you can fit Planck lengths into a lightyear.

But when we’re trying to represent a big number like 23,\!756, we already fail. The best we can do is 0.2376 \cdot 10^{+5}, which rounds away the last digit. The distance between consecutive numbers at this remove from zero is ten. You can’t even increment by whole numbers, only by multiples of ten.

Now imagine adding a small number like 0.2500 \cdot 10^{0} to a big number like 0.6000 \cdot 10^{+4}. Should be 6,\!000\frac{1}{4}, right? Wrong. It’s just 6,\!000, because there simply isn’t room for the least significant digits.
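You can reproduce this toy number system directly with Python's `decimal` module by limiting the working precision to four significant digits (a sketch of the two examples above; the module isn't limited to four digits, we're just configuring it that way):

```python
from decimal import Decimal, getcontext

getcontext().prec = 4   # four significant digits, like the 0.xxxx model

# 23,756 can't survive arithmetic: the last digit is rounded away.
print(Decimal("23756") + Decimal("0"))    # 2.376E+4

# Adding a small number to a big one: the small one disappears entirely.
print(Decimal("6000") + Decimal("0.25"))  # 6000
```

Real doubles behave the same way, just in binary and with about 15–16 significant decimal digits instead of four.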