# Geometry "weight" in terms of Data size, manipulation time and time to OUTPUT/INPUT

I have some geometry, a latticed surface that takes time to create with paneling tools. Once created, I want to use it in the model without a large overhead when moving it with a slider, loading it from disk, etc. I made a low-definition mesh of it, which is totally acceptable, but the only way I have of knowing how much "lighter" it is, is by looking at the read/write time to disk. Is there some way to weigh geometry that's a little more robust?

There is obviously the apples/oranges problem of weighing NURBS surfaces against meshes, and of the processing time of NURBS objects vs. mesh objects, but this doesn't have to be precise, just an indicator of heavyweight geometry.

You can use a Data Dam component with some time interval, placed between the slider and the process, so that only the last change to the slider triggers the computation.
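The idea behind Data Dam is essentially debouncing: hold incoming values and only run the expensive computation once the input has stopped changing for a given interval. As a rough illustration outside Grasshopper, a minimal sketch in plain Python (the `Debouncer` class and its method names are hypothetical, not a Grasshopper API):

```python
import time

class Debouncer:
    """Run an expensive computation only after the input has been
    stable for `interval` seconds -- the idea behind a Data Dam."""

    def __init__(self, interval):
        self.interval = interval
        self.last_change = None
        self.pending = None

    def update(self, value):
        # Called on every slider change; just records the latest value.
        self.last_change = time.monotonic()
        self.pending = value

    def poll(self, compute):
        # Fire `compute` only once the value has stopped changing
        # for at least `interval` seconds; otherwise do nothing.
        if self.pending is not None and \
                time.monotonic() - self.last_change >= self.interval:
            result = compute(self.pending)
            self.pending = None
            return result
        return None

d = Debouncer(0.05)
d.update(10)
print(d.poll(lambda v: v * 2))  # too soon, prints None
time.sleep(0.06)
print(d.poll(lambda v: v * 2))  # input settled, prints 20
```

In Grasshopper itself you would just drop the Data Dam component on the wire; this only shows why it cuts the number of recomputations while dragging.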

The rest I didn't understand. You can estimate the weight of a geometry by adding up the bytes used by its construction elements. For a mesh, for example: number of points × 3 doubles × 8 bytes (a double is 8 bytes) + number of faces × 3 integers × 4 bytes (an int is 4 bytes). However, I don't see why you would need to do this.
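That rule of thumb can be written as a one-line function. A minimal sketch (the function name is made up for illustration; it assumes triangular faces, so quad faces would use 4 ints instead of 3):

```python
def mesh_weight_bytes(vertex_count, face_count):
    """Rough byte estimate of a triangle mesh:
    each vertex is 3 doubles of 8 bytes,
    each face is 3 ints of 4 bytes."""
    return vertex_count * 3 * 8 + face_count * 3 * 4

# e.g. a 100 x 100 vertex grid meshed into 2 * 99 * 99 triangles
print(mesh_weight_bytes(10_000, 19_602))  # -> 475224, about 464 KB
```

The same bookkeeping extends to NURBS (control points, knots, weights), which at least gives both object types a common unit for comparison, even if it ignores run-time overhead.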

It's a metric I'm calling "weight". It might also be called size, density, or KB. It's just to give an idea of which geometry is heavyweight, so that I can offload it or mesh it out at low density for downstream evaluation. Like doing a low-res render to get an idea of what the result will be without days of ray tracing.

I have no idea how you're approaching the problem. Anyway, you don't need the bytes: comparing the number of points/faces gives you an equivalent result, since all the other factors are constant.
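In other words, for ranking geometry by heaviness the raw element counts are a good enough proxy, because the per-element byte costs never change. A minimal sketch with made-up counts (the numbers below are illustrative, not measured from any real model):

```python
def relative_weight(vertex_count, face_count):
    # Per-element costs are constant, so summing raw counts
    # preserves roughly the same ordering as summing bytes.
    return vertex_count + face_count

# hypothetical counts: a dense paneling result vs. its low-res stand-in
dense = relative_weight(250_000, 498_002)
low_res = relative_weight(2_500, 4_802)
print(round(dense / low_res, 1))  # -> 102.4, the stand-in is ~100x lighter
```

A ratio like this is the "indicator of heavyweight geometry" asked for above, without worrying about the NURBS-vs-mesh byte accounting at all.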