Every document database transaction causes an Undo record to be created. It is the Undo stack that you perceive as memory leaking. Try running the ClearUndo command - you should see some of the memory recovered.
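If you want to do that from within a script rather than typing the command, something like this should work (the ClearUndoRecords call is the RhinoCommon route; treat the exact signature as an assumption and check the API docs):

import rhinoscriptsyntax as rs
import scriptcontext as sc

# Run the ClearUndo command by script...
rs.Command("_ClearUndo", echo=False)

# ...or clear the undo records directly through RhinoCommon
# (assumed signature - see RhinoDoc.ClearUndoRecords in the SDK docs)
sc.doc.ClearUndoRecords(True)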
Do you really need to copy the object (add it to the document) and then delete it later? First, writing to the document is very slow, and as you saw, all those document changes get recorded in memory.
If possible, it would be better to use “virtual geometry” - by actually diving down into RhinoCommon and doing your operations on curve/brep objects stored in memory but not added to the document - if you are going to delete them afterwards. Only add the final objects you want to the document when the script finishes.
However, working in RhinoCommon will add a layer of complexity to the script.
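As a rough sketch of what that could look like (untested, and assuming a copy/analyze workflow similar to the snippet further down in this thread):

import Rhino
import scriptcontext as sc

# Build and manipulate geometry purely in memory - nothing is added to the
# document, so no document transactions and no Undo records are created.
sphere = Rhino.Geometry.Sphere(Rhino.Geometry.Point3d(0, 0, 0), 25)
brep = sphere.ToBrep()

for i in range(1000):
    tmp = brep.DuplicateBrep()                        # in-memory copy
    tmp.Translate(Rhino.Geometry.Vector3d(i, 0, 0))   # move the copy
    # ... intersect / measure / analyze tmp here ...
    tmp.Dispose()                                     # release it when done

# Only add the final result to the document when the script finishes.
sc.doc.Objects.AddBrep(brep)
sc.doc.Views.Redraw()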
I ran some tests, as I was about to suggest ClearUndo as well; however, ClearUndo did not seem to clear or free any memory.
I ran the script snippet below, which caused the memory usage to rise from 3.7 GB to 7.8 GB:
import rhinoscriptsyntax as rs

# Add one sphere, then repeatedly copy and delete it in the document
s = rs.AddSphere([0, 0, 0], 25)
for i in range(1000000):
    tmp = rs.CopyObject(s)
    rs.DeleteObject(tmp)
However, no matter how often I ran ClearUndo afterwards, the memory was not freed. Even adding and removing objects, undoing and redoing, and running more ClearUndos did not free the memory.
Only closing the Rhino instance freed the memory.
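For reference, the process memory can also be read from inside the script via .NET; a rough sketch (untested):

import System

proc = System.Diagnostics.Process.GetCurrentProcess()
proc.Refresh()
# Report the current working set of the Rhino process in MB
print("Memory in use: {0:.0f} MB".format(proc.WorkingSet64 / (1024.0 * 1024.0)))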
I reported similar things a long time ago with various similar operations, for example moving objects in a file for an animation. I don’t know if it was ever resolved, but at the time Rhino would eat memory until it was all gone. Even after the script finished, ClearUndo did not free it up; only closing the instance did… I haven’t tested this in a long while, though.
Thanks for the reply. I’m not sure how to interpret this:
Does that mean that whenever Rhino or another application demands more memory than is available, Rhino will free up unused memory from its ‘pool’? Or could Rhino in fact ‘hijack’ this memory until it is closed?