Hi @clement,
thanks for sending your comments. The focus of the new code is reliability, and there is a lot of work to do on that front alone.
So the goal right now is not to demonstrate how much faster the new code is; a better aim is to fix all the bugs of the previous implementation, or at least all the ones that really matter in everyday usage.
Overall performance is very important, and there is likely room for optimization, but optimization is a secondary goal at present. I still took some time to test your model. Here are the results. You can run the same code in today's WIP.
1.
Splitting the mesh with all cutters at once, in a single command call:
V6: 0:00:05.914000
V7 WIP: 0:00:04.312000
Better: V7 took 27% less time.
Tested with this script:
testMeshSplitCommand.py (271 Bytes)
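The timing wrapper in that script is roughly as follows (a minimal sketch: the _SelID-based macro and the selection prompts are my assumptions here; the attached testMeshSplitCommand.py is the authoritative version):

```python
import datetime
import rhinoscriptsyntax as rs

mesh = rs.GetObject("Mesh to split", rs.filter.mesh)
cutters = rs.GetObjects("Cutting meshes", rs.filter.mesh)

# Pre-select the mesh so _MeshSplit goes straight to the cutter prompt,
# then pick every cutter by id so the split runs in a single command call.
rs.UnselectAllObjects()
rs.SelectObject(mesh)
macro = "_MeshSplit " + " ".join("_SelID {}".format(c) for c in cutters) + " _Enter"

start = datetime.datetime.now()
rs.Command(macro, echo=False)
print(datetime.datetime.now() - start)  # prints a timedelta like 0:00:04.312000
```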
2.
In the script you wrote, I added a simple timer and this line:
options.MaxDegreeOfParallelism = 1
With this change, which disables parallelism, we can compare the old code and the new one fairly, because at present the old code does not run in parallel (the timer itself is sketched after the timings below).
This is what I get:
V6: 0:00:07.640000
V7 WIP: 0:00:08.214000
Worse: V6 took 7% less time.
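For anyone reproducing this, the timer is nothing more than datetime arithmetic around the existing loop (a minimal sketch; the loop body comes unchanged from your script):

```python
import datetime

start = datetime.datetime.now()
# ... the split loop from the original script runs here, unchanged,
# with options.MaxDegreeOfParallelism = 1 set beforehand ...
elapsed = datetime.datetime.now() - start
print(elapsed)  # a timedelta, printed in the 0:00:08.214000 format used above
```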
3.
If we turn multithreading back on in your script (-1 means no limit, so all available cores can be used), we get this:
options.MaxDegreeOfParallelism = -1
V6: 0:00:01.691000 (only 5 of the 11 pieces were split), which, extrapolated linearly, would put all 11 splits at roughly 0:00:03.720200.
V7 WIP: 0:00:02.190000 (all 11 pieces split)
Better: V7 took 41% less time than V6 would presumably have needed to reach the same result, although this is not a true apples-to-apples comparison.
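The V6 estimate is plain linear scaling (a sketch of the arithmetic, assuming each of the 11 splits costs about the same):

```python
v6_partial = 1.691                    # seconds V6 spent splitting 5 of 11 pieces
v6_estimated = v6_partial * 11 / 5.0  # ~3.7202 s for all 11, scaled linearly
v7_actual = 2.190                     # seconds V7 spent splitting all 11 pieces

saving = 1.0 - v7_actual / v6_estimated
print("{:.0%}".format(saving))        # ~41% less time for V7
```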
This is your test script, modified as mentioned:
testMeshSplitLoopNoParallel.py (1.4 KB)
Thanks,
Giulio
–
Giulio Piacentino
for Robert McNeel & Associates
giulio@mcneel.com