I added a small GH script.
In the attached solution the parallel version runs way slower, but in my mind it should be faster if it used all the cores in parallel for each surface calculation.
Can someone explain to me why the single-threaded performance is so much quicker?
I am running on an Intel Core i7-11800H (laptop).
I didn’t know about this plugin. I’m checking it out now, but it’s a bit of a workaround to get contours. Still, my question remains: why is GH calculating this way? Maybe I should check out the Rhino 8 Beta; I think they changed some things there.
There are various reasons why this could happen. I’m reading this on my mobile, so I couldn’t investigate your example, but just to name a few things: often it’s not the operation that takes time, but piping the data. If you split an operation to run in parallel, you need to make the work atomic. You copy data into separate collections (allocating memory), you start and await all the threads, and you might lock critical pieces of code to make them thread-safe. You also lose optimizations in the underlying functionality, such as caching. And you might simply not be doing the same thing, just in parallel: 12 s vs 0.1 s is quite a difference. Are you sure both versions do the same work here? Have you tried grafting the data?
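Just to illustrate the overhead point with a standalone sketch (this is not your GH script; the toy workload and the names are made up): when each item of work is cheap and the parallel body also takes a lock, the cost of scheduling the work and contending on the lock can easily outweigh the computation itself.

```csharp
// Minimal console sketch, assuming a cheap per-item workload:
// a plain loop vs. Parallel.For with an (unnecessary) lock around the body.
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class ParallelOverheadDemo
{
    static void Main()
    {
        const int n = 1_000_000;
        double[] results = new double[n];
        object gate = new object();

        // Single-threaded baseline: tight loop, no allocations, no locking.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++)
            results[i] = Math.Sin(i) * Math.Cos(i);
        sw.Stop();
        Console.WriteLine($"serial:          {sw.ElapsedMilliseconds} ms");

        // Parallel version with a lock around the cheap body: the threads spend
        // most of their time queuing work and waiting on the lock, not computing.
        sw.Restart();
        Parallel.For(0, n, i =>
        {
            lock (gate)
            {
                results[i] = Math.Sin(i) * Math.Cos(i);
            }
        });
        sw.Stop();
        Console.WriteLine($"parallel + lock: {sw.ElapsedMilliseconds} ms");
    }
}
```

On a workload like this the locked parallel loop is typically slower than the plain loop; parallelism only pays off once each chunk of work is heavy enough to amortize the scheduling and synchronization cost.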