Multithreading

A number of commands are clearly not multi-threaded. I am working with a very large model created by someone else, converted from mesh to NURBS, and am now using MergeAllFaces. I have seen an earlier post (MergeAllFaces - Why so Slow?) about this, but it was a while ago, before the Mac version was released, and the feature request has no comments or progress updates.

My question, though, goes beyond MergeAllFaces. Even moving a large number of objects (10,000+) to a new layer takes a very long time while using only a single core. Surely, if any process is ideally suited to multithreading, it is one like that: each object just has its properties updated, with no knock-on effects on other objects in the same selection, so it is a massive collection of simple, parallel tasks.
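To illustrate the shape of work I mean, here is a minimal sketch. The `set_layer` function is a hypothetical stand-in for whatever Rhino does internally, not actual Rhino code, and I am ignoring Python's GIL since this is only about the structure of the task:

```python
from concurrent.futures import ThreadPoolExecutor

def set_layer(obj, layer_index):
    # Hypothetical stand-in for the real attribute update; each object
    # is independent, so no task needs to coordinate with any other.
    obj["layer"] = layer_index

def move_to_layer(objects, layer_index, workers=4):
    # Split the independent per-object updates across a pool of workers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda o: set_layer(o, layer_index), objects))

objects = [{"layer": 0} for _ in range(10000)]
move_to_layer(objects, 5)
print(objects[0])  # {'layer': 5}
```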

Can there just be a wholesale update to make more use of the multiple cores that almost everyone now has? I appreciate that some commands might not benefit, or might even be impossible to multithread, but it feels like there is a lot of 'low-hanging fruit' that could be implemented fairly easily.


It's my presumption that you would only want us to use multiple cores when it provides a significant speed increase. Is that correct?

Hi Dan, yes that is correct. We are architects and as such deal with very large models, often with parts modelled by others and not necessarily the cleanest geometry.

I have recently been working on one particularly large example where MergeAllFaces was taking half an hour per operation. With one model I didn't even get that far, because MeshToNurb took over that amount of time before I forced it to quit; I needed to get on with something else.

I am using a MacBook Pro Retina (mid-2012 model) with a 2.7 GHz quad-core CPU and 16 GB of RAM.

Looking at Activity Monitor, it is quite frustrating to see only a quarter of the machine's processing power being used for what I think are highly parallel tasks. Perhaps I don't fully understand the maths behind it?

Thanks
Robin

Hi Robin,

Out of interest, why is the other software using mesh representations when a polysurface would be a better option? What file format is being used for the exchange?

The best option, of course, is to avoid the meshing in the first place, but this isn't always possible.

I've been working on some routines to merge faces in IFC files outside of Rhino, including conversion to lightweight extrusions where possible (which will perform much better in Rhino).

It's possible I could thread these conversions outside of Rhino, but I still need to improve the core functions first.
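For what it's worth, the heart of the merge test is a coplanarity check along these lines. This is only a sketch with made-up helper names (nothing from my actual code), assuming each face carries a unit normal and a point on the face:

```python
def coplanar(n1, p1, n2, p2, tol=1e-9):
    # Two faces are merge candidates when their unit normals are parallel
    # and both faces lie in the same plane.
    dot = sum(a * b for a, b in zip(n1, n2))
    if abs(abs(dot) - 1.0) > tol:
        return False  # normals are not parallel
    # Signed distance from p2 to the plane through p1 with normal n1.
    dist = sum(n * (b - a) for n, a, b in zip(n1, p1, p2))
    return abs(dist) <= tol

# Two triangles on the same flat wall:
print(coplanar((0, 0, 1), (0, 0, 2.5), (0, 0, 1), (1, 4, 2.5)))  # True
```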

Cheers,

Jon

When doing architectural competitions you are often provided with geometry. The current one is a mix of .ifc and, indirectly, SketchUp files. I tried to use the IFC import function of Geometry Gym, but the model appeared to be too complex, and when the import did complete the geometry was not at all what was expected.

In such a situation, there is little opportunity to ask the competition organiser to provide the data in alternative formats. Otherwise, I agree - avoiding meshes seems to be the best route to happiness!

Robin-

Thanks for the clarification. Though I am somewhat confused by this...

If all the processors were in use but it took the same amount of time to perform the calculation, I am presuming this would not be considered an improvement. Am I wrong about that?

I believe I understand what you are asking; I just want to better understand what would be considered an improvement. For example, imagine that the code for a calculation you perceive as ripe for multithreading is 2x as fast on a single speedy processor, but when multithreading is employed, it is only 1.2x as fast on the same machine. (Since we're in this imaginary world, I might say: "I want to pick how many processors I use for the task.") But many tasks become slower when there are more cooks in the kitchen. Let's take a specific example...

We are currently in the long process of overhauling Make2D over in Rhino for Windows land (we hope to bring these improvements to Rhino for Mac too, but that's beside the point). We've done lots of work on this notoriously long-running command and tried many parallelization strategies. One of the strategies does produce faster results using multithreading on multiple processors, but the speedup is certainly not 4x on 4 processors, and it is not faster under all conditions. Make2D is probably a special case... but I do think there are some commands - perhaps MergeAllFaces is one of them - that are more embarrassingly parallel than Make2D.
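Part of the reason the speedup is never 4x on 4 processors is Amdahl's law: if only a fraction p of a command's work can run in parallel, the best possible speedup on n cores is 1 / ((1 - p) + p / n). A quick illustration (the fractions below are invented for the example, not measured Make2D numbers):

```python
def amdahl_speedup(p, n):
    # p: fraction of the work that can be parallelized (0.0 to 1.0)
    # n: number of cores
    return 1.0 / ((1.0 - p) + p / n)

# Even with 80% of the work parallelized, 4 cores give 2.5x, not 4x:
print(amdahl_speedup(0.8, 4))  # 2.5
print(amdahl_speedup(0.5, 4))  # 1.6
```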

I want these commands to be faster and produce better results. (From my perspective, I don't care if they use 1 processor or 10.) Do we want the same thing?

Dan

Of course we want the same thing. My only way to know whether something is multithreaded is to look at CPU utilisation. When something is massively parallel (like changing the layer of 12k objects) and I'm sitting there waiting for it to finish, it is of course natural to wonder why the other three cores are not helping with the load.

MergeAllFaces, MeshToNurb, changing the layer of objects, etc. There are loads of similar functions that can take a very long time when the command operates on multiple objects.

I am slightly confused as to why you thought I might be after anything else!

Robin

Maybe it's not as parallel as you think. There is one object table that contains all the objects in the file, and I'm not sure you can write to that one table in a massively parallel way... Maybe I'm wrong about that, though.

I do agree that when something like MAF is running on multiple objects it could go parallel; the problem is that it's already really slow when used on just one semi-complex object, and that part can't necessarily be parallelized either, as far as I know.

MeshToNurb might also be a candidate: you can certainly create all the brep faces in parallel, but you still have to join them afterwards into one polysurface, and that's probably the slowest part.
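Roughly this shape - a sketch with placeholder functions, not real Rhino code:

```python
from concurrent.futures import ThreadPoolExecutor

def face_to_brep(mesh_face):
    # Placeholder: converting one mesh face to a brep face is
    # independent work, so it can run on any core.
    return {"face": mesh_face}

def join(brep_faces):
    # Placeholder: joining shares edges between neighbouring faces,
    # so it runs once, serially, after the parallel phase - and is
    # probably where most of the time goes.
    return {"polysurface": brep_faces}

def mesh_to_nurb(mesh_faces, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        brep_faces = list(pool.map(face_to_brep, mesh_faces))
    return join(brep_faces)

print(mesh_to_nurb([(0, 1, 2), (1, 2, 3)]))
```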

–Mitch

That would be entirely fair enough (I did say I didn't necessarily know enough of the detail to be certain). Perhaps an assessment of all commonly used commands, with notes on their feasibility, would be helpful. Otherwise it feels a bit like shooting in the dark.

How about this as a summary for a feature request:

"Where there would be a strong speed benefit to multithreading a command, it should be implemented."

This need not all be done at the same time; it could be rolled out over a series of updates. In construction we use a system of risk analysis that assesses severity of risk against likelihood of occurrence. A similar tool could be used to weight different commands. For example, you could give scores to each of the following (a rough sketch follows below the list):

  • likely scale of speed improvement
  • effort for implementation
  • how popular the command is

If each of those were given a score and the scores added together, you would get a sorted list: tools that would help the most people, with the least effort and the greatest effect, at the top, progressing down to those with little effect that would take a long time and are rarely used.
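Something as simple as this would do as a starting point (the commands and 1-5 scores below are made up purely for illustration, and I have flipped "effort" to "ease" so that higher is always better):

```python
# Hypothetical 1-5 scores per command:
# (speed gain, ease of implementation, popularity).
commands = {
    "MergeAllFaces": (5, 3, 4),
    "MeshToNurb":    (4, 2, 3),
    "ChangeLayer":   (4, 5, 5),
    "Make2D":        (5, 1, 5),
}

# Higher totals bubble the best-value candidates for multithreading to the top.
for name, scores in sorted(commands.items(), key=lambda kv: sum(kv[1]), reverse=True):
    print(f"{name:14s} total = {sum(scores)}")
```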

Perhaps tools like this are used already?

Hi Robin - what is the goal of the MeshToNurb operation for these very large meshes? Why do you need a polysurface?

-Pascal

Would a complete assessment and potential overhaul of all "popular commands" to make them multithreaded be a larger priority than Layouts in Rhino for Mac? Where would it fall on your priority list? Is this your largest pain point?

FWIW, answering this question for Make2D took about 6 months and one developer working nearly full-time on the issue.

Hi Pascal

I like to work in NURBS, partly because I think it is cleaner for a single model to be one or the other, and I am much more familiar with NURBS than meshes. I also like to leave open the possibility of using Make2D. I will also likely need to combine elements of the mesh model with elements of the new model (which is NURBS).

Basically, meshes are not really an option.

Robin

Dan

I get the feeling that I am rather irritating you with this thread. I am sorry if that is the case; it is not my intention in the slightest.

To answer your question: yes. I have no desire whatsoever for the Layouts feature. I have never used it and most likely never will. Rhino is not our only tool, and we happen to use Bentley PowerDraft for setting up 2D drawings and layouts.

I suspect that, in the AEC market, very few use Layouts in Rhino, because we have other tools that are likely more specialised (better) for such tasks.

I think if such an analysis as I proposed takes 6 months to answer, then something is not working quite right. These are intended to be assessments based on the knowledge of experienced people. Of course, you don't know 100% at that moment whether the assessment is accurate, but it allows effort to be focused.

Robin

Not at all. I'm sorry if I sound irritated; that's not my intention. I want to better understand what you want... I think I'm getting there.

I worked in Bentley Architecture/MicroStation for 4 years; I seriously miss AccuDraw.

It's good to know which performance bottlenecks are high on your list of improvements to Rhino. And that they rank as high as they do is also useful information for me. (All I see is pain points with Rhino, but that's sort of the nature of the job.)

That might be true. In our case, we have limited resources and we rely (myself included) on the assessments of a small group of people (who do have PhDs in math and lots of C/C++ development experience).

Agreed re AccuDraw. We use PowerDraft for 2D drawing creation and I love AccuDraw. I know people find it equally useful in 3D. As a method for accurate input of geometry it beats anything else I've seen, and it is in a completely different league from anything AutoCAD has.

I appreciate that adding important features is likely to come ahead of optimising ones that already exist, but that doesn't mean it shouldn't be a goal. I guess if a tool is being rewritten anyway (like Make2D) then it is easier to justify the resources. It would just be nice to know that some of the most commonly used tools are in line to be updated at some point.