I mean, in the end Rhino is the Swiss army knife of CAD. It can do a lot, but it's probably not the best tool for your particular job. And because Rhino's most important feature has shifted from surface modelling to CAD automation, I would guess that the mesh tooling is becoming more important for a certain group of people and their tasks nowadays. Still, when we talk about why meshes are preferred in architecture, I believe it's to avoid the complexity of dealing with surface data, and not so much the redundancy of dealing exclusively with non-curved geometry.

Anyway, this is not really related to the initial question. I just think that when you pay attention to proper modelling and detail, you can indeed create shapes quite efficiently in GH. Definitions become slow when you do more than you should, and when you don't understand what is happening under the hood. You can always find ways to improve performance, but of course you need to understand where the bottleneck is. I have seen a lot of definitions that were slow because of inefficient data piping.
I use meshes to approximate surfaces, to pre-build topology, to detect collisions, for fast visualization, etc., etc., etc.… all this, as you said, while my final target is still a NURBS model! Indeed!
That's the difference between a tool that computes and displays in real time and a tool that lags at every input change and feels like it's running on a 386.
(By the way, imho, Rhino is already more than good with meshes.)
Could we do everything with pure NURBS? Maybe, but it would be really unwise on the programming side (extremely slow performance and somewhat unreliable/unpredictable results). Happy 386 performance!
Faster mesh methods would be gladly welcomed by all, because we happen to use meshes to create NURBS. Look at SubDs: SubDs are meshes… but SubDs are also NURBS!
(I'd really love to have SubD methods on par with mesh methods. Currently the SubD part of RhinoCommon is a joke…)
I recall back in the early '00s, when SubDs became the preferred option in the CG industry. Folks liked them for better UV mapping and for their multiple subdivision levels, which were essential for sculpting in ZBrush, for example. Sure, they compute much faster than NURBS. But they often have a different use case.
I think some confusion may arise when use cases overlap. There are many scenarios in AEC in which you could use either NURBS or SubDs. But as a rule of thumb: if you don't need double floating point accuracy, then maybe you don't need NURBS (yet). If you're not designing something curvalicious, then why not just go with meshes? Rhino has gotten much better with meshes, and with time I'm sure it will only get better (hint hint). I use meshes more often than NURBS, and GH performance on large models is totally fine for me. Just saying.
SubD surfaces are not NURBS surfaces. However, a SubD surface is an exact NURBS polysurface equivalent except near any extraordinary/star/special points.
The equivalence does not work the other way. Many NURBS surfaces and polysurfaces do not have an exact SubD equivalent.
I explained myself poorly, sorry.
I use SubDs daily; I see them as a “bridge” from meshes to NURBS.
A SubD “control net” is nothing different from a mesh: vertices, edges, faces, ngons, topology, etc. Conversion from one to the other is instant, but later in the workflow we convert the SubD to NURBS, where the usual stuff happens (booleans, fillets, etc.).
I wanted to point out that any effort put into improving mesh methods, reliability and speed is also useful for those who work with NURBS, because any improvement translates 1:1 to SubDs.
(Not if you don’t use SubDs, obviously…)
Maybe I’m wrong. I perceive the situation this way, currently.
See QuadRemesh: the output can be translated to SubD. It makes sense.
Shrinkwrap > QuadRemesh > SubD > NURBS… we get smooth but complex shapes incredibly easily. Mesh methods applied and becoming useful in a NURBS context.
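If anyone wants to try that chain in code, here is a rough GHPython sketch. This assumes Rhino 8 RhinoCommon; the ShrinkWrap and SubDToBrepOptions calls in particular are from memory and may need adjusting:

```python
import Rhino.Geometry as rg

def mesh_to_nurbs(dense_mesh, quad_count=2000):
    # 1. ShrinkWrap (Rhino 8) to get a clean, watertight mesh.
    #    Signature assumed; tune the parameters as needed.
    sw_params = rg.ShrinkWrapParameters()
    wrapped = dense_mesh.ShrinkWrap(sw_params) or dense_mesh

    # 2. QuadRemesh to an all-quad control net.
    qr_params = rg.QuadRemeshParameters()
    qr_params.TargetQuadCount = quad_count
    quads = wrapped.QuadRemesh(qr_params)
    if quads is None:
        return None

    # 3. Interpret the quad mesh as a SubD control net (instant, 1:1).
    subd = rg.SubD.CreateFromMesh(quads)

    # 4. Convert to a NURBS Brep for booleans, fillets, etc.
    return subd.ToBrep(rg.SubDToBrepOptions.Default)
```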
They are seriously slowing down my script and can take hours on some projects (LBT + HB simulations of solar radiation with thousands of sensor points across gigabyte-sized arch models, for every hour of the year).
See, that is the problem. It's naive to believe that multithreading is the holy grail of performance optimization. Given this problem, the bottleneck is likely not the calculation involved. Of course you can vectorize (“SIMD”) multiplications and use various other tricks to speed up transposing and mass-adding numbers. But it's likely not the calculation. Instead, the bottleneck is inefficient data piping. Grasshopper was not written with the intention of dealing with lots of data. It was created to simplify automation for non-programmers, making it very modular and simple to use. But if you look at the implementations, it is the most straightforward solution a programmer can think of.
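To make “inefficient data piping” concrete, here is a toy Python illustration (numbers and operations invented) of the difference between chaining separate passes, the way a row of components does, and fusing them into one pass:

```python
values = list(range(2_000_000))

# Component-style: three passes over the data, three intermediate lists.
scaled  = [v * 0.5 for v in values]
shifted = [v + 10.0 for v in scaled]
clamped = [min(v, 100.0) for v in shifted]

# Fused: one pass, one allocation. Same result, far less data piping.
fused = [min(v * 0.5 + 10.0, 100.0) for v in values]
```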
If you have such a use case, please create a thread in this forum with a minimal example. You'll find people here who are very good at optimizing scripts. I'm pretty sure the performance can be improved a lot…
Hey Anders. I’m curious about why those components are taking hours to compute for you.
I don’t recall ever having components take that long.
Would you be willing to send a definition that has this slow performance?
I’d take a look at it, and maybe we can figure out a way to speed up its execution.
It happens when I run high-definition simulations on large models to visualise the impact of architectural features on the production of a customized solar facade. Often there are ledges and window frames that throw shade on the fully panellised facade next to them, and I need to understand what the impact is and how far to move the PV cells in each panel design so that shade does not impact electricity production. …Also, I need to generate an 8760 (hourly values for a full year) that shows aggregated production, to evaluate placement and overall production potential.
I recently ran a simulation on 17,000 sqm of facade spread over a group of 13 multi-storey residential buildings that cast shadows on each other, with lots of protruding roofs and gables… the memory usage was around 120 GB and it took hours to complete… most of that time was spent in Grasshopper, as I saw the HB Python terminal window close early on.
I hope this outlines the problem, but as I do this for clients that require confidentiality, I cannot share the project file. I hope the previously shared file can point to the issues, even though it is small.
For me personally, the region components (union, difference) should become a lot faster; right now they're just not fast enough to give real-time feedback.
That said, most of the issue would be solved with a better offset algorithm, because I mostly use the region union tool to boolean together closed curve offsets.
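For context, this is roughly that workflow in RhinoCommon terms (a minimal GHPython sketch; the tolerance and corner style values are just placeholders):

```python
import Rhino.Geometry as rg

def union_of_offsets(closed_curves, distance, tol=0.001):
    # Offset each closed curve in the world XY plane.
    offsets = []
    for crv in closed_curves:
        result = crv.Offset(rg.Plane.WorldXY, distance, tol,
                            rg.CurveOffsetCornerStyle.Sharp)
        if result:
            offsets.extend(result)
    # Region boolean union of all the closed offset curves.
    return rg.Curve.CreateBooleanUnion(offsets, tol)
```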
In the past colleagues have complained about Python’s performance within GH.
I don’t use Python scripts in GH often, so I’ve never experienced this bottleneck.
Have you tried Pollination? I thought it was designed to address this kind of performance issue.
Hi @anders , I’m not saying it isn’t possible but I highly doubt those components are the bottleneck in your definition.
That being said, if they are… I would check your data matching.
What does the tree structure/size going into the Multiplication component look like?
What are the B and C value(s)?
I ask because sometimes a grafted input paired with a flattened input can turn a millisecond execution into a multi-minute execution, but with proper data matching it can again be near-instantaneous.
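A back-of-envelope illustration of why that happens (plain Python; branch and item counts invented):

```python
# Grafted input: 10,000 branches of 1 item each.
# Flattened input: 1 branch of 10,000 items.
grafted_branches = 10_000
flat_items = 10_000

# GH reuses the single flattened branch against every grafted branch,
# so each of the 10,000 branches multiplies against 10,000 items:
print(grafted_branches * flat_items)      # 100,000,000 operations

# With matched structures (two flat lists of 10,000 items) the pairing
# is element-wise:
print(max(grafted_branches, flat_items))  # 10,000 operations
```

On top of the raw operation count, the mismatched version also has to allocate a 10,000-branch output tree with 10,000 items per branch, which is often what really hurts.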
I guess in the other post you started I’d like to see what happens before and after these three components.
I won't find the time to help you on this one in the next days, but just for reference: to compute 20,000,000 multiplications, creating a partial sum for every 2,000 items, within a .NET 7.0 console application, my medium-spec notebook takes around 100-200 ms. Without any optimisation, just two nested loops. So it's about how you combine the data and compute things in one go. Other than that, the worst bottleneck seems to be Ladybug here, but I'm sure you can work around this as well. If you are able to write code, the first thing I would do is combine components into one step. This way you reduce the number of loops and the amount of memory allocated. This alone will have a huge impact.
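For comparison, a rough Python analogue of that benchmark (shapes assumed from the description; the data is invented):

```python
import time

branches, items = 10_000, 2_000               # 20,000,000 values total
data = [[0.5] * items for _ in range(branches)]
factor = 1.25

start = time.perf_counter()
partial_sums = []
for branch in data:                           # plain nested loops,
    s = 0.0                                   # no optimisation at all
    for v in branch:
        s += v * factor
    partial_sums.append(s)
print(time.perf_counter() - start)            # a few seconds in CPython;
                                              # roughly 100-200 ms compiled
```

The point stands either way: one combined pass over the data is cheap; what is expensive is shuffling 20 million values through many separate components.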
Thanks @anders, sorry I missed this. I see that you have matching branch path counts (2,270) on your multiplication component inputs, but one tree has 8,760 items per branch and the other has 1 item per branch. So you are multiplying 19,885,200 times in that component alone. Perhaps that's needed, but I have a feeling it isn't, as I see lots of mixing of flattening and grafting prior to your multiplication component.
I’m pretty confident the primary issue (at least in this scope) is data matching and not the components themselves.
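To illustrate with the shapes from your file, here is a NumPy sketch (data invented; runs in Rhino 8's CPython script component or any Python with NumPy). Since one input is a single scalar per branch, sum(x * c) == c * sum(x), so if you only need per-branch totals you can multiply after summing and shrink 19,885,200 multiplications down to 2,270:

```python
import numpy as np

hourly  = np.random.rand(2_270, 8_760)   # ~19.9 million values
scalars = np.random.rand(2_270, 1)       # one factor per branch

# Broadcasting does all 19,885,200 multiplies in milliseconds:
scaled = hourly * scalars

# If only per-branch totals are needed, multiply after summing:
totals = hourly.sum(axis=1) * scalars.ravel()   # 2,270 multiplies
```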