Hello people,
I am looking to build a PC that would be optimal for running large Grasshopper scripts with several parameters and surface subdivisions that my Dell XPS 15 laptop just can't handle, or can handle in six hours, which kinda defeats the purpose of parametric design. I don't know a thing about building computers, so any help or references would be much appreciated.
Well, you need to look more at the efficiency of your code than at throwing hardware at it. No computer is going to turn six hours into minutes; a monster rig might be 25 percent faster some of the time. Performance doesn't scale magically with more cores.
Yes, I am aware that my scripts are sub-optimal. Any recommendations for improving them in general? I know data trees are very important, but I can't seem to find many resources out there that explain them in a comprehensive way; it feels like guesswork sometimes when my data structures don't line up. Anyway, my previous question deserves a thread of its own, but I am planning on building my own PC because that 25% would still be an improvement. Where might I start? I was doing some research and it seems that multiple CPUs working together might be a good route, but I still don't know how to do that.
Parallel processing has to be specifically coded for…and the added ‘overhead’ has to be low enough for it to actually speed things up at all.
What do you mean by added overhead?
It takes time and computer cycles to divide up a job for multiple cores, then combine the results back together.
That’s a very simple way of thinking of the “overhead” of multi-threading.
For ordinary users who aren't concerned with programming for multithreading, it's actually the perfect way of thinking about it.
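If it helps to see that concretely, here's a rough sketch in plain Python (nothing Grasshopper-specific; `cheap_task` and the item count are made up for illustration). The per-item work is so small that the cost of farming it out to worker processes and gathering the results back usually outweighs any gain:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def cheap_task(x):
    # A tiny amount of work per item -- too cheap to be worth parallelising.
    return x * x

if __name__ == "__main__":
    data = list(range(100_000))

    # Do the whole job in one chunk on one core.
    start = time.perf_counter()
    serial = [cheap_task(x) for x in data]
    print("serial:  ", time.perf_counter() - start)

    # Split the job across worker processes and collect the results back.
    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(cheap_task, data))
    print("parallel:", time.perf_counter() - start)  # often slower here: the overhead dominates
```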
Yes, this makes sense. So would you manually write the code on a per-task basis, or somehow give the computer general instructions for all tasks? Hehe, that might be a little complicated for me since I basically have no background in computer science.
Rendering is probably the easiest multithreading task to visualize.
Once the model is done, materials assigned, scenes established, and lights set up, the job can be divided up into different rectangles of the final image.
Each core can work on its smaller piece of the image at the same time the other cores are working on theirs.
When it's all done, all of the small rectangles can be reassembled into the final image.
At some point, the overhead required to split up the job, assign it, and combine the results when each piece is done takes longer than doing the job in fewer chunks.
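If you want the same idea in code, here's a toy sketch (plain Python, not a real renderer; the image size, tile size, and `shade()` function are invented for the example) of cutting an image into rectangles, working on them in parallel, and reassembling the pieces:

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILE = 64, 64, 16  # image and tile dimensions (made up)

def shade(x, y):
    # Stand-in for the real per-pixel work a renderer would do.
    return (x * y) % 256

def render_tile(origin):
    # Render one small rectangle of the image independently of the others.
    ox, oy = origin
    pixels = [[shade(ox + dx, oy + dy) for dx in range(TILE)] for dy in range(TILE)]
    return origin, pixels

if __name__ == "__main__":
    image = [[0] * WIDTH for _ in range(HEIGHT)]
    tiles = [(x, y) for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]

    # Each tile is an independent piece of work, so the pool can hand them
    # out to different workers at the same time.
    with ThreadPoolExecutor() as pool:
        results = pool.map(render_tile, tiles)

    # Reassemble the finished rectangles into the final image.
    for (ox, oy), pixels in results:
        for dy, row in enumerate(pixels):
            image[oy + dy][ox:ox + TILE] = row
```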
Well…yeah. There are libraries to help. Oh, and if you do it wrong, your threads will screw each other up and your code will crash.
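To show what "screw each other up" looks like, here's a minimal sketch (plain Python; the counter is just a stand-in for any shared data). Two threads bump the same counter, and because `counter += 1` is not a single atomic step, they can overwrite each other's updates unless the shared value is protected with a lock:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # remove this lock and the two threads can
            counter += 1    # interleave and silently lose increments

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 with the lock; often less without it
```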
But long before that, you need to look at the overall architecture of what you're doing and see what can be streamlined there. Without actually seeing the definitions, there's not much anyone can do to help.
Yeah, I can send you my definition, but I just want to clean it up first so it doesn't look like a giant mass of spaghetti to the poor souls trying to help me. I'm a little preoccupied with other tasks for school right now, so I don't have time to clean up my definition immediately, but I'll send it out in a few days.