Hi all,

I was wondering if it was possible to use gcAllowVeryLargeObjects in a RhinoCommon plugin for Rhino 5. I have some very large simulations to build, and would like to avoid being limited by the .Net 2 GB limit on arrays…

I am told that this is a .Net 4.5 addition, and that Rhino currently supports .Net 4.0. However, supported or not, it has in the past been possible to use later framework features by compiling against a newer version of the .Net framework than Rhino uses. If I compile to .Net 4.5, is there a way I can modify this configuration item?

Under normal circumstances, you would modify this by changing a config file. See the following:
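For reference, this is roughly what the usual approach looks like for a standalone desktop application, where the app owns its own config file. Note this is a sketch of the standard .NET mechanism, not something specific to Rhino; a plugin runs inside Rhino's process, so the setting would have to live in Rhino's own `Rhino.exe.config`, if it takes effect at all:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <!-- .NET 4.5+ only: allow single objects larger than 2 GB
         on 64-bit platforms. Arrays are still capped at roughly
         2^31 elements per dimension. -->
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>
```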

Just out of curiosity: what kind of data would you store in an array, that would make the array larger than 2GB?


A finite difference time domain (FDTD) calculation for the simulation of room acoustics.

It is similar to a Finite Element Method simulation for sound. Most simulations of this type aren’t quite so intensive… but where sound is concerned, you need a node density of at least 6 nodes per wavelength.

I have also run into this problem when storing a mapping calculation of sound impulse responses… each location on the map holding several thousand double precision numbers.

Trust me, it is necessary…



(David Rutten) #4

Hi Arthur,

Would it make more sense to store this data in nested arrays, sparse arrays, or maybe linked lists? It should be no problem at all to store several thousand numbers in one array, and if you store that one array in another array, the amount of memory taken up by the several thousand numbers shouldn’t matter: what gets stored in the outer array is a pointer to the inner array, which is only 32 or 64 bits.


(Steve Baer) #5

I would agree with David. Requesting contiguous allocations of that size is going to be problematic.

Unless your algorithm absolutely requires contiguous address layout, you could instead create some sort of chunked array class which looks like an array and acts like an array, but performs the allocations in chunks. The overhead is going to be really small.
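A minimal sketch of such a chunked array, assuming nothing beyond what Steve describes (the class name and chunk size here are illustrative, not from the thread). It exposes a flat indexer like a `double[]`, but the storage is split into many small allocations, none of which comes anywhere near the 2 GB per-object limit:

```csharp
// Sketch of a chunked "big array" of doubles. Indexes like one long
// array, but allocates in fixed-size blocks.
public class ChunkedArray
{
    private const int ChunkSize = 1 << 20;   // ~1M doubles (8 MB) per chunk
    private readonly double[][] _chunks;
    public long Length { get; }

    public ChunkedArray(long length)
    {
        Length = length;
        long chunkCount = (length + ChunkSize - 1) / ChunkSize;
        _chunks = new double[chunkCount][];
        for (long i = 0; i < chunkCount; i++)
            _chunks[i] = new double[ChunkSize];
    }

    // The indexer makes the class act like one contiguous array.
    public double this[long index]
    {
        get { return _chunks[index / ChunkSize][index % ChunkSize]; }
        set { _chunks[index / ChunkSize][index % ChunkSize] = value; }
    }
}
```

With a power-of-two chunk size, the division and modulo in the indexer can be reduced to a shift and a mask, so the per-access overhead really is small.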


David: Nested arrays yes… sparse arrays (since the entire cubic array is significant)… no…

So you are saying that a single large array would be restricted by the limitation, but a nested array of smaller arrays would not?

If so, that might solve the problem for the FDTD for the time being.
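If I've understood the distinction correctly, it is between one contiguous allocation and nested ("jagged") allocations, along these lines (the dimensions here are illustrative, not from the thread):

```csharp
int nx = 512, ny = 512, nz = 512;

// One contiguous block: the entire array is a single object, so it
// must fit under the 2 GB per-object limit.
double[,,] pressure = new double[nx, ny, nz];

// Jagged (nested) arrays: each innermost double[nz] is a separate,
// small allocation; the outer arrays hold only references.
double[][][] pressureJagged = new double[nx][][];
for (int i = 0; i < nx; i++)
{
    pressureJagged[i] = new double[ny][];
    for (int j = 0; j < ny; j++)
        pressureJagged[i][j] = new double[nz];
}
```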

I had considered building a class that indexed like a typical array, but stored the data in multiple arrays. Is this what you had in mind, Steve? I had been avoiding this for simplicity’s sake, but if I must, I must…



(Steve Baer) #7

Yep, this is what I had in mind. It probably wouldn’t be very much code to implement.


I’m not an expert in FDTD or FEM for that matter, but looking at the specs of available software, you should be able to split your whole solution space into independent, more manageable chunks.

What sort of problems are you trying to solve? I mean, there’s a reason why simulation in acoustics often uses image source or ray tracing methods.


Hi Hannes,

As I understand it, splitting the domain is more practical in the frequency domain. To do this in the time domain requires a lot of intercommunication between model domains. I use the FEM in frequency domain for a lot of things… That said, I think Steve’s and David’s ideas will provide an adequate fix for now. I was just hoping I could do this using a standard .Net config element.

I’m not sure this is the place to discuss the techniques used in acoustical simulation… feel free to move this off-board. Image source and ray-tracing only work for larger spaces, and only at high frequencies… Were you to try to use them for low frequency problems or for small components in practice, you may find yourself in a dangerous place.

That said, large spaces and high frequencies are incredibly memory intensive to simulate numerically.

Know your tools… each has a suite of applications, and a suite of limitations. In acoustics, I am pleased to say that we have tools that can do nearly anything these days, but none of them can do it all on their own.