OK; in a way this is good news. I can see a large increase now.
Indeed - good news, in a way.
I think the memory increase is to be expected when running such a large study using Ladybug. Computing hundreds of thousands of intersections produces a lot of data and none of it gets written out to files when you use the “LB Direct Sun Hours” component. The Ladybug components are really intended for smaller studies and they prioritize quick calculation and feedback over the ability to scale.
If you want to run a large study like this and you want it to be able to scale, you should be using the “HB Direct Sun Hours” component. It outputs the same metrics as the “LB Direct Sun Hours” component but it computes all of the intersections using Radiance (outside of Rhino/Grasshopper) and it stores the big matrices of the individual intersection calculations in external files (.ill files) such that you only load the total number of sun-hours into Rhino/Grasshopper.
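To illustrate the scaling idea described above (this is a toy sketch in plain Python, not Honeybee's actual code; the function and file names are made up for illustration): stream the full per-sun-vector detail to a file and keep only the per-point totals in memory, which is the pattern that lets the study scale.

```python
# Toy sketch of the file-backed pattern described above (NOT Honeybee's
# actual implementation; names are illustrative). The full intersection
# detail is streamed to disk, and only the small totals stay in memory.
import json
import os
import tempfile

def direct_sun_hours(rows, path):
    """rows: one list per test point, 1 if that sun vector is visible.
    Writes the full detail to `path` and returns only the totals."""
    totals = []
    with open(path, "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")  # full detail goes to the file
            totals.append(sum(row))          # only the total stays in memory
    return totals

rows = [[1, 0, 1], [0, 0, 0], [1, 1, 1]]
path = os.path.join(tempfile.mkdtemp(), "results.ill")
print(direct_sun_hours(rows, path))  # [2, 0, 3]
```

The per-point totals are all that needs to travel back into Rhino/Grasshopper; the detailed matrix can be re-read from the file only when someone asks for it.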
We will be adding Honeybee recipes for the other types of Ladybug studies (views, radiation) soon. But, generally speaking, Ladybug is for short/quick stuff and Honeybee is for large scale stuff.
Thank you @chris12 for the clarification, and @ParamDesSing and @gankeyu for your input in this thread. I assigned the memory bug (RH-63426) major priority because I think this is something that Rhino can do very well. We just needed this input to work on it.
Thanks @chris12! You’re likely right that using Honeybee would solve the issue, but I just find it weird that something that works well in Rhino 6 doesn’t work well in Rhino 7. In practical terms, I solved the issue by using Rhino 6.
@ParamDesSing it didn’t work well in Rhino 6, sorry. We had to switch to new code because, in Rhino 6, some known cases where rays would hit were not accounted for; in particular, very skew rays or tiny triangles.
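To make the tiny-triangle failure mode concrete (this is a minimal textbook Möller–Trumbore test in plain Python, purely illustrative; it is not the RhinoCommon implementation and all names are made up): a fixed absolute epsilon on the determinant, meant to reject rays parallel to the triangle, also rejects genuine hits on very small triangles, because the determinant scales with triangle size.

```python
# Minimal Moller-Trumbore ray/triangle test (illustrative only, NOT the
# RhinoCommon code). A fixed absolute epsilon on the determinant silently
# rejects genuine hits on tiny triangles.

def _sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def _dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def _cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])

def ray_hits_triangle(orig, direction, v0, v1, v2, eps=1e-8):
    """Return the ray parameter t of the hit, or None for a miss."""
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    h = _cross(direction, e2)
    det = _dot(e1, h)
    if abs(det) < eps:          # fixed absolute epsilon: the fragile part
        return None
    inv = 1.0 / det
    s = _sub(orig, v0)
    u = _dot(s, h) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = _cross(s, e1)
    v = _dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = _dot(e2, q) * inv
    return t if t > 0.0 else None

# A ray pointing straight down at triangles lying in the z=0 plane.
ray_o, ray_d = (0.25e-5, 0.25e-5, 1.0), (0.0, 0.0, -1.0)
big  = ((0, 0, 0), (1.0,  0, 0), (0, 1.0,  0))   # unit-size triangle
tiny = ((0, 0, 0), (1e-5, 0, 0), (0, 1e-5, 0))   # edge length 1e-5

print(ray_hits_triangle(ray_o, ray_d, *big) is not None)              # True
print(ray_hits_triangle(ray_o, ray_d, *tiny) is not None)             # False: det ~ 1e-10 < eps
print(ray_hits_triangle(ray_o, ray_d, *tiny, eps=1e-16) is not None)  # True
```

The ray passes through the interior of both triangles, but the tiny one is rejected by the default epsilon; a robust implementation needs a scale-aware tolerance rather than a fixed cutoff.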
@piac, thanks for the info. I’d say the ideal sounds like combining the strengths of the old code (no growing memory consumption) with the strengths of the new code (more precision).
@ParamDesSing I’m pretty certain that the memory leak is fixed in this early 7 SR7 candidate: 7.7.21125.01001. If you, @gankeyu or @chris12 have a spare minute, I would appreciate it if you could let me know how it works for you.
for Robert McNeel & Associates
@piac As far as I can tell, the memory leak is reduced, but still present.
Running the optimization for 1,000 steps used to increase memory by 6 GB; with the version you provided, it’s only around 3.5 GB. In other words, it will still crash eventually, and I have to close Rhino to free that memory.
Sorry that I didn’t read the full thread before I posted and thanks for addressing this, @piac . I will try to test it when I get the chance.
In the event it is helpful, here’s the method that is being used by all of the Ladybug studies that have the issue:
So it’s definitely the `MeshRay` method. We chose this method in RhinoCommon because we needed something really fast and scalable, and prioritized those qualities over accuracy. I understand that you would want to make the method more accurate, but if you end up implementing any optional arguments on this method, or alternative classes that give us the exact functionality of Rhino 6, we would definitely use them.
Thank you for your message. Yes, the method in Rhino 7 already works better than in Rhino 6. For us, accuracy is paramount. I’d need to time Rhino 6 to measure the differences, but from studies of other usages, I think it’s similar in order of magnitude. I apologize for adding an overload that, in IronPython and with that particular `out` trick, resolves before the one you were using in Rhino 6.
I’d be happy to organize a quick chat about ways to tackle this particular problem. I have at least two options in mind that would make the program run better: unchanged in Rhino 6, and hopefully better in Rhino 7 and beyond. I highly value your input.
However, back to the leak. After spending several hours debugging the sample by Thomas above, I did figure out the source of the native leak, but I’d need your help tracking a second memory leak that happens, it seems, in IronPython memory. This one is much more serious. Because I cannot get his sample to work in Rhino 6, I would need your input on that. It might well be an IronPython issue itself, or caused by an (in itself good) Ladybug update, or something else. But here is what I think the memory increase looks like:
Native memory looks fine, so the problem certainly happens in .NET, and apparently with IronPython types that do not get freed. It looks like a huge Python list containing IronPython lists of doubles. I’m going to try other profilers to see if I can get a clearer picture, but you might be able to tell more easily.
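The shape of the suspected leak can be sketched in plain CPython (IronPython sits on the .NET GC, so the numbers differ, but the object graph is the same): a matrix stored as a Python list of lists of doubles holds on to all of its memory until the last reference to it is dropped.

```python
# Rough CPython sketch of the hypothesis above (IronPython's GC differs,
# but the object graph is the same): a list-of-lists "matrix" of floats
# holds its memory until the last reference is gone.
import tracemalloc

tracemalloc.start()

# Stand-in for an intersection matrix: 2,000 test points x 500 sun vectors.
matrix = [[float(i * j) for j in range(500)] for i in range(2000)]
with_matrix, _ = tracemalloc.get_traced_memory()

del matrix  # drop the only reference; CPython reclaims it right away
after_del, _ = tracemalloc.get_traced_memory()

# The list-of-lists accounted for tens of MB of traced memory.
print(with_matrix - after_del > 10_000_000)  # True
tracemalloc.stop()
```

If the GHPython component (or something holding on to its outputs) keeps a reference to such a matrix across recomputes, the .NET profiler would show exactly this picture: managed memory growing while native memory stays flat.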
for Robert McNeel & Associates
Hi @piac ,
I am definitely happy to discuss different ways to tackle this particular problem; please feel free to send me an email to organize something. I realize that I should also do some more rigorous performance testing before I suggest trying to make a Rhino 6 look-alike method. Obviously, the best solution would be to have both your improved accuracy and similar speed/memory, if we can manage it.
I think the only point I was trying to make is that the Rhino 6 MeshRay intersection method was already more accurate than what people would get using “typical default” Radiance parameters (Radiance being the state of the art for this type of study). And, while people can dial Radiance parameters up to get the accuracy that they need, the fact that the MeshRay method in Rhino 6 was already better than our Radiance defaults meant that it was far exceeding the accuracy I’d expect for a quick ray tracing study using RhinoCommon.
In any event, using @ParamDesSing’s well-constructed example, I can see that everything is working as I expect in Rhino 6. The process takes ~2.8 GB of my machine’s memory and runs in 1.4 minutes:
In Rhino 7 before your fix, @piac, it takes ~6.7 GB of my machine’s memory and runs in 2.1 minutes:
In Rhino 7 after your fix, @piac, it takes ~2.8 GB of my machine’s memory and runs in 2.1 minutes:
So, from my perspective, I think you have effectively patched the whole memory leak, @piac. If @ParamDesSing could do a comparison like the one I did above, that would be helpful. I see that the Rhino 7 runtime is still longer than Rhino 6, but it’s only an increase of ~35%. So I could see myself being won over to the Rhino 7 improvements if I better understood how common the cases they fix are. In any event, I’m happy to discuss the nuances, as I mentioned. Thanks for all your support here, @piac!
Also, @piac, I think the big “list of lists” that you reference is the `intersection_matrix` in the code sample that I shared. That’s the part of the calculation that is ultimately useful, and we want to keep it in memory in case people want to analyze that data. I imagine this matrix is why the Rhino 6 process still takes 2.8 GB. In any case, there’s no need to optimize this part. If @ParamDesSing needs that level of memory optimization, they can just comment out this line of code in the Grasshopper component and add a `del int_matrix` at the end of the component. Then the matrix gets deleted from memory and is not output from the component.
I verified that `del int_matrix` at least partly works as a solution if more memory optimization is needed:
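The pattern can be sketched in plain Python (this is illustrative only, not the actual Ladybug component; `run_component`, `Matrix`, and the sizes are made up). A weakref probe shows that once the component neither outputs nor references the big matrix, `del` removes the last strong reference and the interpreter can reclaim it, while the small totals survive:

```python
# Plain-Python sketch of the `del int_matrix` pattern above (NOT the actual
# Ladybug component; names are illustrative). A weakref probe shows the big
# matrix is reclaimed once the last strong reference is dropped.
import gc
import weakref

class Matrix(list):
    """list subclass, only so we can attach a weak reference to it."""

def run_component():
    int_matrix = Matrix([1.0] * 100 for _ in range(100))  # stand-in big matrix
    probe = weakref.ref(int_matrix)
    sun_hours = [sum(row) for row in int_matrix]  # the small result we keep
    # int_matrix would normally be assigned to a component output here;
    # instead, drop the last strong reference to it:
    del int_matrix
    gc.collect()
    return sun_hours, probe

sun_hours, probe = run_component()
print(probe() is None)   # True: the big matrix was garbage-collected
print(sun_hours[0])      # 100.0: the totals are all that remain
```

In GHPython the same idea applies: as long as the matrix is not assigned to an output (so Grasshopper holds no reference to it), `del int_matrix` lets the .NET garbage collector reclaim it.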
@piac, thanks for looking into this! What’s your issue with the example file in Rhino 6? For me it runs fine and, as I said, without memory leaks.
I’m observing something other than what you notice. I’ll send a PM to set up a call.
This does appear to be a general pattern for “large” GHPython computations, where Rhino 6 was slightly slower than Rhino 5, and Rhino 7 is noticeably slower than Rhino 6:
It might well be. There was also a memory leak earlier; that is fixed. There may be some further optimizations possible, too.
Contrary to what I wrote earlier, I now think that the update fixed the memory leak completely.
When I’m testing the file with optimization (i.e., the definition is recalculated at every iteration) I don’t see a memory increase anymore.
I still see an increase in the test file, but that is because it calculates all the instances at once.
(I removed the optimization to make the file easier to share and use, but this actually leads to misleading results.)
In other words, with the memory leak, recalculations didn’t make a difference; without it, they do, which is as it should be. There might still be issues with large files, but my original problem is solved.