Anemone (for Reinforcement Learning) getting slower at each iteration

I’m working on a project where I set up Grasshopper as an OpenAI Gym environment to train an agent to place modules in a 3D grid in a way that maximizes a series of goals. For this I use several simple GHPython components that update a 3D array of integers according to the action taken by the agent, all inside an Anemone loop.

The thing is that, as the iterations accumulate, my definition seems to become slower, and I’m not even dealing with geometry inside the loop. The only things done per iteration are: take the array from the last iteration, get the action chosen by the agent, update the array accordingly, and count the number of occurrences of specific values in the array. So the complexity of the data shouldn’t be growing.
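In pseudo-code, the whole per-iteration step is roughly this (simplified; the variable names and the action encoding are made up, not exactly what is in the attached file):

```python
# state:  list of integers coming from the previous Anemone iteration
# action: what the agent chose, e.g. (cell_index, module_id)
cell, module_id = action
new_state = list(state)          # copy the grid from the last iteration
new_state[cell] = module_id      # apply the agent's action
# count how often the values that the goals care about appear
counts = {v: new_state.count(v) for v in (0, 1, 2, 3)}
```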

I’m also using sc.sticky to store the stats of the previous iterations (mainly to avoid wires). Is that problematic in any way?
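Roughly, the sticky usage looks like this (the key name and the reset input are just placeholders, not the exact names in the definition):

```python
import scriptcontext as sc

key = "ghgym_stats"
if reset or key not in sc.sticky:
    sc.sticky[key] = []                  # start fresh on reset
sc.sticky[key].append(iteration_stats)   # stats produced by this iteration
stats = sc.sticky[key]                   # read back without wires
```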

Thanks! If I haven’t been clear, I’m happy to provide additional info.

I’m attaching my definition here.

By the way, I will need to heavily optimize my definition, and there are two fronts where I think I could improve it:

  • I’m using a lot of for loops to translate datatrees to arrays. Frankly, any datatree structure with more than two dimensions seems a bit of a pain to handle in GHPython scripts. If there is a better way of doing this, I would be happy to know it (see the first sketch after this list).

  • At the end I’m placing the modules in their correct positions (rendering the environment, in a way) by duplicating the original modules into a datatree according to the array of values (or indices), and then moving them all to the points that correspond to coordinates in the array. It seems a bit of a messy way to do it, and there is probably a better approach that I don’t know (second sketch below).
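To make those two points concrete, here is roughly what both steps look like at the moment (heavily simplified, with made-up variable names, not the exact code from the attached definition).

The datatree-to-array conversion is basically nested loops like this, where a path {i;j} of the input tree holds the k values of the grid:

```python
# "tree" is a DataTree input set to Tree Access
grid = {}
for path in tree.Paths:
    i, j = path.Indices[0], path.Indices[1]
    grid[(i, j)] = list(tree.Branch(path))   # third dimension as a plain list
```

And the “rendering” step is essentially copy-and-translate, shown here on flat lists instead of the datatree I actually use:

```python
import Rhino.Geometry as rg

# values/points: flat lists, one entry per grid cell
# modules: list of base module geometries, value 0 meaning an empty cell
placed = []
for value, point in zip(values, points):
    if value == 0:
        continue
    copy = modules[value].Duplicate()                              # copy the module
    copy.Transform(rg.Transform.Translation(rg.Vector3d(point)))  # move it to the cell
    placed.append(copy)
```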

Thanks again!
ghgym_grid.gh (49.6 KB)