Thanks for typing this up. I spent a bunch of time thinking about this approach last night.
I see what you are doing, but boy does that get complicated, and I can't tell how you would link the results back to the caller. The nice thing is that the compute server API allows for this pattern in order to bulk upload. I definitely want to support "multiple"-style calls from the C# SDK, but at the same time I really want to keep this as simple as possible.
Going off of your concept of creating pipelines of objects, I'm playing around with having "Bulk" versions of the compute functions which return an object that encapsulates the function call, the JSON data, and the return information. I implemented this as a ComputeBlock generic data structure which holds on to the function name and JSON data. The compute functions return ComputeBlock objects, and you then pass an array of these to the ComputeServer class for processing. This technique got me down from 45 seconds in the "serial" mode to around 8 seconds using ComputeBlocks. It's essentially the same thing you were doing, but hopefully with some of the complexity out of the way.
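To make the idea concrete, here's a rough sketch of what I have in mind. This is just an illustration of the pattern, not the actual code in the branch — the type members, the `GetLengthBulk` function, and the `ComputeServer.Process` method are all hypothetical names I'm using for the example:

```csharp
using System.Collections.Generic;

// Hypothetical sketch: a ComputeBlock captures one deferred compute call
// (endpoint name + serialized arguments) so many of them can be sent
// to the server in a single bulk request.
public class ComputeBlock<T>
{
    // Name of the compute endpoint to invoke.
    public string FunctionName { get; }
    // JSON-serialized arguments for that endpoint.
    public string JsonData { get; }
    // Filled in after the server processes the batch.
    public T Result { get; set; }

    public ComputeBlock(string functionName, string jsonData)
    {
        FunctionName = functionName;
        JsonData = jsonData;
    }
}

// Hypothetical usage: "Bulk" variants of the compute functions build
// ComputeBlocks instead of making an HTTP call per invocation, and one
// Process call sends the whole array and fills in each Result.
//
// var blocks = new List<ComputeBlock<double>>();
// foreach (var curve in curves)
//     blocks.Add(CurveCompute.GetLengthBulk(curve)); // no server round-trip yet
// ComputeServer.Process(blocks);                     // single bulk request
// double firstLength = blocks[0].Result;
```

The point is that the caller keeps a handle on each block, so linking results back to what requested them falls out naturally — you just read `Result` off the same object you queued.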
Here’s the feature branch for ComputeBlocks based on your prototype.
I'm autogenerating the RhinoCompute.cs file from our internal RhinoCommon source and can probably automatically create versions of all of the functions that return a ComputeBlock, alongside the current serial return style.