I’m intrigued by a problem I’ve seen in the AEC industry at least: tracing demand on Rhino Compute servers back to individual teams or projects for cost reconciliation.
This is one contribution I came up with and I’d love to hear your thoughts, at whatever angle comes to mind.
It shows how the duration of a solve of a GH file run on Rhino Compute via Hops can be calculated and sent to Azure App Insights, along with tags that differentiate the team/project/etc. The IT/Finance team could then use these logs to charge the cost of the server back to the teams based on those durations.
Assumed context: many teams working on GH files, using Hops to run computationally heavy GH logic.
The IT team have created an Azure Application Insights resource, which generates an Instrumentation Key that they give to all teams.
Each team member has their Grasshopper installation set up with one Rhino Compute URL in File → Preferences → Solver.
Each team has been given a template for use in Hops definitions, which looks something like this:
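As a rough illustration of what that template does (a sketch, not the actual template): it measures how long the solve took, then posts a custom event to App Insights. Something like the following C# could sit in a script component inside the Hops definition, assuming the duration has already been measured (e.g. with a Stopwatch around the heavy part of the definition). The event name HopsSolve and the helper name are placeholders; the payload shape follows the public Application Insights v2 "track" endpoint.

```csharp
using System;
using System.Globalization;
using System.Net;

// Sketch only: posts one custom event to Azure Application Insights.
// iKey, team and project would arrive as inputs to the script component;
// durationMs is the measured solve time.
public static class SolveLogger
{
    public static void LogSolveDuration(string iKey, string team, string project, double durationMs)
    {
        // Hand-built JSON for the App Insights v2 "track" endpoint:
        // a custom event named "HopsSolve" carrying the tags as properties
        // and the duration as a measurement.
        string payload =
            "{\"name\":\"Microsoft.ApplicationInsights.Event\"," +
            "\"time\":\"" + DateTime.UtcNow.ToString("o") + "\"," +
            "\"iKey\":\"" + iKey + "\"," +
            "\"data\":{\"baseType\":\"EventData\",\"baseData\":{" +
            "\"name\":\"HopsSolve\"," +
            "\"properties\":{\"team\":\"" + team + "\",\"project\":\"" + project + "\"}," +
            "\"measurements\":{\"durationMs\":" +
            durationMs.ToString(CultureInfo.InvariantCulture) + "}}}}";

        // WebClient is one of the deprecated System.Net types mentioned below,
        // but it is still available in Rhino's .NET runtime.
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/json";
            client.UploadString("https://dc.services.visualstudio.com/v2/track", payload);
        }
    }
}
```

Each team would only need to edit the tag values in their copy of the template.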
When the teams open and alter the inputs to the Hops components, the IT team will see custom log entries like these appear in the Azure App Insights resource they set up:
The durations are then summed per set of tags, and a proportional share of the server cost is charged back to each team/project.
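To make that apportioning concrete, here's a toy example of the arithmetic; the team names, durations and monthly bill are all made up:

```csharp
using System;
using System.Linq;

// Hypothetical month of summed durations per tag set (values invented).
var usage = new[]
{
    (Team: "Structures", DurationMs: 42_000.0),
    (Team: "Facades",    DurationMs: 18_000.0),
};

double monthlyServerCost = 600.0; // hypothetical server + licence bill
double totalMs = usage.Sum(u => u.DurationMs);

// Each team is charged its share of the total logged solve time.
foreach (var u in usage)
    Console.WriteLine($"{u.Team}: {u.DurationMs / totalMs * monthlyServerCost:F2}");
// Structures: 420.00
// Facades: 180.00
```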
Some of my own thoughts:
I wanted to see if I could avoid using any plug-ins for this solution, and in the process found that the only way to make HTTP calls (e.g. to Azure) from C# scripts within GH files is using some deprecated/obsolete parts of the System.Net library; maybe the use of something like Swiftlet might have been better
there might still need to be some automation on the Azure side to process the logs to make this financial reconciliation feasible
it does rely on good faith among the staff using the shared Rhino Compute service
using durations to calculate the proportion of demand is not exact, since a duration only describes one thread on the server, and some costs, like Rhino licenses, accrue by time rather than CPU usage
Your thoughts are very welcome - thanks in advance!
Cheers - Nic
P.S. if you have DefinitionLibrary installed, you can make the named clusters you see in the first screenshot, as well as the example, appear in your searchable list of definitions by adding this URL to your connections in the DefinitionLibrary settings:
I don't know if RhinoCompute is a good way to go in this scenario. You basically describe a CI/CD pipeline, and you could also give each team their own "build server": basically a normal PC where Rhino is installed and some form of runner (e.g. a GitLab Runner or Jenkins agent), which runs Rhino/GH headless from a corresponding web interface. If you even start to do web requests, you could also build a Windows service which does it directly from another PC. Or you basically allow remote access to a virtual user on this dedicated PC and automate as much as you can. But this pay-on-demand is evil: you deal with a lot of amateurs in regards to cloud computing, and with a couple of minor mistakes your costs are going out of control, IMHO. The same holds true for your system architecture. It sounds a bit odd that there is so much computing involved. Are you sure your system is well designed?
“You basically describe a CI/CD pipeline” - not sure how you could conclude that from what I wrote.
“Basically a normal PC where Rhino is installed and some sort of runner … which runs Rhino/GH headless” - that is what Rhino Compute is. It’s a headless Rhino/GH instance designed to run on a server with an API that can take requests from anyone with access to the address and ApiKey. And Hops is a component in GH designed to call Rhino Compute.
“odd that there’s so much computing involved” - I don’t think it is considering I’m talking about engineers and architects.
The difference from my proposal is that these solutions are charged differently. You pay for a single license, so there is no need to observe how much each team contributed to the bill. It would not depend on time.
And a CI/CD pipeline is also about creating artifacts, in this case a final model. I mean, I don't see the use case here because I don't know the details. But why do you do all this overhead at all?
Btw, I am an engineer and work in a corporation with tens of thousands of engineers. Just in case you question my authority to answer here.
In my observation, engineers in consultancies bill their clients for their time and the resources they use in completing a job, but they can work on multiple concurrent jobs.
Engineers’ time and resource costs need to be split up by project so that the consultancy can charge a client just for the time and resources needed for their particular job.
Therefore, if there’s a shared resource (like a Rhino Compute server) that is used by multiple projects, then the licence and cloud/IT costs to run that server need to be billed to the various clients proportionally.
If consultancies can’t find a way to do that, they might be reluctant to set up those Rhino Compute servers, which would be a shame, as the availability of faster computation could make projects more efficient.