Rhino Inside Python - on Windows Server, core-hour billing

If we want to use Rhino.Inside on Windows Server, licensing is handled by the core-hour billing server: Rhino 7 and Windows Server - #3 by brian

For Rhino Compute I use Util.apiKey or Util.authToken to provide authentication.

What is the Inside equivalent? How do I use core-hour billing with Inside?

We have Rhino 7.10 installed on an AWS WorkSpaces Windows Server instance. This machine has no access to a license, but the 7.10 install appears to have no complaints about running on Windows Server (unlike 7.8). It loads fine up to the point where it needs a license, then reports that one is required.

The most recent attempt to get Inside working on this machine with core-hour billing looks like this:

import rhinoinside
import System as sys
sys.Environment.SetEnvironmentVariable('RHINO_TOKEN', '<redacted>', 1)  # 1 = EnvironmentVariableTarget.User
import Rhino as r

But rhinoinside.load() fails with this (same result if the third param for the sys call is 0 instead of 1):

The hurdle appears to be purely about licensing; everything else seems fine?
The auth token we’re using is the same one we use for core-hour billing for Compute.

If the third param is 2 (i.e. ‘Machine’) as recommended it fails at that line with the following:

@will can you please help with Amazon WorkSpaces startup?

@fergus.hudson what is your intended use case for Rhino.Inside on this machine? Will you be using Rhino interactively, or as a web service?

@brian as a web service. Users will send jobs to the platform, and Rhino.Inside workers will process them and return results. We want to be able to run many workers in parallel, so we don’t want the limitations of individual licenses; we’d like to make use of core-hour billing. We’re looking at Inside because we couldn’t achieve the performance we needed with Compute, even when batching calls.

@will if you could point us at the path to getting Inside to do this I’d be extremely grateful!

@fergus.hudson, try setting the token with os.environ instead.

os.environ['RHINO_TOKEN'] = '<token>'

Or configure it via PowerShell before running the Python script…

[System.Environment]::SetEnvironmentVariable('RHINO_TOKEN', '<token>')
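Note that both approaches set the variable for the current process (and its children) only, which is all `rhinoinside` needs, since it reads the token in-process. A minimal sketch of that inheritance, with a placeholder value standing in for a real token:

```python
import os
import subprocess
import sys

# Placeholder value -- substitute a real core-hour billing token.
os.environ['RHINO_TOKEN'] = 'dummy-token'

# The variable is visible in this process and inherited by child
# processes, so rhinoinside.load() called here would see it too.
child = subprocess.run(
    [sys.executable, '-c', "import os; print(os.environ['RHINO_TOKEN'])"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # dummy-token
```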

I don’t think Amazon WorkSpaces is the right platform for a web service. You might consider EC2 instead.

@will thanks very much for that. Unfortunately it didn’t work though.

rhinoinside.load() isn’t successful:

================================================ ERRORS ================================================
__________________ ERROR collecting Share/test/testing/fixtures/test_inside.py __________________
Share\test\testing\fixtures\test_inside.py:14: in <module>
    import Share.test.create as script
Share\test\create.py:9: in <module>
    import Share.test.library as l
Share\test\library.py:5: in <module>
.env\lib\site-packages\rhinoinside\__init__.py:43: in load
    __rhino_core = Rhino.Runtime.InProcess.RhinoCore()
E   System.Runtime.InteropServices.COMException: Error HRESULT E_FAIL has been returned from a call to a
COM component.
E      at Rhino.Runtime.InProcess.RhinoCore.InternalStartup(Int32 argc, String[] argv, StartupInfo& info, IntPtr hostWnd)
E      at Rhino.Runtime.InProcess.RhinoCore..ctor(String[] args, WindowStyle windowStyle, IntPtr hostWnd)

That’s interesting - there’s not a lot of middleware in there. Were you able to determine what was causing the latency? What are you doing differently with Rhino.Inside that is faster?

I’ll let Will keep answering on getting the token working.

My best guess is that there’s something wrong with the token or the way it’s being set. If you try to run rhinoinside.load() without setting the RHINO_TOKEN environment variable, you should see the following message printed to the console…

Rhino is not supported on Windows Server. To run Compute on Windows Server, see https://www.rhino3d.com/compute.

Since I’m not seeing that in your output, the RHINO_TOKEN env var must be present but perhaps it isn’t valid.
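One quick way to rule out the variable being absent before calling `rhinoinside.load()` is a check like this (a hypothetical helper, not part of `rhinoinside`; it reports only the length, never the token itself):

```python
import os

def rhino_token_status():
    """Report whether RHINO_TOKEN is visible to this process."""
    token = os.environ.get('RHINO_TOKEN')
    if not token:
        return 'missing'
    # Report length only -- never echo the token value itself.
    return 'set (%d characters)' % len(token)

print(rhino_token_status())
```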

Can you try getting the token again and running this simple script?

import rhinoinside
import os

os.environ['RHINO_TOKEN'] = '<token>' # replace me!

rhinoinside.load()
import Rhino

Thanks Will! Generating a new token was the answer, our code works fine with that.

We aren’t planning to run a service from WorkSpaces; we were just using it to make sure we could run Inside on an AWS instance.

We did a bunch of benchmarking with a suite/script we put together ourselves after struggling to get our services to run in times that would be acceptable to users.

When using a localhost compute server our unbatched time was 74 seconds, batched was 29.
When running the same benchmark with inside our time was 11 seconds.

When we ran it with compute remotely (workers and compute server both hosted in AWS) times were much worse (425 seconds for unbatched).

Even if that performance ratio of ideal Compute to Inside (29 to 11) were acceptable, Compute would still be unattractive for us because:

  • Our (current) process is fairly linear in most places (our benchmark is much better suited to batching than our actual code is). I estimate that if we batched every call in the system we might double performance relative to unbatched (i.e. we’d go from 425 to 212 seconds), which isn’t good enough
  • Batching calls is painful, for two reasons: batched calls will not run on arrays of length 1, and batched calls fail entirely if any single value in an input array is None. So we can’t simply write code that steps through a series of API calls, passing numpy arrays between each, and then just look at the end results; we need to carefully curate the entire process, checking for errors at every step, removing individual values, packing the arrays back to length, and so on. Converting code of any significant size to batched calls is quite exhausting.
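The curation described above can be sketched roughly like this. The `batched_fn`/`single_fn` callables are hypothetical stand-ins for a batched Compute endpoint and its unbatched equivalent, under the stated assumptions that batched calls reject length-1 arrays and fail on any None input:

```python
import numpy as np

def call_with_curation(batched_fn, single_fn, values):
    """Run a batched API call over values that may contain None.

    batched_fn and single_fn are placeholders for a batched Compute
    endpoint and its unbatched equivalent.
    """
    values = np.asarray(values, dtype=object)
    valid = np.array([v is not None for v in values])
    survivors = values[valid]
    out = np.full(len(values), None, dtype=object)
    if len(survivors) == 0:
        return out  # nothing to send
    if len(survivors) == 1:
        # batched endpoints reject single-element arrays, so fall
        # back to the unbatched call for the lone survivor
        results = [single_fn(survivors[0])]
    else:
        results = batched_fn(list(survivors))
    # repack to the original length, keeping None at the removed slots
    out[valid] = np.asarray(results, dtype=object)
    return out
```

Every step in the pipeline needs this filter/repack dance, which is why converting a large codebase is so tedious.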

Our API calls are all in a library separate from our scripts, so to switch from Compute to Inside we just replace the library; 95% of our code remains identical (maybe 99.9% in our benchmark), so I don’t believe it’s a case of us doing things differently.
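A thin adapter layer like the one described might look like this. This is a sketch under assumptions: the backend classes, the `mesh_volume` method, and the `GEOMETRY_BACKEND` switch are all hypothetical, not real compute_rhino3d or Rhino APIs:

```python
import os

class ComputeBackend:
    """Routes geometry calls to a Rhino.Compute server over HTTP."""
    def mesh_volume(self, mesh):
        raise NotImplementedError  # would call the Compute endpoint here

class InsideBackend:
    """Runs geometry calls in-process via Rhino.Inside."""
    def mesh_volume(self, mesh):
        raise NotImplementedError  # would use Rhino.Geometry after rhinoinside.load()

def get_backend():
    # Scripts call get_backend() once; everything downstream of the
    # adapter is identical regardless of which backend is selected.
    if os.environ.get('GEOMETRY_BACKEND') == 'inside':  # hypothetical switch
        return InsideBackend()
    return ComputeBackend()
```

Keeping the geometry calls behind one interface is what makes the Compute-to-Inside swap a library replacement rather than a rewrite.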

Anecdotally, a script I’ve been working on in recent weeks went from taking approximately 1 minute to run against a localhost Compute server to approximately 1 second with Inside. We’re not looking forward to having to use Windows workers instead of Linux, but performance gains like this open up a whole world of optimisation possibilities in the designs our system creates.

Would you be willing to meet with us to show us what you’re seeing, and maybe go into a bit more detail on the differences between the two setups? I’m really shocked to see such huge differences in timing.

Are you using Rhino.Inside CPython for your solution? I wonder if just running CPython vs IronPython (in Rhino) makes up the bulk of the difference?

I’ll email you Brian.
