429 Too Many Requests with not so many users

Hello,

We are using the direct embedding integration for one of our sites (sdr7euc1 — Rhino 7, shared Geometry Backend), and have started experiencing 429 Too Many Requests errors with a really small number of concurrent testers (~5 people).

We experienced this before with another project, which is why we already added a delay between requests to avoid very frequent requests from a single client. That fixed the issue for the other project, but not for the current one. We assume it may come from Starter plan limitations.
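The delay we added is essentially a client-side throttle. A minimal sketch of the idea in TypeScript (the function name and parameters are illustrative, not part of any SDK):

```typescript
// Client-side request spacing: allow at most one request per minIntervalMs.
// Returns how many milliseconds to wait before sending the next request,
// given the timestamp of the previously sent request.
function delayBeforeSend(
  lastSentAt: number,    // ms timestamp of the previous request (0 if none yet)
  now: number,           // current ms timestamp, e.g. Date.now()
  minIntervalMs: number, // minimum spacing between requests
): number {
  const elapsed = now - lastSentAt;
  return elapsed >= minIntervalMs ? 0 : minIntervalMs - elapsed;
}

// Usage: if delayBeforeSend(...) returns 0, send immediately;
// otherwise schedule the request with setTimeout for the returned delay.
```

Note this only spaces out requests from a single browser tab; it cannot coordinate across concurrent users, which matches what we observe.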

We don’t want to switch to a higher plan, since the current number of concurrent users is far too small to justify an upgrade, so we are exploring other ways to fix this.

Some more details:

  1. We do not experience this error when one user makes lots of fast changes; it only happens when we have concurrent users.
  2. Our current test model combines configuration and export in one; we will separate them into two different models, as was advised in the other project.
  3. Our stack is Vue (Nuxt) on the frontend, and Strapi (Node.js) on the backend.

Questions:

  1. During our research, we tried to understand how the limits work. Is it correct that the limits can affect our users when ShapeDiver sees a spike of sessions across many clients (not only ours)? Or are they set per client?
  2. How exactly do these limits apply? X requests per period, refreshing every Y seconds, or resetting at the start of every minute/hour, etc.?
  3. If this is a common issue, what is currently the optimal solution to avoid blocking users and to allow at least 30-50 concurrent sessions?

Thanks!

Which version of our viewer are you using? It’s probably version 3, but I want to make sure. Version 3 transparently handles rate limiting for you.

About the rate limiting you are seeing: our backend will send 429 rate limit replies (with an appropriate Retry-After header) if the rate of incoming requests for your model is higher than the rate at which they can be processed. This processing rate depends on various factors, such as the type of your account and the general load of the system. It does not depend on the number of sessions or users. Of course, the computation time of your model also strongly influences the processing rate.

If you are using viewer version 3 or our TypeScript SDK, rate limiting is transparently handled for you. If you are calling our API directly, it is important to handle rate limit replies and retry according to the Retry-After header.
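For anyone calling the API directly, the retry logic could be sketched like this in TypeScript (a hedged illustration only; `fetchWithRetry` and `parseRetryAfter` are made-up names, not part of the SDK):

```typescript
// Convert a Retry-After header value into milliseconds to wait.
// Per the HTTP spec the header is either a number of seconds or an HTTP date.
function parseRetryAfter(header: string | null, fallbackMs = 1000): number {
  if (!header) return fallbackMs;
  const seconds = Number(header);
  if (!Number.isNaN(seconds)) return Math.max(0, seconds * 1000);
  const date = Date.parse(header);
  return Number.isNaN(date) ? fallbackMs : Math.max(0, date - Date.now());
}

// Retry a request on 429, waiting the amount the server asks for.
async function fetchWithRetry(url: string, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url);
    if (res.status !== 429 || attempt >= maxRetries) return res;
    const waitMs = parseRetryAfter(res.headers.get("Retry-After"));
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
}
```

This is exactly the kind of handling viewer version 3 does for you, which is why you normally don't need to implement it yourself.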

Yes, it is version 3 (screenshot).

This processing rate depends on various factors, such as the type of your account and the general load of the system. It does not depend on the number of sessions or users.

Does the dependence on ‘the general load of the system’ mean that if other clients’ integrations send lots of requests at a given moment, it influences our configurator as well? So we should not think about putting our users in a queue etc. to distribute the load?

As long as you are using one of our shared systems (Free, Starter, Business accounts), the cloud resources used for computing your Grasshopper models are shared with other accounts. The systems autoscale depending on load; however, temporary fluctuations in available bandwidth are still possible.

You mentioned that you experienced “errors”: are you referring to messages in the browser console, or did you actually see exceptions thrown by our viewer?

What is the average computation time of your model?