Rate limit exceeded

We are using ShapeDiver with the Backend API to create 3D models of our parametric products from Ninox. It was all working fine until a few days ago, when our account expired. We renewed the account, and the dashboard shows we have only used 16 out of 10,000 available credits. Yet when we call the API we get an error message saying “Rate limit exceeded”.

How can this be? Did you change anything in the way the backend API works?

Could you tell me which ShapeDiver account this is?

hansen@hansen.ch

This error doesn’t refer to credit consumption but to the number of requests coming from your application. ShapeDiver returns the HTTP status code 429 with the message “rate limit exceeded” to tell the consumer of the API to throttle its requests. It’s a feature to maintain the performance of our infrastructure, not a bug. You need to manage the rate of requests in your client application to avoid this problem.
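
For illustration only, here is a minimal Python sketch of how a client can detect the 429 status and react to it. It assumes the requests library; the URL, token and request body are placeholders, not the actual ShapeDiver Backend API endpoint.

```python
import requests

# Hypothetical endpoint and token -- replace them with your actual
# ShapeDiver Backend API request (URL, headers, body).
API_URL = "https://api.example.com/computation"
HEADERS = {"Authorization": "Bearer <your-access-token>"}

response = requests.post(API_URL, json={"parameters": {}}, headers=HEADERS)

if response.status_code == 429:
    # The API is asking the client to throttle: wait and retry the same
    # request later instead of treating this as a hard failure.
    print("Rate limit exceeded -- retry later")
else:
    response.raise_for_status()
    result = response.json()
```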

Oh, I see. We send out the requests all at once, I guess. What are acceptable rate limits?

There is no magic number, but the common practice when processing batches of jobs via ShapeDiver or any other API is the following (a rough sketch of this loop follows after the list):

  1. keep a queue of jobs
  2. pick the next job and submit it
  3. in case of a 429 “rate limit exceeded” error, wait (roughly the computation time of your model) and try again, several times if necessary
  4. repeat from step 2
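
A rough sketch of that loop in Python; submit_job, the wait time and the retry count are placeholders you would fill in with your actual API call and your model’s typical computation time.

```python
import time
from collections import deque

def process_queue(jobs, submit_job, wait_seconds=5, max_retries=10):
    """Process jobs one after another, retrying a job that gets a 429.

    submit_job is a placeholder for your function that sends a single
    request to the ShapeDiver Backend API and returns the HTTP response.
    """
    queue = deque(jobs)                    # 1. keep a queue of jobs
    results = []
    while queue:
        job = queue.popleft()              # 2. pick the next job
        for attempt in range(max_retries):
            response = submit_job(job)     #    ... and submit it
            if response.status_code != 429:
                results.append(response)
                break
            # 3. rate limited: wait roughly the computation time of
            #    your model, then try again (several times if needed)
            time.sleep(wait_seconds)
        # If a job is still rate limited after max_retries attempts it is
        # skipped here; you may prefer to re-queue it instead.
        # 4. repeat from step 2 with the next job in the queue
    return results
```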

In your first answer you said we could send them all at once and it would just take a long time. Since you edited your answer, I guess that wasn’t correct (and wouldn’t have made sense).

Now you are saying we have to send them all serially, which makes a gigantic difference. It would imply that ShapeDiver isn’t actually built to be used by many people at once. Say I have a ShapeDiver viewer embedded on my website and a hundred people try to use it at once; you make it sound like that would be a problem. If it isn’t a problem, then I don’t understand why it’s a problem for the API. It’s not like it is a constant stream of requests. We have a bulk process when creating products that runs through a queue of, say, 1,000 requests about once a month.

I am still a bit confused that there is no concrete number, unlike other APIs I have used (such as Google’s APIs, which have very clear usage limits).

Exactly, that answer wasn’t precise.

Regardless of the API you are developing a client application for, you always have to handle the case that a request might be rejected with a 429 HTTP status code due to rate limiting (or with other error status codes). That’s just how REST APIs work, and it needs to be taken care of in the application consuming the API.

Pavol was hinting at a common-practice method for dealing with rate limiting on the client side. If you search for this topic you will find a lot of good input. Example (a minimal sketch follows after the list):

  1. set wait_time to some initial value (e.g. the typical computation time of your model)
  2. send request
  3. in case of 429, wait for wait_time, increase wait_time (typically using exponential growth), go to step 2
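
A small Python sketch of that backoff loop, under the assumption that send_request, the initial wait time and the growth factor are placeholders for your actual API call and model computation time:

```python
import time

def send_with_backoff(send_request, initial_wait=5.0, factor=2.0, max_attempts=8):
    """Retry a single request, growing the wait time exponentially on 429.

    send_request is a placeholder for a function that performs one API
    call and returns the HTTP response.
    """
    wait_time = initial_wait            # 1. initial value, e.g. the typical
                                        #    computation time of your model
    for attempt in range(max_attempts):
        response = send_request()       # 2. send the request
        if response.status_code != 429:
            return response
        time.sleep(wait_time)           # 3. wait for wait_time ...
        wait_time *= factor             #    ... increase it, and go to step 2
    raise RuntimeError(f"Still rate limited after {max_attempts} attempts")
```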

If you properly implement this kind of common-practice handling, you can try to send all of your, say, 1,000 computation requests at the same time, but it’s not a good idea. Most of them will likely be rejected with a 429 several times, resulting in wasted resources on both sides.

This is not what Pavol said. There seems to be a misunderstanding of the algorithm he explained.

Your conclusion is not correct. Rate limits are put in place precisely to deal with this kind of situation. They also force you to implement your client application properly.

We might publish such numbers at some point, but in fact they have no influence on the methodology for dealing with rate limits on the client side.

Please let me know if you have any further questions or comments.