Hops Flask server on Heroku timeout

Hi there,

We are developing a Hops Flask server on Heroku. Everything works fine locally (with data of any size) and remotely with small datasets, but when we send a larger amount of data the app fails with an H12 request timeout error.

Heroku's router seems to limit requests to 30 seconds, and setting a higher gunicorn timeout doesn't get around it. The suggested workarounds are to make the program more efficient or to move the work to background tasks.

Is the Hops component able to work with background tasks? Is the asynchronous option in the component related in any way to asynchronous server requests, or only to keeping the Grasshopper screen from freezing?

The asynchronous option in Hops is only for not freezing the Grasshopper screen. It still calls the server and waits for a response, but does so on a thread other than the main UI thread.

Here’s something you may want to try.

  • Create a function that receives data, stores it in a dictionary on your server and then returns a unique id.
  • Create a second function that receives the unique id instead of all of the data. The data can then be accessed from the dictionary and processed in this second function.
  • Chain the two Hops components together to break your server interaction into two separate steps (see the sketch below).
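Something like this minimal ghhops_server sketch shows the idea. The endpoint names and the sum() placeholder are made up, and the in-memory dictionary assumes a single server process (with multiple gunicorn workers each worker gets its own copy, so a shared store like Redis would be needed in practice):

```python
import uuid

from flask import Flask
import ghhops_server as hs

app = Flask(__name__)
hops = hs.Hops(app)

# In-memory store; fine for a single process, not shared across workers.
STORE = {}

@hops.component(
    "/upload",
    name="Upload",
    inputs=[hs.HopsNumber("Data", "D", "Numbers to store",
                          access=hs.HopsParamAccess.LIST)],
    outputs=[hs.HopsString("Id", "Id", "Unique id for the stored data")],
)
def upload(data):
    # Step 1: stash the data and hand back a small token.
    key = str(uuid.uuid4())
    STORE[key] = data
    return key

@hops.component(
    "/process",
    name="Process",
    inputs=[hs.HopsString("Id", "Id", "Id returned by /upload")],
    outputs=[hs.HopsNumber("Result", "R", "Result of processing")],
)
def process(key):
    # Step 2: the heavy work happens here, without re-sending the data.
    return sum(STORE[key])

if __name__ == "__main__":
    app.run(debug=True)
```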

If that doesn’t work then we’ll have to modify Hops to work in a different way with the server.

Hi Steve,

Thank you for the reply. Your strategy is to keep the time spent uploading data from counting against the overall response time of the application, right?
Reading this document (Request Timeout | Heroku Dev Center), particularly this part:
“The countdown for this 30 second timeout begins after the entire request (all request headers and, if applicable, the request body) has been sent from the router to the dyno. The request must then be processed in the dyno by your application, and a response delivered back to the router, within 30 seconds to avoid the timeout.”

Does Hops handle the upload of data as part of the request, or as a separate process?

For uploading big files they suggest using Amazon S3 buckets instead:

" Many web applications allow users to upload files. When these files are large, or the user is on a slow internet connection, the upload can take longer than 30 seconds. For some types of web applications that block requests, it can result in hitting the H12 request timeout. For this we recommend directly uploading to S3."

Then they suggest moving the processing to a background task:

“If your server requires longer than 30 seconds to complete a given request, we recommend moving that work to a background task or worker to periodically ping your server to see if the processing request has been finished. This pattern frees your web processes up to do more work, and decreases overall application response times.”

Which I’m not sure is compatible with Hops.
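Outside of Hops, the pattern they describe would look roughly like this in plain Flask (a sketch with made-up endpoint names, using an in-process thread only as a stand-in for a real worker queue like RQ or Celery):

```python
import threading
import uuid

from flask import Flask, jsonify

app = Flask(__name__)
JOBS = {}  # job id -> {"done": bool, "result": ...}

def slow_work(job_id, data):
    # Stand-in for the long computation; set the result before the flag
    # so a poller never sees "done" without a result.
    JOBS[job_id]["result"] = sum(data)
    JOBS[job_id]["done"] = True

@app.route("/start")
def start():
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"done": False, "result": None}
    threading.Thread(target=slow_work, args=(job_id, [1, 2, 3])).start()
    # Returns immediately, well under the 30-second limit.
    return jsonify(id=job_id)

@app.route("/status/<job_id>")
def status(job_id):
    # The client pings this endpoint until "done" is true.
    return jsonify(JOBS[job_id])
```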

But maybe Heroku is not the right platform for our job. We could use AWS instead. If we set up an EC2 instance with a licensed Rhino 7 installation on it, could we get better results calling rhino.compute from CPython on that platform?

Best regards,

Fernando