We are working with a server where we get transfer speeds of over 100 MB/s, but saving a 240 MB document from within Rhino takes 25 seconds. If I save it to the local SSD under a new name and then copy that file to the server, it only takes 2 seconds, so the write speed is fine. The computers are linked through a Gigabit switch.
Could Rhino write the file to memory prior to saving it to the network location to help speed this up?
I know it is a problem for users who work on wireless LANs as well.
I was thinking more along the lines of saving to a scratch disk (SSD) and then moving the file to the desired destination. I can make a proof-of-concept Python script that does that, but it wouldn't be a good solution, as the "last files used" list wouldn't be updated, among other things.
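For what it's worth, here is a minimal sketch of what I mean, using rhinoscriptsyntax; the scratch and network paths are hypothetical, and the scripted `-_SaveAs` call may need tweaking for your Rhino version:

```python
import os
import shutil
import rhinoscriptsyntax as rs

# Hypothetical paths -- adjust to your own setup.
SCRATCH_DIR = r"C:\Scratch"           # local SSD scratch folder (assumption)
NETWORK_DIR = r"\\server\projects"    # target network share (assumption)

def save_via_scratch():
    # Fall back to a placeholder name if the document is unsaved.
    name = rs.DocumentName() or "untitled.3dm"
    if not name.lower().endswith(".3dm"):
        name += ".3dm"
    scratch_path = os.path.join(SCRATCH_DIR, name)

    # Save quickly to the local SSD first via the scripted SaveAs command.
    rs.Command('-_SaveAs "{}" _Enter'.format(scratch_path), echo=False)

    # Then move the finished file to the network share in one go.
    shutil.move(scratch_path, os.path.join(NETWORK_DIR, name))

save_via_scratch()
```

Note that `-_SaveAs` repoints the open document at the scratch path, which is exactly why the recent-files list (and the document's own path) would end up wrong.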
Are you actually saying it takes more time to save directly over the network than it does to save to the SSD and then copy?
The extra time is probably lost because packets are sent back and forth for parity checks. So if you want Rhino to save to a server by first saving to the SSD and then copying, and that is indeed faster, you may have to write a script.
Alternatively, vanilla Rhino could adopt a saving process that actively avoids unnecessary data transfer over the network:
Get the file ready locally, then send it to the network location in one go.
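Roughly, the idea looks like this in plain Python (a sketch only, not Rhino's actual internals; the `serialize` callback and paths are hypothetical):

```python
import io
import shutil

def save_in_one_go(serialize, network_path):
    """serialize: a callable that writes the document into a file-like object.
    Hypothetical helper -- Rhino's real save pipeline is not exposed like this."""
    buf = io.BytesIO()
    serialize(buf)   # build the whole file in RAM first
    buf.seek(0)
    # One large sequential write to the network location, in big chunks,
    # instead of many small writes with network round trips in between.
    with open(network_path, "wb") as f:
        shutil.copyfileobj(buf, f, length=4 * 1024 * 1024)  # 4 MB chunks

# Example usage with dummy bytes standing in for the document contents:
save_in_one_go(lambda b: b.write(b"fake 3dm bytes"), r"\\server\projects\model.3dm")
```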