Background Save in Rhino - possible?

Hi RMA,

Since 2012 Photoshop has had a feature called Background Save that lets users continue working on a file while the document is saving. For working with large files it instantly became a huge time saver (you get a progress bar in the status bar while saving is going on, but all else works as usual).

Is there anything that would prevent having Rhino do the same thing? Again, most of our Rhino files are large and waiting for save (and we save often) adds up to a decent amount of time, daily. Obviously it would also help with the AutoSave wait.

What do you think?

–jarek

10 Likes

Yeah! Something like the Rhino-experience on the Mac side :slight_smile:

Philip

Oh, Rhino for Mac can do this already ??

It’s a feature of OS X and MacOS.
It’s not anything special we did in the Mac Rhino development.
I hope Microsoft does something similar in Windows if they can navigate past the possible patent issues.
It’s not a perfect system, but I think it is a big improvement over the old file saving system we have had up until now.

1 Like

Hi John - the Windows version of Photoshop has had it for a while. Yes, they are very different software, but at the core / file-writing level I was hoping they are not so dissimilar as to rule out this super useful feature.
Is it a major ‘not possible the way Rhino handles A, B, C’, or something doable with some extra development effort?

1 Like

Since I’m not a developer, I have no way of gauging this.
The change in Mac Rhino happened because of a change in the OS.
If it happens on the Windows side, I suspect it will be a similar thing.
The best use of our development efforts is in relation to surfacing and modeling.
I can’t imagine Rhino users would choose a Photoshop-like background saving feature over better meshing, shelling, fillets, Booleans, subdivision tools, etc.

John - the Photoshop-like background save feature was only an example of another major application that has it implemented and deals with potentially larger files. Every single user saves. Every single user cares about saving time. That’s why I thought it may be relevant to bring this up. Improving software speed and efficiency is, in my book, as important as developing new features.

cheers,

–jarek

1 Like

My hope here is to see if this gains any traction with users. If it does and is generally seen as more important than other things, we certainly will investigate it. Who knows?

Got it, thanks John.

Ok Users: who does not want to wait while the file is saving? :wink:

(always wondering if you guys count ‘likes’ - seems like not the best way in this forum to vote for supporting some features…)

1 Like

Me for one. I am currently working with large (~0.5 to 1 GB) terrain files for milling and, what’s worse, I have to save them to a network drive. I have a good save reflex, but every time I do it takes 90 seconds or more to save. And I am working with some crash-prone stuff, so I like to save often, but it’s just not possible. Lost about a half hour’s work yesterday.

–Mitch

You’ve asked a misdirecting question.

I agree. No one wants to ever be inconvenienced by saving.
But, on the scale of what might be possible, how important is that, and what would you be willing to put off to get it?

In other words, what are the three most important improvements we could make to Rhino that would make it work better for you?

I always admire how McNeel handles improvements in parallel tracks of big new features (you name it - SubD, better Mesh handling, improved Vport display quality, etc.) and the little ones that come as suggestions from users dealing with Rhino nuances for 10-12 hours a day, every day, and over time make Rhino a very reliable and well-thought-of product. I would think that the Background Save suggestion falls into the second category. Personally, the small improvements often come first for me, as I always prefer a solid, stable and streamlined product vs. a lot of new stuff that does not work well. Not saying that we don’t need / want big new things in Rhino, but they should not be used as a trade-off for small improvement suggestions. Not being a developer myself, I am trying to gauge if it is even possible with Rhino and, if so, whether it is as big a project as a new mesher or not that major an effort. Everyone works differently and would name very different improvements they need, but in this case, all of us save.

–jarek

3 Likes

So there are two things here that are possible factors. One is the time it takes to actually convert the loaded model to a stream of bytes that would represent a valid 3dm file, and the other is the time it takes to write those bytes to the target location.

It would be (from a programmatic point of view) reasonably straightforward to speed up the second problem. A file could be written to memory first (or even a RAM disc or something), which would be very fast, and then from there it would be copied slowly onto a drive or USB stick or whatever. Such an optimisation might make a huge difference if the target volume is slow. However, modern hard drives are pretty darn fast when it comes to pumping bytes, so for most people, in most cases, this wouldn’t matter at all.
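A minimal C# sketch of that split; the `composeFile` delegate is a hypothetical stand-in for whatever produces the 3dm bytes, not an actual RhinoCommon call:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

static class BackgroundWrite
{
    // Compose in memory on the calling thread (fast), push the bytes to the
    // possibly slow target volume on a background thread.
    public static Task SaveAsync(Func<byte[]> composeFile, string targetPath)
    {
        byte[] bytes = composeFile();            // UI only blocks for this part
        return Task.Run(() =>
        {
            string tmp = targetPath + ".tmp";
            File.WriteAllBytes(tmp, bytes);      // slow: network drive, USB stick...
            File.Copy(tmp, targetPath, true);    // only replace the old file once
            File.Delete(tmp);                    // the new one is fully written
        });
    }
}
```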

The first problem affects everyone equally. It just takes time to serialize runtime objects into byte arrays, and the more objects you have the more time it will take. If the serialization is supposed to happen in the background, then the obvious problem to be overcome here is that you might be changing the data that is supposed to be saved.

If you move object #854 in the model before the save process gets around to serializing it, should it just save the new object location, should it try to get the original object out of the undo buffer, or is there a third option?
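One common way to sidestep that question is to snapshot the state synchronously and serialize the snapshot in the background; edits made afterwards touch the live document, not the frozen copy. A toy sketch, with a made-up `ModelObject` record standing in for real document objects (this is not Rhino API):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Hypothetical stand-in for a document object; not a Rhino type.
record ModelObject(int Id, double X, double Y, double Z);

static class SnapshotSave
{
    public static Task<byte[]> SerializeAsync(IReadOnlyList<ModelObject> live)
    {
        ModelObject[] frozen = live.ToArray();   // cheap, done on the UI thread
        return Task.Run(() =>                    // expensive part runs later
        {
            var sb = new StringBuilder();
            foreach (var o in frozen)            // moving object #854 now edits
                sb.AppendLine($"{o.Id} {o.X} {o.Y} {o.Z}");  // 'live', not 'frozen'
            return Encoding.UTF8.GetBytes(sb.ToString());    // toy "file format"
        });
    }
}
```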


begin meandering rant:

I’ve been thinking about improving saving times for GH2, especially since Grasshopper has an autosave feature which potentially runs very often. Even if it takes a relatively short time - say 1 second - to autosave a file, that still means a one-second delay every time you add a component, change a wire, or start dragging a slider. I would very much like to try the following approaches:

  1. Compose the file in memory and write to disk in a background thread (see above).
  2. Throttle the compression level during autosave, resulting in larger files that are nevertheless composed faster. In fact the compression stage can happen entirely in a background thread because the file data is already known at this point.
  3. Figure out a way to cache those bits of a file that take a long time to serialize, for example a list of 1000 internalised Breps. I only have to serialize them once and then reuse those bytes until the list changes, which is unlikely to happen often for internalised data. This is especially good news since internalised data is by far the greatest bottleneck in a save operation (a rough sketch follows below).
  4. Figure out a way to only save changes. This could potentially result in really small files, but it also makes the recovery process less robust, not to mention that all the autosave files would need to be kept in order to redo all the changes since the last proper save.

Without having written or profiled this in the slightest, I suspect that a combination of (1) and (3) is going to yield the best return on investment.
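To make (3) a bit more concrete, here is a rough sketch of a per-chunk byte cache; the `chunkId` and dirty flag are assumptions about how a host might track changes, not existing Grasshopper API:

```csharp
using System;
using System.Collections.Generic;

class ChunkCache
{
    private readonly Dictionary<uint, byte[]> _bytes = new Dictionary<uint, byte[]>();

    // Re-serialize a chunk only when it changed; otherwise hand back the
    // bytes produced by an earlier save.
    public byte[] GetOrSerialize(uint chunkId, bool dirty, Func<byte[]> serialize)
    {
        if (dirty || !_bytes.TryGetValue(chunkId, out byte[] cached))
        {
            cached = serialize();
            _bytes[chunkId] = cached;
        }
        return cached;
    }
}
```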

1 Like

Hi David,

thanks for the in-depth explanation and ideas on how this may work - I understand that none of that is currently happening when saving a Rhino file, but there is potential and room for improvement.

I have a fast SSD drive and even when saving locally, the 300-500 MB files that are typical of our workflow take a while to save. In Photoshop, after pressing Ctrl+S you can continue as if nothing happened. This would be the best from the user standpoint.

The third option may be a prompt / message if any change happens to not-yet-saved objects, similar to breaking the history, so we can decide if it should use the new version or abandon the change and wait.

I like the ‘write all to memory, quick’ approach, so that the writing to the HD or network drive can happen from there in the background.

Hope this can be considered in future Rhino improvements. I didn’t even think about it before it was introduced back then in PS, but now every piece of software that does NOT do it seems to be slowing us down a bit :wink:

–jarek

1 Like

“Never optimize before measuring”

As it turns out on the Mac, (1) and (2) are the most important and make file saves fast enough that the hard work of implementing (3) and (4) are unnecessary. The Mac file saving framework from Apple provides most of the scaffolding for (1). Mac Rhino writes the 3DM file to a memory buffer and Apple code takes care of writing that to disk in a background thread. This makes it possible to work on a network drive or even a thumb drive and not notice any delays. Compressing data while creating a 3DM memory file takes 90% of the elapsed time, so turning off data compression (item 2) is a huge win.
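For what it’s worth, the same knob exists on the .NET side; a hedged sketch with System.IO.Compression (purely illustrative - this is not how Rhino actually writes 3dm data):

```csharp
using System.IO;
using System.IO.Compression;

static class SaveCompression
{
    // Autosaves take the fast setting; manual saves keep the smaller files.
    public static byte[] Compress(byte[] raw, bool autosave)
    {
        var level = autosave ? CompressionLevel.Fastest : CompressionLevel.Optimal;
        using (var output = new MemoryStream())
        {
            using (var deflate = new DeflateStream(output, level))
                deflate.Write(raw, 0, raw.Length);
            return output.ToArray();   // valid even after the streams are closed
        }
    }
}
```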

Apple’s Auto Save has a couple more features that make it invisible. Their Auto Save code waits until the application is idle - no mouse or keyboard activity at all for 15 - 20 seconds. It then asks the application if it is OK to Auto Save. If Mac Rhino is in the middle of a command, for example, Rhino answers no. If Rhino answers yes, the save to memory portion usually takes under a second, and, because Rhino is already idle, it almost always happens when Rhino continues to be idle. The background write to disk can take as long as it wants.
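A generic sketch of that idle-gating idea (this is not Apple’s API; `isIdle`, `saveToMemory` and `writeToDisk` are hypothetical hooks the host application would supply):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class AutoSaver
{
    // Fire a save only when the host reports it is idle and no command is running.
    public static async Task RunAsync(Func<bool> isIdle, Func<byte[]> saveToMemory,
                                      Action<byte[]> writeToDisk, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            await Task.Delay(TimeSpan.FromSeconds(15), ct);
            if (!isIdle())                                // input or a running command:
                continue;                                 // try again later
            byte[] snapshot = saveToMemory();             // usually well under a second
            _ = Task.Run(() => writeToDisk(snapshot));    // disk write can take its time
        }
    }
}
```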

Apple put a lot of work into developing the Auto Save mechanism, and it still took them a year to get it working right in the field. There are lots of tiny details to get right. This would be a pretty big task to implement from scratch and get working perfectly.

2 Likes

Hi Jarek

I see that John, David and Marlin already answered your question… Yeah, it’s nice when you never have to wait for a save.
Another - even greater - advantage of OS X is ‘Versions’, which I see as true modelling history. These two big features (and many smaller ones) add to the positive Mac Rhino experience (compared to Win Rhino - which I also love, though) :slight_smile:

Philip

I think this is an excellent request. As my projects go on, file sizes get larger as iterations increase in complexity. With autosave set to every 20 minutes, the machine will hang for up to 30 seconds while writing to SSD. That means that my concentration is broken three times per hour, and I lose an average of a minute and a half of productivity, assuming I log back on like a machine when the UI becomes active again. Would I take a CPU that was 1/30th slower but had no breaks in continuity, no stuttering? You better believe it. I’m not about to switch software over the problem, but a 1/30th cut in productivity of a given piece of software certainly is significant, and rides tirelessly over all other efficiency efforts made by McNeel’s diligent software engineers.

I think referring to this issue as a feature on par with better meshing/shelling etc is doing the OP a disservice. This software is great, been using it for years, but when files get big it doesn’t matter how good the meshing or shelling is, I am doing NOTHING on this file for at least 1/30 of my time during the latter phase of projects when time pressure is more intense. If this software was a Cadillac, the smooshiest suspension and best AC wouldn’t completely hide the fact that the car shifts itself to neutral for 30 seconds three times an hour. The more I think about it, the more ridiculous it is that we have been living with it. To hear that the Mac version has this functionality vaults it to the most valuable feature differentiation in favor of the Mac version for me, but it’s not nearly enough to switch unfortunately (this typed via a bootcamped MBP).

Very good point, and I forgot to mention this in the original post: apart from the time lost waiting at each save/autosave, it often becomes a distraction/loss of focus. With Background Save, one can stay ‘in the zone’ and cruise along with the task with no interruptions, barely noticing the habitual Ctrl+S press.
Long save/autosave wait also often gets in the way of real-time client presentations or even in-house discussions and training sessions…

So - a few more reasons to consider implementing it in Rhino.

  1. Given that the biggest files without some large object internalized within them are ~3000 KB (from all my files ever), I don’t think it makes any significant difference whether you’re using a MemoryStream or a FileStream.

  2. Never did anything with compression, but with such small files, is it worth the effort?

  3. Recently learned that the default binary serializer in .NET is really slow. Switching to the Google protobuf library for serialization speeds everything up by up to 100 times. The fastest native .NET way to “serialize” so far is Buffer.BlockCopy, which in my case was 2x as fast as protobufs (a minimal sketch follows this list). Some benchmarks.

  4. This is a really nice idea which links to one of the recent GH forum discussions on versioning. If the whole definition can be stored as an XML file, isn’t that “kind of” straightforward?
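The sketch promised in point 3: Buffer.BlockCopy treats the array as raw bytes, so there is no per-item serialization work at all (plain .NET, nothing Grasshopper-specific):

```csharp
using System;

static class RawBytes
{
    // "Serialize" a double[] by copying its raw bytes; no per-item work.
    public static byte[] FromDoubles(double[] values)
    {
        var bytes = new byte[values.Length * sizeof(double)];
        Buffer.BlockCopy(values, 0, bytes, 0, bytes.Length);
        return bytes;
    }

    public static double[] ToDoubles(byte[] bytes)
    {
        var values = new double[bytes.Length / sizeof(double)];
        Buffer.BlockCopy(bytes, 0, values, 0, bytes.Length);
        return values;
    }
}
```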

If the file is small, compression should be fast; if the file is large, compression should be worthwhile. So I don’t see a reason not to use compression.

Are you talking about the Writer and Stream classes, or the framework ISerializable mechanism? [EDIT: the link you provided answers that question.]

Xml has its benefits, but there are also huge drawbacks. Amongst its problems are such elements as:

  1. Xml files are waaaay larger than binary files.
  2. Numbers in Xml are culture-dependent (comma vs. period). This makes it difficult to exchange xml between computers running different locales (see the sketch after this list).
  3. Lack of type-safety.
  4. Xml is liable to get munged by web-browsers and e-mail clients. Before *.gh became the default instead of *.ghx, I used to get a lot of tech-support from people whose ghx files were no longer valid due to some weird changes made by some software which handled the files. I can usually repair them manually, but it’s not a lot of fun.
  5. An almost fanatical devotion to the Pope.
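Regarding drawback (2), the usual workaround when XML is unavoidable is to pin number formatting to the invariant culture; a small sketch:

```csharp
using System.Globalization;

static class XmlNumbers
{
    // Always write and read numbers with the invariant culture so "3.14"
    // never becomes "3,14" on a machine with a different locale.
    public static string Write(double value) =>
        value.ToString("R", CultureInfo.InvariantCulture);

    public static double Read(string text) =>
        double.Parse(text, NumberStyles.Float, CultureInfo.InvariantCulture);
}
```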
1 Like