So there are two things here that are possible factors. One is the time it takes to convert the loaded model into a stream of bytes that represents a valid 3dm file, and the other is the time it takes to write those bytes to the target location.
It would be (from a programmatic point of view) reasonably straightforward to speed up the second part. A file could be written to memory first (or even a RAM disk or something), which would be very fast, and then copied from there slowly onto a drive or USB stick or whatever. Such an optimisation might make a huge difference if the target volume is slow. However, modern hard drives are pretty darn fast when it comes to pumping bytes, so for most people, in most cases, this wouldn't matter at all.
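In Python-flavoured pseudocode the idea might look something like this; `serialize` is a hypothetical stand-in for whatever actually produces the 3dm bytes, since the real serializer isn't shown here:

```python
import io
import threading

def save_in_background(serialize, path):
    """serialize: a callable that writes the file contents to a stream.
    (Hypothetical stand-in for the real 3dm serializer.)"""
    # Fast part: compose the entire file in memory first.
    buffer = io.BytesIO()
    serialize(buffer)
    data = buffer.getvalue()

    # Slow part: pump the bytes onto the (possibly slow) target volume
    # on a background thread, so the caller is not blocked by disk speed.
    def write_to_disk():
        with open(path, "wb") as f:
            f.write(data)

    thread = threading.Thread(target=write_to_disk, daemon=True)
    thread.start()
    return thread  # callers can join() if they need to know it finished
```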
The first problem affects everyone equally. It simply takes time to serialize runtime objects into byte arrays, and the more objects you have, the more time it takes. If the serialization is supposed to happen in the background, then the obvious problem to overcome is that you might be changing the very data that is supposed to be saved.
If you move object #854 in the model before the save process gets around to serializing it, should it save the new object location? Should it try to get the original object out of the undo buffer? Is there a third option?
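One candidate for a third option would be to snapshot the serializable state synchronously at save time and hand only the snapshot to the background thread. A minimal sketch, with `serialize_snapshot` as a hypothetical name for the slow serialization step:

```python
import copy
import threading

def autosave(objects, serialize_snapshot, path):
    """objects: the live model objects.
    serialize_snapshot: hypothetical callable turning snapshots into bytes."""
    # Cheap-ish, synchronous copy of the state that should be saved.
    # After this line the user may move object #854 freely; only the
    # live model changes, never the snapshot.
    snapshot = [copy.deepcopy(obj) for obj in objects]

    def serialize_and_write():
        data = serialize_snapshot(snapshot)  # the slow part, off-thread
        with open(path, "wb") as f:
            f.write(data)

    threading.Thread(target=serialize_and_write, daemon=True).start()
```

Of course this only pays off if copying the objects is much cheaper than serializing them, which is not a given.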
begin meandering rant:
I've been thinking about improving saving times for GH2, especially since Grasshopper has an autosave feature which potentially runs very often. Even if it takes a relatively short time (say, one second) to autosave a file, that still means a one-second delay every time you add a component, change a wire, or start dragging a slider. I would very much like to try the following approaches:
1. Compose the file in memory and write it to disk in a background thread (see the sketch above).
2. Throttle the compression level during autosave, resulting in larger files that are nevertheless composed faster. In fact, the compression stage can happen entirely in a background thread because the file data is already known at that point (see the first sketch after this list).
3. Figure out a way to cache those bits of a file that take a long time to serialize, for example a list of 1000 internalised Breps. I only have to serialize them once and can then reuse those bytes until the list changes, which is unlikely to happen often for internalised data. This is especially good news since internalised data is by far the biggest bottleneck in a save operation (see the second sketch after this list).
4. Figure out a way to only save changes. This could potentially result in really small files, but it also makes the recovery process less robust, not to mention that all the autosave files would need to be kept in order to replay all the changes since the last proper save.
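For approach (2), the idea is just to pick a cheaper compression level when the save is an autosave. A minimal sketch, using zlib purely as a stand-in for whatever compressor the real file pipeline uses:

```python
import zlib

def compress_for_save(data: bytes, autosave: bool) -> bytes:
    # Autosaves trade file size for speed; manual saves compress hard.
    # zlib levels run from 1 (fastest) to 9 (smallest).
    level = 1 if autosave else 9
    return zlib.compress(data, level)
```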
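And for approach (3), a minimal sketch of reusing serialized bytes until the underlying data changes; the chunk id and version scheme here is invented for illustration:

```python
_chunk_cache = {}  # chunk_id -> (version, serialized bytes)

def serialized_chunk(chunk_id, version, serialize):
    """Return cached bytes for a chunk, reserializing only when its
    version has changed. serialize: hypothetical expensive callable."""
    cached = _chunk_cache.get(chunk_id)
    if cached is not None and cached[0] == version:
        return cached[1]               # cache hit: reuse the old bytes
    data = serialize()                 # cache miss: do the expensive work
    _chunk_cache[chunk_id] = (version, data)
    return data
```

Since internalised data rarely changes between autosaves, most saves would hit the cache for exactly the chunks that dominate the save time.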
Without having written or profiled this in the slightest, I suspect that a combination of (1) and (3) is going to yield the best return on investment.