I’m not sure I understand what you’re doing… Why are you “eliminating” them from the main file? As far as I can tell, the main file is simply made up of inserted linked blocks. Therefore, if you change the contents of the files that are being linked, then the results should show up in the main file.
For example: I open up the file “.750-16-Hexhead-1.5inL.3dm”, I modify things however I want (e.g. mesh settings), I save the file… I then open up the main file, and every object that is linked to “.750-16-Hexhead-1.5inL.3dm” will now show the changes I made.
What is it you’re trying to delete in the main file? Tell me exactly what you’re doing…
We’ve talked internally about things like this… something like keeping track of multiple face lists and switching between them based on LOD settings and current frame rates. However, I’m not sure when we’ll be able to get to it. It’s not a simple matter of just drawing every “Nth” vertex, because you still want to maintain the mesh topology; otherwise your meshes could end up displaying only a small portion of the object and losing their visual structure, since the order of the vertices does not define the order of the topology… if you know what I mean.
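To make that point concrete, here is a purely conceptual sketch in plain Python (all names invented for illustration, not anything from Rhino’s internals): a face list indexes into the vertex array, so keeping only every Nth vertex orphans most faces and only a fragment of the object would draw, whereas holding several face lists over the same vertices keeps the topology intact and lets the display simply switch lists by distance or frame rate.

```python
# Conceptual sketch only (plain Python, made-up data): why lower detail has to
# come from a reduced *face list*, not from drawing every Nth vertex.

# A mesh is a vertex array plus faces that index into it.
vertices = [(x, 0.0, 0.0) for x in range(9)]               # 9 dummy vertices
faces_full = [(0, 1, 2), (2, 3, 4), (4, 5, 6), (6, 7, 8)]  # full-detail triangles

# Naive idea: keep every 2nd vertex.  Most faces now reference vertices that
# no longer exist, so almost nothing of the object would draw.
kept = set(range(0, len(vertices), 2))
drawable = [f for f in faces_full if all(i in kept for i in f)]
print(drawable)   # [] -- nothing survives; the mesh falls apart visually

# LOD by face lists instead: every list indexes the *same* vertex array,
# so topology stays valid and the renderer just switches lists.
face_lists = {
    "high": faces_full,
    "low":  [(0, 4, 8)],   # a coarse face list covering the same extent
}

def pick_face_list(distance, fps, far=100.0, min_fps=20.0):
    """Hypothetical switch: use the low-detail list when the object is far
    away or the frame rate has dropped below a threshold."""
    if distance > far or fps < min_fps:
        return face_lists["low"]
    return face_lists["high"]
```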
@nowforge, is there any chance you can make the assembly files you uploaded for Jeff available to others (Dropbox, etc.)? It would be easier to test them out and give meaningful advice based on what Rhino currently has to offer (still hoping for performance improvements and fixes in the future).
I deal with very large and complex files at architectural scale on a regular basis, and there may be some tricks to try.
Just a thought: how about utilizing the current ReduceMesh functionality to create low-detail meshes? They should probably be pre-calculated and stored somewhere. I suppose not on the object in the document, as that would increase file size… or, if ReduceMesh never creates new vertices, the low-detail meshes could re-use the base vertices and their attributes with only a new, reduced set of faces, making it feasible to store two meshes (high and low quality) on one object without too much overhead? (A rough sketch of the idea follows below.)
That will probably be much more complex than it seems …
-Willem
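For what it’s worth, here is a rough, untested sketch of the precomputation half of that idea, written as a Rhino Python script against RhinoCommon. One assumption is worth flagging: ReduceMesh does move vertices in practice, so this stores an independent low-detail copy (parked on a made-up "LOD_low" layer) rather than sharing the base vertices as Willem hopes, and the target face fraction is an arbitrary choice.

```python
# Sketch, not a finished tool: precompute a low-detail copy of each selected
# mesh with RhinoCommon's Mesh.Reduce and add it to the document on its own
# layer, so a display script could swap to it later.
import rhinoscriptsyntax as rs
import scriptcontext as sc

def make_lod_copies(target_fraction=0.1):
    ids = rs.GetObjects("Select meshes to make LOD copies of", rs.filter.mesh)
    if not ids:
        return
    for obj_id in ids:
        mesh = rs.coercemesh(obj_id)
        if mesh is None:
            continue
        low = mesh.DuplicateMesh()
        target_faces = max(4, int(mesh.Faces.Count * target_fraction))
        # allowDistortion=True, accuracy=10 (slowest/most accurate), normalizeSize=True
        if low.Reduce(target_faces, True, 10, True):
            layer = rs.AddLayer("LOD_low")        # hypothetical holding layer
            new_id = sc.doc.Objects.AddMesh(low)
            rs.ObjectLayer(new_id, layer)
    sc.doc.Views.Redraw()

make_lod_copies()
```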
Another thought: is drawing colored point clouds less costly than drawing meshes with an equal number of vertices? If so, maybe that’s another way to increase speed. Each render mesh gets ‘special’ vertex colors; when the object is far away and/or while a view is being transformed, only the colored vertices are drawn. …
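Whether that is actually cheaper to draw is exactly the open question, but as a starting point for testing, here is a hedged Rhino Python sketch that builds a colored point-cloud proxy from a mesh’s vertices, carrying the vertex colors across when the render mesh has them and falling back to a flat color when it doesn’t:

```python
# Sketch only: build a point-cloud proxy from a mesh's vertices, copying the
# mesh's vertex colors when they exist.  Whether drawing this really beats
# drawing the mesh is the question raised above.
import rhinoscriptsyntax as rs
import scriptcontext as sc
import Rhino
import System.Drawing as sd

def pointcloud_proxy(fallback_color=sd.Color.Gray):
    obj_id = rs.GetObject("Select a mesh", rs.filter.mesh)
    if not obj_id:
        return
    mesh = rs.coercemesh(obj_id)
    cloud = Rhino.Geometry.PointCloud()
    has_colors = mesh.VertexColors.Count == mesh.Vertices.Count
    for i in range(mesh.Vertices.Count):
        pt = Rhino.Geometry.Point3d(mesh.Vertices[i])
        color = mesh.VertexColors[i] if has_colors else fallback_color
        cloud.Add(pt, color)
    sc.doc.Objects.AddPointCloud(cloud)
    sc.doc.Views.Redraw()

pointcloud_proxy()
```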
As Willem says, using ReduceMesh, even though it is slow, or re-using the vertices to calculate a new mesh from them, is what I envision too. It would take time to calculate, but it could be an option, or a background process. And maybe the result should be stored in the file, or not, and thus recalculated at file open.
When I delete an object in a 2nd level linked file, it still is listed
in the main file. This is the problem … I cannot seem to eliminate
the objects that are “slowing down” the main file. I cannot understand
why the changes in a third level nested linked file do not cascade to
the main file.
I am working on assembling another file of objects to show the problems
being experienced with BlockManager. We have a file that is not that
large (approx 1 GB) but the nesting is intricate (reaches 4 levels). We
find that BlockManager freezes and we have to exit Rhino5 (shut
everything down). This file will not work on any of our machines.
I built a workstation 18 months ago that included a then-new AMD FirePro W7000. This card appears to work pretty darn fast and with great reliability. It is also dead quiet, which would otherwise have been noticeable in my system, which runs water cooling and 15 fans at low speed for both cooling and quiet. i7-3930K @ 4.1 GHz, 32 GB RAM, two SSDs in RAID 0, and two cloned mechanical 1 TB HDs for backup. Quite fast and quiet. My temps on all cores at 100% average about 58 °C in summer conditions and a few degrees cooler in winter. But that W7000 is quite a nice card. Cheers, Rob Ladd
I have been told that there are problems with the FirePro W7000
regarding long-term reliability (that is the last thing we need in a
production environment!). We are using an i7-2600 @ 3.4 GHz with 32 GB
RAM (standard cooling, no issues). Our research indicates that the
linked-block addressing algorithms are causing the "roadblock".
With a 2 GB file, there is more than enough RAM overhead; CPU
processing seems to be the problem.
As I stated earlier, we actually get better performance with a
GeForce 200 with 512 MB than with a GeForce 660 with 3 GB of GDDR5. This
seems to indicate that the slowdown happens before the graphics card;
we are still looking for a solution.
Talk to you soon
Ed
I haven’t heard that there have been long-term problems; I’ve had mine for a
year and a half without any problems. When it comes to OpenCL work, it’s
hard to beat; in all other respects it seems to perform somewhere between a
Quadro K4000 and a K5000. In that regard, and because of its recommendations
from others in the industry, I sprang for it and am happy I did. I am anxious
to see the new AMD W7100, which is based on a new AMD processor. Might be
interesting??? But then, it appears that any decent card, whether it be
workstation or gaming, is sufficient for work in Rhino, and even Flamingo, where
it’s processor speed that matters and, for rendering, the number of cores along with it.
Thanks for your input; I look forward to carrying on the conversation.