Thanks Jim for the reply, and for jumping into the discussion. I'm just trying to help development by giving user feedback (be patient, my English is weak).
> To be on the safe side, set your units and tolerances so that you're using maybe 7 or 8 max.
That is the problem I want to point out.
> which is ludicrous, but it does ensure fillets
And by fillets, you also mean a lot of other Rhino functions.
Sometimes we need to increase Rhino's tolerances for Rhino's own reasons, even when our project does not need it (for example a car designer who will make a 1:5 clay model, a game developer, an RC model, etc.). What I usually do is keep a second Rhino open and do some operations at a different scale or tolerance, because those objects are way too small or too big for the project. One example is filleting, trimming and shrinking a bolt that goes inside a 32 m sailplane glider project: I am making the airfoil profile, and the bolt is very important. Another example was my Ferrari shape (Rhino 3), where handling the engine parts meant every millimetre counts. Or making a 150-foot sailboat and needing a very precise blade airfoil, and so on.
There is always a solution, and if it looks like there is a limit, then look for a new solution at 90° to the problem, in a new plane or dimension.
I will try to put down some examples and (in my ignorance of how Rhino is coded) some solution ideas. CPU power is not a limit: I can imagine Rhino 7 running on 64 cores that sit idle, plus cloud services on offer, waiting for tasks. Or you could do more accurate calculations in the background. For example, 64-bit is not the limit; in some space games you can combine two doubles or two simple floats to increase accuracy or precision (but I do not think that is a good solution here).
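Just to illustrate what I mean by combining two doubles: this is only a rough sketch of the well-known "double-double" / two-sum trick that some games and numerical libraries use, not something Rhino actually does.

```python
# Rough sketch of "double-double" arithmetic: a value is stored as an
# unevaluated sum (hi + lo) of two floats, roughly doubling the precision.
# Illustration of the idea only, not Rhino code.

def two_sum(a, b):
    """Add two floats and also return the rounding error that was lost."""
    s = a + b
    v = s - a
    err = (a - (s - v)) + (b - v)
    return s, err

def dd_add(x, y):
    """Add two double-double numbers given as (hi, lo) pairs."""
    hi, err = two_sum(x[0], y[0])
    lo = err + x[1] + y[1]
    hi, lo = two_sum(hi, lo)   # renormalise so |lo| stays tiny
    return hi, lo

# 1.0 + 1e-17 disappears in a plain double, but the (hi, lo) pair keeps it:
print(1.0 + 1e-17)                       # -> 1.0
print(dd_add((1.0, 0.0), (1e-17, 0.0)))  # -> (1.0, 1e-17)
```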
Problem examples:
- I have a ship and I need to convert it to mesh polygons; the settings and the result are very different from the nuts that are in the same project.
- To make an intersection or fillet an object, the object can't be too little.
- Functions have a comfort zone. If I export IGES I need to shrink the surfaces because Alias sometimes does not pick up the trims. The problem is that if shrinking gives me overlapping points, I carry them without noticing through the whole project pipeline, and in the end I need to redo all the trims and the model because of that. As for the forum: a lot of forum threads get fixed, THANKS to Peter and you, taking novice users' time and expert reply time.
- I do not need tight tolerances in my game, but I need to add a lot of precision in Rhino to make it work. This should be done automatically by Rhino and not by the user, who more often than not does not know how that particular function works. And later, when I export, I have too many digits (including in the smaller LODs) in the transforms, which reduce my game frame rate and increase the creation time and the file size to ship. Sometimes I end up opening the file and removing all the redundant digits manually (see the rounding sketch after this list).
- Scale up or down automatically for each individual function. You need to be an expert to set up a big object, and it is a completely different setting (that you can't save) and a different experience from meshing a very small object (the fillet of a nut).
- Making a spaceship for game development in Rhino: I use Maya and Alias, and Rhino is perfect for the task of modelling a beautiful spaceship, but it has this limitation when I go down to making the nuts or the hangar containers. I end up making them in separate projects.
- If I increase (tighten) the tolerance, Rhino puts a lot of points inside intersections. Big objects become complex and heavy for no reason. Usually in the industry, the bigger the object, the less tolerance you need. Having to decide this only when starting the project is a problem: some projects will be approved and others will not, but it is not possible to simply change the tolerance along the way.
What I mean is that all that 64-bit space is not optimised, and millennials can lose time fighting with a trim.
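As a very rough illustration of the "too many digits on export" point above (a hypothetical post-processing sketch, not an existing Rhino option): rounding exported coordinates to the precision the game actually needs shrinks the file and removes the noise digits.

```python
# Hypothetical post-processing sketch: round exported coordinates to the
# number of digits the game actually needs, instead of shipping the full
# double precision Rhino used internally. The digit count is only an example.

def round_vertices(vertices, digits=3):
    """Round each (x, y, z) vertex to a fixed number of decimal digits."""
    return [tuple(round(c, digits) for c in v) for v in vertices]

# A vertex as it might be exported, with far more digits than a game LOD needs:
raw = [(1234.5678901234567, -0.0000000001234, 87.999999999999986)]
print(round_vertices(raw, digits=3))
# -> [(1234.568, -0.0, 88.0)]  smaller file, same result on screen
```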
Some possible solutions:
- Use the spare cores to do the work in the background at much greater precision, more or less how HD works.
- Simplify the visual representation that is shown to the user on screen so it always looks perfect no matter the size. That way Rhino stays fast for the user at the interface level and does the hard work in the background at a different scale/tolerance.
- Increase the level of success. Study how many times a function is called and fails in execution, and increase the success rate.
- Quality control: avoid giving the user functions that do not succeed perfectly in execution or expectation. The solution could be that Rhino scales the object up or down to fit inside the function's comfort zone (see the sketch at the end of this list), or gives a visual red indication when the object is not in tolerance (for example, overlapping points of the same curve or surface marked in red).
- Consider using a tighter tolerance internally, so that we can increase or decrease it in a non-destructive way when exporting. Substance Painter does this for textures: it works internally at 8K; you work at 1K, and at the end you scale it up if you need to.
- Change the tolerance of individual objects: smaller objects need tighter tolerance. It could be per group, per layer, or a new drag-and-drop window where you can set a specific execution tolerance. But it is better if Rhino does this automatically for us, and only when we export does it ask us how many digits we want to export with.
- The Mesh command issue can be worked around using Grasshopper, but the settings and results look different (it is better using Mesh). Consider improving the (tolerance) interface of the Mesh command for game developers and external rendering.
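For the "comfort zone" idea above, here is a minimal sketch of the manual workaround I mean, written with rhinoscriptsyntax. The factor of 100 and the choice of IntersectBreps are just assumptions for illustration; a built-in version could pick the factor automatically.

```python
import rhinoscriptsyntax as rs

# Minimal sketch of the "comfort zone" workaround: scale the small parts up,
# run the operation at the normal document tolerance, then scale back down.
# The factor and the intersection call are examples, not a proposed API.

def intersect_in_comfort_zone(brep_a, brep_b, factor=100.0):
    origin = (0, 0, 0)
    scale_up = (factor, factor, factor)
    scale_down = (1.0 / factor, 1.0 / factor, 1.0 / factor)

    rs.ScaleObject(brep_a, origin, scale_up)
    rs.ScaleObject(brep_b, origin, scale_up)

    curves = rs.IntersectBreps(brep_a, brep_b)  # runs at the document tolerance

    # bring the inputs and the result back to their real size
    rs.ScaleObject(brep_a, origin, scale_down)
    rs.ScaleObject(brep_b, origin, scale_down)
    if curves:
        rs.ScaleObjects(curves, origin, scale_down)
    return curves
```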