Rebuilding Curves - What's the point/significance?


#22

Just use ICEM or Alias and you’re good to go ; )


#23

Just use ICEM or Alias and you’re good to go ; )

I already do:) But I also use Rhino for a couple of tasks, and I actually don’t wish to copy and paste functionality just to have a third platform. Except for this one, because as I said, you can drastically improve Rhino with minimal effort, and CP reduction has a very wide range of applications in my opinion.
On the other hand, these algorithms aren’t that easy. I tried to code them myself: a least-squares-based one and a recursive one, though with only partial success. I haven’t integrated smoothing yet. If I find time I might upload them on Food4Rhino.
Under the hood, however, these are very well-implemented algorithms; I think they are above my understanding, and beyond what their authors are willing to take the time to share.
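To make the least-squares idea concrete, here is a minimal pure-Python sketch (my own toy illustration, not the implementation mentioned above): it fits one coordinate of a single-span cubic Bézier to sampled data by assembling the Bernstein basis into the normal equations and solving them. Function names are made up for the example.

```python
def bernstein3(t):
    """Cubic Bernstein basis values B0..B3 at parameter t in [0, 1]."""
    s = 1.0 - t
    return (s * s * s, 3.0 * t * s * s, 3.0 * t * t * s, t * t * t)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_cubic_bezier(params, values):
    """Least-squares fit of one coordinate of a single-span cubic Bezier.

    Solves the normal equations (A^T A) p = A^T v, where A[j][i] = B_i(t_j).
    """
    A = [[0.0] * 4 for _ in range(4)]
    rhs = [0.0] * 4
    for t, v in zip(params, values):
        B = bernstein3(t)
        for i in range(4):
            rhs[i] += B[i] * v
            for j in range(4):
                A[i][j] += B[i] * B[j]
    return solve(A, rhs)
```

For data that is exactly a cubic polynomial the fit reproduces the Bernstein coefficients; for a genuinely more complex shape the residual is the deviation you trade for fewer CPs, and this is exactly where the smoothing term (missing here) would come in.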


#24

why stop there? just use all three or why not use all available software packages. a good crack or a deep pocketbook and a few extra minutes to get into each software will bring you there i am sure.

and by the way, if you keep advertising those, then please also do so under the right tags. i am not sure if all of them are available for mac, which is quite bothersome for people who are curious about alternatives but have to endure the ignorance of some smart-ass coolios who don't even see where they are posting and have to run from one disappointment into the next :wink:

@TomTom sounds interesting, will there be a python version or what are you heading for?


#25

Alias is, like Rhino, available for OSX ; )

As far as smartasses are concerned, a look in the mirror might be warranted.

If you cannot accept a forum member’s opinion “This is one of the key functionality I miss in Rhino.” and an answer given in respect to it, you may want to consider climbing down from the tower of condescension.

Have a good day.


#26

yes i know that, but as i said not all suggested software has it.

i have absolutely no problem with opinions, that's probably a personal misconception
from some other thread i presume :wink:

a tower which seems too high for some to ever reach the floor again, as it seems.
i can remember my first encounter with you here, when you called me unprofessional
because i uttered my own opinion. but well, so shall it be, we keep throwing at each other.
let's go and have a drink and it'll all sort out again.

:beers:


#27

I somehow feel better now about the error of my ways…:wink:


#28

hopefully not too proud about it :wink:


#29

obfuscated c# :wink: , but I actually don’t know if it’ll be done soon; maybe in winter I’ll sacrifice some weekends. But right now, I only want to improve my “real life/computer life” ratio.


#30

Watch out for zero divisions, bud! :sunglasses:

// Rolf


#31

:crazy_face: i read c# is easy to deobfuscate :wink: anyway, if you decide to write it, maybe make it platform-independent. you will get a lot of extra flowers from the mac users.


#32

:crazy_face: i read c# is easy to deobfuscate

…if not obfuscated. :wink: ILSpy is the tool you need for reversing: it simply converts the intermediate language of .NET applications back into readable code. If you obfuscate, however, you apply a bunch of tricks to prevent this process: causing errors in ILSpy, renaming variables so the code is unreadable even if someone manages to reverse it, and various other things to make the result unusable. Well, maybe an open-source project might be an alternative. We will see. If I find time I could also start a thread here, and everybody is invited to participate. I know both languages very well, but all my previous work is done in C#.


#33

I don’t fear zero divisions here, since this ratio is already complex (…with a high imaginary part involved :persevere:).


(Nathan 'jesterKing' Letwory) #34

I wouldn’t bother wasting time in that sort of pretty much useless (and false sense of) security…


#35

Well, I’m not so afraid of getting reversed by the big ones (e.g. Autodesk, Dassault, McNeel); I’m pretty sure they have people and technology smarter than me, if they don’t already own such features. But you know, from my experience ILSpy makes it way too easy to look at code: even semi-professional coders, very common in engineering and CAD, are able to copy, paste, and understand my code by simply opening my libraries in ILSpy. For me it’s just a second button I have to press, but it prevents 99% of all coders from viewing and, more importantly, from understanding my code. And even if 10% are able to open it with other reflection tools, it is still much harder to reuse in other software if comments are deleted and variables are renamed to nonsense. So I’m more worried about people getting the fame for work I did without even mentioning my name than about big companies stealing it and earning money with it. Especially in the world of Grasshopper, where people apply code from others while claiming to be pros, because they randomly connect complex algorithms made by others without knowing how they work, who invented them, or who actually coded them. What’s better, switching to C/C++, or going open source?


(Nathan 'jesterKing' Letwory) #36

2 posts were split to a new topic: Source code, ip, protection


#37

You may also want to look at another technique, developed by Neil Dodgson at the University of Cambridge. Instead of reducing the number of CPs, he lets the user hide selected points as needed. The user can then move a much-reduced set of local points and let his algorithm automatically move any hidden global points as needed. His basic method just uses a knot insertion algorithm, which is closed-form and exact, without the need for approximations or smoothing. For example, Boehm’s insertion algorithm tells you exactly where to place points during knot insertion so that the underlying surface remains exactly the same. This provides the information needed to determine the hidden point locations interactively during user surface editing.
His paper is entitled “Can local NURBS refinement be achieved by modifying only the user interface?”:
http://www.sciencedirect.com/science/article/pii/S0010448515001529?via%3Dihub
I made a simple plug-in implementation of this just to test the concept. I think this method has some value and might be a good addition to Rhino, but I think it would be better if McNeel were to incorporate something like this directly within Rhino, so users don’t have to rely on plug-ins for this type of core functionality.
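To illustrate why single-knot insertion is exact: Boehm's formula only re-blends a few control points, and every point on the curve stays where it was. A minimal pure-Python sketch (a toy illustration with made-up function names, not Dodgson's or Rhino's implementation) for a degree-p B-spline with 2D control points:

```python
def find_span(t, p, x):
    """Return k with t[k] <= x < t[k+1] (clamped at the right end)."""
    n = len(t) - p - 1          # number of control points
    if x >= t[n]:               # right endpoint: use the last valid span
        return n - 1
    k = p
    while not (t[k] <= x < t[k + 1]):
        k += 1
    return k

def de_boor(t, c, p, x):
    """Evaluate a degree-p B-spline (knots t, 2D control points c) at x."""
    k = find_span(t, p, x)
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            a = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = tuple((1 - a) * u + a * v for u, v in zip(d[j - 1], d[j]))
    return d[p]

def insert_knot(t, c, p, u):
    """Boehm's algorithm: insert knot u once; the curve is unchanged."""
    k = find_span(t, p, u)
    new_c = []
    for i in range(len(c) + 1):
        if i <= k - p:                        # points before the span: copy
            new_c.append(c[i])
        elif i <= k:                          # affected points: re-blend
            a = (u - t[i]) / (t[i + p] - t[i])
            new_c.append(tuple((1 - a) * q + a * r
                               for q, r in zip(c[i - 1], c[i])))
        else:                                 # points after the span: shift
            new_c.append(c[i - 1])
    new_t = t[:k + 1] + [u] + t[k + 1:]
    return new_t, new_c
```

Inserting, say, u = 1.5 into a clamped cubic adds one control point, yet sampling the curve before and after gives identical points — which is exactly the property that lets hidden points be repositioned without moving the surface.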


#38

Thanks for the link.


#39

Hello Gary,

thanks for this interesting link. This is a great concept, and I also think it would improve Rhino if implemented.

However, we shouldn’t mix up two different things. Controllability is important, but not only in the sense of local refinement. Hiding would be great for dismissing unimportant CPs, but the problem here is indeed that it’s local. That is exactly what I’d like to avoid, by trying to re-approximate a complex shape into a single-span or low-span-count shape.

One of the big advantages of staying single-span is that you get global modification, which allows you to fully control the overall curvature. It’s nearly impossible to get nice curvature if you only change parts of your shape, so if you choose such an approach, you still need something to smooth things out. My assumption is that most designers prefer shape quality over low deviation; engineers might judge differently. I see approximation as a compromise, with the positive aspect of data reduction. Sure, global modification is limited and not suited for every task; sometimes there is even no solution. However:

I would always prefer to create shapes as simple as possible, but as precise as needed.

Rebuilding matches the first part of this statement exactly, but fulfils the second part poorly. So people always relocate CPs manually after a rebuild, which could be better assisted by a “better guess”. This is what I’m aiming for with approximation, and, as other platforms prove, it can already be done, so it’s nothing impossible.
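One way to quantify how good the "better guess" is after a rebuild is to measure how far the new curve strays from the original. A crude pure-Python sketch (my own illustration; it assumes parameter-matched sampling, so it only bounds the true geometric deviation, and Rhino's own deviation analysis is more sophisticated):

```python
def max_deviation(eval_a, eval_b, n=200):
    """Sample two parametric 2D curves at the same n+1 parameters in [0, 1]
    and return the largest point-to-point distance (a proxy for deviation)."""
    worst = 0.0
    for i in range(n + 1):
        t = i / n
        ax, ay = eval_a(t)
        bx, by = eval_b(t)
        worst = max(worst, ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5)
    return worst
```

Because the parameterisations of the original and rebuilt curves rarely match, this overestimates the geometric distance; a closest-point projection per sample would be needed for an exact figure.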


#40

I think Neil’s algorithm works best when the user starts with the basic Bezier surface to get the global shape and then adds local detail as needed. Since designing is an iterative process, we still need to provide a way for global editing of the type you have mentioned. But if you remove control points, then you will also remove detail. This detail will have to be added back again after you have made your global edit because we are assuming the designer put points there for a reason, otherwise he wasn’t working with the simplest surface to do the job and just made an error. I realize that data reduction is important and that it is useful, but considering the general case, I think we need to keep as many points as needed and let the computer help us when moving them. Neil’s algorithm would have to be expanded if possible to allow for more global edits. We also have other alternatives such as cage editing which could be improved on as well.


#41

I’m not talking about removing detail; I’m talking about removing as many unnecessary CPs as possible while still expressing all the detail needed. I doubt that the average designer working with Rhino is capable of representing a desired shape with only as few CPs as possible. Most designers don’t even know whether they used too many CPs; nobody can tell you this before trying, and many people never think of optimisation at all.
I agree that design is an iterative process, but iterative shouldn’t mean constantly adding complexity. It is also important to always reflect on your current process and to optimise the current work, eliminating over-complexity.
I would even claim that this process of optimisation/perfection is what makes a designer better, for the same reasons that master painters spend years painting the same motif over and over again. Well, I could argue about this at length, but there is a trend across the landscape of computational design of simplifying workflows by adding new and fancy features instead of improving the current system further. Maybe that is the time we live in, but I can tell you the standard in automotive design is still Bézier representation, although tremendous effort was and is spent on automating things with every-one-can-do-it-(parametrically) solutions. Why are Bézier surfaces, made in an absolutely manual process, still used? Because it’s the only way to get the final, high-quality product. I would even claim that the best quality can only be achieved with Béziers (or NURBS with only a few spans). I’m not so biased when it comes to single-span, but I strongly see the benefit of CP reduction and simplicity.