Is there a way to display 2d drawing in a Human UI window except using reading in a JPG?
I’m not a big fan of Human UI, BTW, but this could be a way… HumanUI_Create Shapes.gh (15.5 KB)
@andheum it seems that Create Shape is only able to draw things that are in +X and -Y, so that the top left corner is 0,0 in Rhino. But it would be useful to show a graph (a mathematical one, for example) centered on 0,0, to be able to see what’s in -X/-Y, -X/+Y and +X/+Y. Would it be possible to set the center of the whole shape, or for autosize to handle the -X and +Y domains as well?
sorry you don’t like it. care to elaborate why?
The Create Shape(s) components follow the WPF convention of drawing from the upper left. To avoid a vertical flip, I use the +X,-Y quadrant (and invert Y) so that orientation is preserved: up in Rhino is up in the window. For shape positioning to remain consistent, this coordinate transform is always the same rather than recalculated from a bounding box, so it is left to the user to reposition geometry. If I did not do this, a changing shape would not seem to stay anchored to a common location.
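To make the convention above concrete, here is a minimal sketch in plain Python (an illustration of the mapping, not Human UI’s actual source; the function names are mine). It shows the fixed Rhino-to-window transform and the manual repositioning step that is left to the user:

```python
def rhino_to_window(pt):
    """Map a Rhino (x, y) point in the +X,-Y quadrant to WPF window
    coordinates, where (0, 0) is the upper-left corner and Y grows down."""
    x, y = pt
    return (x, -y)  # invert Y so "up" in Rhino stays "up" in the window

def recenter(points):
    """The step left to the user: translate geometry so its bounding box
    lands in the +X,-Y quadrant with its top-left corner at (0, 0)."""
    min_x = min(x for x, _ in points)
    max_y = max(y for _, y in points)
    return [(x - min_x, y - max_y) for x, y in points]

# A graph centered on 0,0 spans all four quadrants...
graph = [(-2.0, 1.0), (0.0, 0.0), (2.0, -1.0)]
shifted = recenter(graph)                      # ...now entirely in +X,-Y
window = [rhino_to_window(p) for p in shifted]  # what the window draws
```

Because the transform itself never changes, the same input point always lands at the same pixel; only the user-applied `recenter` step moves things.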
Thanks KIM. Very nice example!
There is a mystery. The latest available Grasshopper is: 0.9.0076.
When I open your definition I get an I/O alert, because it says it was written by 1.0.0004.
How is it possible?
The Grasshopper that is included in the current RH6 WIP is 1.0.0004.
Human UI is great in the sense that the result is sometimes beyond expectations.
It can be very useful especially if there are many presentations for clients. It even reduces the amount of graphic works (using Photoshop or Illustrator …).
But since I don’t have a lot of client presentations, I often forget how to use it.
IMHO, sometimes the amount of work needed to construct a definition for HumanUI is larger than expected, and the workflow is not quite intuitive.
In particular, it’s a bit annoying to me that UI input and UI output have to be generated separately so that feedback between the GH definition and the generated window is possible…
Fine tuning of the final window also requires considerable experience.
I hope HumanUI becomes simpler and more intuitive.
Thanks for the explanation; that wasn’t obvious to me, as it isn’t explained in the component’s description. I only realized it thanks to HS_Kim’s demo… Anyway, knowing that, I agree the user can apply their own transformation to fit the WPF convention, but in some cases (mine, for example, with the righting arm stability curve, the “GZ curve”) the user doesn’t know the max Y and min Y, and may set -X and +X depending on the analysis. A shape positioned from a center point, or from the bounding box itself, would have been easier to set up.
And in my case, I’m a BIG fan of Human UI, even if I agree with HS_Kim about the complexity of setting up a definition, and about the un-clusterize problem: preventing un-clustering would allow users to hide and protect their definitions.
These are all valid critiques. I hope to significantly streamline the workflow in HUI 2!
I just tried to load this GH and got the following error ->
Message: An attempt was made to load an assembly from a network location which would have caused the assembly to be sandboxed in previous versions of the .NET Framework. This release of the .NET Framework does not enable CAS policy by default, so this load may be dangerous. If this load is not intended to sandbox the assembly, please enable the loadFromRemoteSources switch. See http://go.microsoft.com/fwlink/?LinkId=155569 for more information.
I closed Rhino -> unblocked the file re-launched and got the same error. I did review the MS link but could not make much sense of it. Latest beta and Human UI on up to date Win 10 x64.
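For reference, the `loadFromRemoteSources` switch mentioned in the error is enabled through the host application’s .NET config file. A sketch of the documented setting, assuming Rhino’s host config is `Rhino.exe.config` next to `Rhino.exe` (merge the `<runtime>` element into the existing `<configuration>` rather than duplicating it):

```xml
<!-- Excerpt for Rhino.exe.config: enables loading assemblies that
     Windows has flagged as coming from a network/remote location. -->
<configuration>
  <runtime>
    <loadFromRemoteSources enabled="true"/>
  </runtime>
</configuration>
```

Note that the blocked file is usually the plug-in assembly itself (the Human UI `.gha`/`.dll` files in the Libraries folder), not the `.gh` definition, so unblocking those via right-click → Properties may also clear the error.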
I’m not sure but if you’re running RH on the network, how about testing it on a standalone machine?
Thanks for the input. I am running as standalone. Not referencing any network resources. The script seems to be running OK now but the error persists when launching.
Can’t wait for version 2.0!