I have a doubly curved surface that I split into hexagonal cells using LunchBox. When I use the Evaluate Surface component to construct a normal at the centre of each hexagon, all the centres coincide with the centre of the larger surface. Grafting or flattening the multiple hexagonal surfaces doesn’t affect the output.

I am posting a comparison between rectangular divisions and hexagonal divisions. In both cases the flattened output of multiple surfaces is fed into the Evaluate Surface component. Why do the rectangular splits generate correct centre points, while all the hexagonal surfaces generate wrong centre points?

P.S. Pointless use of Relay is a killer for R5 compatibility.
P.P.S. I know where to find LunchBox but have no intention of installing it for this. You could internalize its output…

You are assuming that (u,v) = (0.5, 0.5) always indicates the centre-point of a surface. It doesn’t.

When you split a surface using the IsoTrim component it actually trims the surface at the given domain. There are no ‘invisible’ parts of the surface poking out beyond the boundary.

However when you use the Surface Split component all that really happens is that you end up with a bunch of new trimming curves, the underlying surface is still the same shape. It will ‘invisibly’ extend all the way to the original boundary. Grasshopper doesn’t have a Shrink component, but even that would be sort of hackish, as it would only reduce the invisible portion of the surface to a rectangle big enough to contain your hexagonal boundary.

If you want the middle, you may have to use the Area component and get the centroid point. Afterwards you can pull that centroid back onto the surface, since it is not guaranteed to be coincident with it.
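To make the distinction concrete, here is a pure-Python toy (not Rhino code): a hexagonal trim region sitting off-centre in the (u,v) domain of a larger surface. Evaluating the underlying surface at (0.5, 0.5) ignores the trim entirely; the area centroid of the hexagon, which is roughly what the Area component hands you, lands where you actually wanted.

```python
import math

def polygon_centroid(pts):
    """Shoelace-based area centroid of a closed 2D polygon."""
    a = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6 * a), cy / (6 * a))

# A regular hexagon sitting off-centre inside the unit (u,v) rectangle.
centre = (0.7, 0.3)
hexagon = [(centre[0] + 0.2 * math.cos(math.radians(60 * i)),
            centre[1] + 0.2 * math.sin(math.radians(60 * i)))
           for i in range(6)]

print(polygon_centroid(hexagon))  # ~(0.7, 0.3): the hexagon's actual centre
print((0.5, 0.5))                 # the underlying surface's domain midpoint
```

The two points only agree when the trim region happens to be centred on the underlying surface's domain, which is exactly what the rectangular IsoTrim case guarantees and the hexagonal split case does not.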

Because you can only cleanly break a surface along its u or v directions, while the Surface Split component accepts any curve. Your trimming boundaries were hexagonal, so it’s a logical impossibility to end up with an untrimmed surface afterwards.

Since the surface boundary has to be represented by a trim anyway, the operation is least destructive if the underlying surface is left fully intact.

I meant “shrinking” in the _Shrink Rhino command way. By shrinking a surface you cleanly cut off regions along the u and v directions until the remaining rectangle just touches the trimming boundaries. It’s an operation which has no effect on the ‘real’ portion of a surface; it just removes whatever ‘hidden’ portions it can.
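In parameter-space terms, shrinking can be sketched like this (plain Python, not the Rhino API): collapse the underlying rectangular domain down to the smallest rectangle that still contains the trim boundary. The trimmed region itself is untouched; only the hidden margin disappears.

```python
# Toy sketch of shrinking in (u,v) space. domain = (umin, vmin, umax, vmax);
# trim_uv = the trim curve sampled as (u, v) points.
def shrink_domain(domain, trim_uv):
    us = [u for u, _ in trim_uv]
    vs = [v for _, v in trim_uv]
    umin, vmin, umax, vmax = domain
    # Clamp the domain to the bounding rectangle of the trim boundary.
    return (max(umin, min(us)), max(vmin, min(vs)),
            min(umax, max(us)), min(vmax, max(vs)))

# A hexagonal trim sitting in one corner of a much larger domain:
hexagon = [(2.0, 1.0), (3.0, 1.0), (3.5, 1.9),
           (3.0, 2.8), (2.0, 2.8), (1.5, 1.9)]
print(shrink_domain((0.0, 0.0, 10.0, 10.0), hexagon))
# (1.5, 1.0, 3.5, 2.8) -- still a rectangle, still bigger than the hexagon
```

Note the result is still rectangular: shrinking can never make the underlying surface hexagonal, only trim away the excess margin.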

I do not understand this phenomenon. Comparing it to a real-life scenario: if I draw a full hexagon inside the boundaries of an A4 sheet of paper (I mean the hexagon’s edges do not coincide with the edges of the paper) and cut along the hexagon’s boundary with a blade, I end up with a clean hexagonal piece of paper with sharp, clean edges. The rest of the paper I can discard.

Analytic surfaces in Rhino are defined as mathematical functions of two variables, u and v. Let’s ignore discrete surfaces (i.e. meshes) and subdivision surfaces for the time being; the latter are a patchwork of the surfaces we are discussing.

So a surface instance is basically a collection of functions in the form f(u, v) = something. There’s a function which outputs the 3d coordinate associated with the given (u,v) pair. Another function which outputs the normal vector, another which outputs tangent planes, or curvature.
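As a sketch of that idea in plain Python (using a made-up paraboloid, not how Rhino stores surfaces internally), a surface really is just a bundle of functions of (u, v):

```python
import math

class ToySurface:
    """A surface as a bundle of functions of (u, v).
    Example shape: the paraboloid z = u^2 + v^2 (an arbitrary choice
    for illustration only)."""

    def point(self, u, v):
        # The function that maps a (u,v) pair to a 3D coordinate.
        return (u, v, u * u + v * v)

    def tangents(self, u, v):
        # Partial derivatives d/du and d/dv of the point function.
        return (1.0, 0.0, 2 * u), (0.0, 1.0, 2 * v)

    def normal(self, u, v):
        # Normal = cross product of the two tangent vectors, normalised.
        (ax, ay, az), (bx, by, bz) = self.tangents(u, v)
        nx, ny, nz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
        length = math.sqrt(nx * nx + ny * ny + nz * nz)
        return (nx / length, ny / length, nz / length)

srf = ToySurface()
print(srf.point(0.5, 0.5))   # (0.5, 0.5, 0.5)
print(srf.normal(0.0, 0.0))  # (0.0, 0.0, 1.0): flat at the origin
```

Evaluate Surface in Grasshopper is essentially calling this kind of function bundle at the (u,v) you supply, which is why it neither knows nor cares about any trim curves.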

There’s very rarely a hard limit on the values which the u and v parameters may adopt; the mathematics works fine even if u equals 50 billion. However, the functions involved are mostly polynomials, i.e. something of the form a \cdot u^5 + b \cdot u^4 + c \cdot u^3 + d \cdot u^2 + e \cdot u. Polynomials are the easiest way to define geometry with a controllable level of continuity, and it doesn’t hurt that maths like this tends to compute pretty fast on a standard CPU.
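The “fast on a CPU” part is easy to see: a quintic like the one above evaluates with a handful of multiplications and additions via Horner’s scheme (a generic sketch, with arbitrary coefficients):

```python
def eval_poly(coeffs, u):
    """Evaluate a*u^n + b*u^(n-1) + ... + const via Horner's scheme.
    Coefficients are given highest power first."""
    result = 0.0
    for c in coeffs:
        result = result * u + c  # one multiply and one add per degree
    return result

# a*u^5 + b*u^4 + c*u^3 + d*u^2 + e*u with a..e = 1, 2, 3, 4, 5 (arbitrary):
print(eval_poly([1, 2, 3, 4, 5, 0], 2.0))  # 114.0
```

Five multiplications and five additions per evaluation, with no branching, is about as CPU-friendly as geometry gets.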

The problem with putting huge values into polynomials is that the outcomes tend to be double-plus-huge, just because \text{huge}^5= \text{really, really huge}. So, to reduce the possibility of ending up with coordinates which are enormously far away from zero, limits are imposed on the u and v parameters. We’ll say that the surface is “defined” on the interval -10.5 \leq u \leq 21.9, for example.
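The blow-up is easy to check numerically (plain Python, arbitrary numbers):

```python
# Feeding a huge parameter value into a quintic term produces a
# double-plus-huge coordinate:
u = 50e9           # fifty billion
print(u ** 5)      # ~3.1e53, astronomically far from the origin

# whereas values inside a modest, bounded domain stay tame:
print(0.5 ** 5)    # 0.03125
```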

The ability to impose lower and upper limits on the two surface parameters means every surface is at heart a rectangle, whose edges are controlled by the (u_{min}, v_{min}, u_{max}, v_{max}) values. You cannot define a hexagonal boundary if all you have is four degrees of freedom, let alone any freeform boundary.

I am still not able to understand. Software these days allows large areas, such as entire cities or neighbourhoods, to be developed. How then does largeness become a limiting factor in defining hexagons?

All there is in my posted 3dm file is a small surface, not even the size of a chair.

The surface is approximately square, about 200 mm along a straight edge if we do not measure along the curve. Even measured along the curve, it is at most 1500 mm.

Nothing I said has anything to do with physical size. I was only trying to explain that if you set lower and upper bounds on two parameters, what you end up with is a rectangular space. And by ‘space’ here I mean the configuration space of the parameter pair, not the 3D space in which the surface may be positioned and oriented.

The point I was trying to get across is that there’s a good mathematical reason that Rhino doesn’t support untrimmed, hexagonal surfaces.