Yes, but it’s not real life. It’s mathematics.
Analytic surfaces in Rhino are defined as mathematical functions of two variables, u and v. Let’s ignore for the time being discrete surfaces (i.e. meshes) and subdivision surfaces, both of which are in essence a patchwork of the kind of surface we are discussing.
So a surface instance is basically a collection of functions of the form f(u, v) = something. There’s a function which outputs the 3D coordinate associated with a given (u, v) pair, another which outputs the normal vector, and others which output tangent planes or curvature values.
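To make that concrete, here’s a minimal Python sketch of a surface as a bundle of functions. This is nothing to do with Rhino’s actual implementation; it uses a made-up saddle formula in place of a real NURBS evaluator:

```python
import numpy as np

class AnalyticSurface:
    """A surface as a bundle of functions of (u, v).

    The point function below is a made-up saddle, z = u^2 - v^2,
    standing in for whatever formula a real surface uses.
    """

    def point_at(self, u, v):
        # f(u, v) -> 3D coordinate on the surface.
        return np.array([u, v, u * u - v * v])

    def normal_at(self, u, v, h=1e-6):
        # The normal is the (normalised) cross product of the two
        # partial derivatives, approximated here by central differences.
        du = (self.point_at(u + h, v) - self.point_at(u - h, v)) / (2 * h)
        dv = (self.point_at(u, v + h) - self.point_at(u, v - h)) / (2 * h)
        n = np.cross(du, dv)
        return n / np.linalg.norm(n)

srf = AnalyticSurface()
print(srf.point_at(0.5, 1.0))   # [ 0.5   1.   -0.75]
print(srf.normal_at(0.5, 1.0))  # unit normal at that (u, v) pair
```

A real kernel computes the partial derivatives analytically rather than by finite differences, but the structure is the same: one evaluator feeding the others.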
There’s very rarely a hard limit on the values which the u and v parameters may adopt; the mathematics works fine even if u equals 50 billion. However, the functions involved are mostly types of polynomials, meaning something in the form a \cdot u^5 + b \cdot u^4 + c \cdot u^3 + d \cdot u^2 + e \cdot u + f. Using polynomials is the easiest way to define types of geometry which have a controllable level of continuity, and it doesn’t hurt that maths like this tends to compute pretty fast on your standard computer CPU.
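To illustrate why such evaluations are cheap, here’s a small sketch of Horner’s method in Python; the coefficients in the usage line are invented:

```python
def polynomial(u, coeffs):
    """Evaluate a polynomial with Horner's method.

    coeffs is ordered from highest degree down, so [a, b, c, d, e, f]
    gives a*u^5 + b*u^4 + c*u^3 + d*u^2 + e*u + f. Degree n costs only
    n multiplies and n adds, which is part of why this is fast on a CPU.
    """
    result = 0.0
    for c in coeffs:
        result = result * u + c
    return result

# 2u^2 + 3u + 1 at u = 4  ->  2*16 + 3*4 + 1 = 45
print(polynomial(4.0, [2.0, 3.0, 1.0]))  # 45.0
```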
The problem with putting huge values into polynomials is that the outcomes tend to be double-plus-huge, just because \text{huge}^5 = \text{really, really huge}. So, to reduce the possibility of ending up with coordinates which are enormously far away from zero, limits are imposed on the u and v parameters. We’ll say that the surface is “defined” on an interval such as -10.5 \leq u \leq 21.9, and likewise on some interval for v.
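Here’s a quick sketch of the blow-up, and of clamping a parameter to such an interval, again with invented coefficients (the -10.5 and 21.9 limits are just the numbers quoted above):

```python
def polynomial(u, coeffs):
    # Horner evaluation, highest-degree coefficient first.
    result = 0.0
    for c in coeffs:
        result = result * u + c
    return result

coeffs = [1.0, -2.0, 0.5, 3.0, -1.0, 4.0]  # invented quintic coefficients

print(polynomial(21.9, coeffs))  # inside the domain: a few million
print(polynomial(5e10, coeffs))  # u = 50 billion: roughly 3e53

def clamp(u, u_min=-10.5, u_max=21.9):
    """Pull a parameter back into the interval the surface is defined on."""
    return max(u_min, min(u, u_max))

print(clamp(5e10))  # 21.9
```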
The ability to impose lower and upper limits on the two surface parameters means every surface is at heart a rectangle in (u, v) space, whose edges are controlled by the four (u_{min}, v_{min}, u_{max}, v_{max}) values. You cannot define a hexagonal boundary if all you have is four degrees of freedom, let alone any freeform boundary.
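As an illustration, here’s a hypothetical SurfaceDomain type in Python (not a RhinoCommon class) showing that four numbers can only ever describe a rectangle; any fancier boundary has to be carried as extra data on top of it:

```python
from dataclasses import dataclass

@dataclass
class SurfaceDomain:
    """The four degrees of freedom of an untrimmed surface boundary."""
    u_min: float
    v_min: float
    u_max: float
    v_max: float

    def contains(self, u, v):
        # Every inside/outside test reduces to four comparisons,
        # which is exactly why the boundary can only be a rectangle.
        return self.u_min <= u <= self.u_max and self.v_min <= v <= self.v_max

dom = SurfaceDomain(-10.5, 0.0, 21.9, 5.0)
print(dom.contains(3.0, 2.5))   # True
print(dom.contains(50e9, 2.5))  # False
```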