Nothing times one equals one, inconvenient behavior of mathematical operators

I am unsure if this behavior is new in Rhino 6, but I have been encountering this operation not working as expected in several ways. I have made a file with the most basic version.

Simple operations like + - * / operating on data trees return values even when one of the branches is empty, resulting in equations like this:

null * 2 = 2

Is this intentional? Because I can account for it by pruning my trees whenever branches are empty.

Nothing plus (24.3 KB)

Edit: I just found part of the answer here:

It seems the new component behaves this way to account for it working with more than two inputs. I’d argue that with fewer than two inputs receiving valid values the component should not return a value. A ticket has been made to review these cases. So thanks for reading.

Edit2: The pruning solution creates another edge case: empty trees are pruned away completely, causing the operation to return a value when it shouldn’t.


Well, in this case, it is the expected behaviour. The multiplication component is basically an operation in the form a \cdot b \cdot c \cdot \dotsc \cdot z. If all but one of the values do not exist, you just end up with that one value that does exist.
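A minimal JavaScript sketch of that variadic reading (the `mulAll` helper is hypothetical, not Grasshopper’s actual code): the component effectively folds over whichever inputs hold a value, so a single surviving value comes back unchanged.

```javascript
// Hypothetical sketch: fold multiplication over only the inputs that
// actually hold a value. A lone surviving value is returned as-is,
// which is why null * 2 yields 2 under this interpretation.
function mulAll(values) {
  const present = values.filter(v => v !== null && v !== undefined);
  if (present.length === 0) return null;       // nothing to multiply
  return present.reduce((acc, v) => acc * v);  // fold without a seed
}
```

So `mulAll([null, 2])` gives 2 and `mulAll([2, 2, null])` gives 4, matching the behaviour described above.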

In most programming languages I can remember, missing values result in NaN. In JavaScript, for example (where an unassigned variable is undefined):

function out(lbl, val) {
	document.writeln(lbl, ' = ', val, '<br>');
}

var a;
var b = 2;
var c = 3;
var d = a * b * c;
out('a', a);
out('b', b);
out('c', c);
out('d', d);


a = undefined
b = 2
c = 3
d = NaN

Same goes for addition.

Null values in .NET would result in NullReferenceExceptions. NaN values result in other NaN values. Infinities result in other infinities.
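As an aside, this propagation can be checked directly in JavaScript; note that it is undefined (an unassigned variable, as in the example above) that yields NaN, while a literal null is coerced to 0 by the * operator:

```javascript
// NaN and Infinity propagate through arithmetic: once they enter an
// expression, every downstream result carries them along.
const withNaN = NaN * 2 * 3;   // NaN
const withInf = Infinity * 2;  // Infinity

// A literal null, by contrast, is coerced to 0 by the * operator;
// only undefined produces NaN here.
const nullCoerced = null * 2;  // 0
```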

The problem in this particular case is deeper, as multiplication is defined for various different types. Integers, doubles, Complex, Vectors, Colors, … and if you get a null value you don’t know what the implied type was supposed to be. So do you create a NaN double if you multiply an integer with a null? Seems weird. There’s no such thing as a NaN integer or color.

So what would be the advisable way of working around this? Doing data tree matching of a single number? Pruning and then checking whether any tree is left before applying the operation? Culling the results based on a List Length = 0 mask?

The method I currently use is to do both operations on all data, then cull the results I don’t need and merge the ones I do need. This is effective, but not efficient for heavier operations.
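That compute-everything-then-cull pattern could be sketched like this (hypothetical `computeBothAndMerge` helper, not an actual Grasshopper component):

```javascript
// Run both candidate operations on every item, then pick per item
// which result to keep and merge back into one list. Effective, but
// wasteful: each item pays for both operations even though only one
// result survives the cull.
function computeBothAndMerge(items, opA, opB, useA) {
  const a = items.map(opA);  // operation A applied everywhere
  const b = items.map(opB);  // operation B applied everywhere
  return items.map((_, i) => (useA[i] ? a[i] : b[i]));
}
```

For heavy operations the wasted half of the work is exactly the inefficiency described above.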

Am I correct in thinking that what you need is a way to copy all the nulls from one list/tree into another list/tree?

My way of working uses some empty lists. As not all operations apply to all lists, I end up using data trees with empty branches. Currently this proves difficult either in insert and replace components (where entries get inserted into the wrong branch if certain branches are not present) or with basic maths (where null items are basically treated as zero). The problem is made harder because clustering components discards empty lists, making null (or other valueless data) necessary to maintain the tree structure.

I also work a lot with empty lists and always need to go back to the main structure.

  1. sift pattern and combine instead of dispatch and merge

  2. dispatch paths (from Param Viewer or Tree Statistics) together with tree branches

  3. Principal option on the input of components

  4. Match Tree

And surely some others, but these are the ones I use most; they mostly do the trick for me. Maybe this helps as a workaround.

Thanks Tim, good suggestions. I came by this edge case just now, and I’m not sure what I could have done to prevent it. I can probably find a way to solve it, but it won’t be nice.

What happens in the picture is that two values are extracted from a list, edited and then merged. When the edit operation (add, subtract) gets no inputs, it will still give an output. As long as there is a single input here there is no problem, as I remove empty lists. Do I really have to match the constant input to the data tree of the variable input? Do I need to introduce an empty tree check and cull the undesired response? Should I graft the entire thing and work on splitting and merging branches all the time?

Working around this has taught me a considerable deal about managing data trees, but it feels silly having to even go into data trees when trying to operate on parts of lists.

Edit: I have solved this for now by adding a second cull after the operation, to ensure the values added by the operation get culled. Only culling after operations is not a great solution, however, as it would likely require valid operations to be prepared even for cases which should never undergo that operation (which would normally have been culled before the operation). I could also cull both before and after the operation…

Tree Sloth components might help in this scenario.

For multiplication and division, when there is only one valid value (a * null/empty), the result should be null/empty, because these operations need at least two values to make sense. If not, you are converting a null/empty value to 1, and that is more problematic than leaving the result empty, in my opinion. Addition and subtraction are different, because if you add or subtract “nothing” from a value, you still have that value. But if you multiply or divide a value by “nothing”, the component should warn you that you lack a value, because the operation cannot be computed, no?


So 2 \cdot null = null, but 2 \cdot 2 \cdot null = 4?

2 * null should raise a warning about a failure to assign the second parameter. And accepting this, it should not be allowed to remove the second input. It is not the same as saying that 2 * null = null; rather, I do not allow you to compute an operation that takes (at least) two parameters if you only supply one.
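The stricter semantics argued for here could be sketched as follows (hypothetical `strictMul` helper): refuse to compute when fewer than two operands actually hold values, instead of silently substituting the identity.

```javascript
// Strict reading: an operation that needs at least two operands
// returns null (or would raise a warning) when fewer than two valid
// values arrive, rather than treating the missing one as 1.
function strictMul(values) {
  const present = values.filter(v => v !== null && v !== undefined);
  if (present.length < 2) return null;  // warn instead of computing
  return present.reduce((acc, v) => acc * v);
}
```

Note this still gives `strictMul([2, 2, null]) = 4` while `strictMul([2, null])` is null, which is exactly the asymmetry questioned above.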

The reason it was designed this way (which I’m not trying to defend to the death btw., I concede it is a solution with some big problems) is that if you were to add a third input I don’t want the component to stop working. Having it go blank between the steps of adding an input and connecting a wire to that input seems unpleasant to me.

A better solution would be to add an identity element as persistent data to every newly added input instead of allowing it to be blank. This is something I will definitely do for GH2, but it would change the behaviour of existing components in GH1 files. So extra care needs to be taken to ensure that current files behave the same before and after this change.
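The proposed fix could look something like this (an illustrative sketch only; the `identityFor` table and the shape of the inputs are assumptions, not GH2’s actual design):

```javascript
// Each operation carries its identity element, and a freshly added
// input defaults to it instead of being blank: 1 for multiplication,
// 0 for addition. (Illustrative sketch; not GH2's actual design.)
const identityFor = { add: 0, multiply: 1 };

function makeInput(operation) {
  return { value: identityFor[operation] };  // never starts blank
}
```

A third input added to a multiplication would then start at 1 and leave the result unchanged until a wire is connected to it.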

Is that a solution that you can live with?

Yes, I would like to have default values in all parameters whenever possible*. From this, can I deduce that you are going to return null whenever there is a null input?

*And by the way, what about an option when dropping a component onto the canvas, perhaps by pressing a key, that pops up a window to quickly adjust the operation parameters (those params to which the component operation is not applied (such as geometry; those would be item parameters) but which modify the operation, like the index in List Item)?

Yeah, that’ll be possible once there’s a default value in there. If you still want nulls to not affect the outcome, you’ll have to replace all nulls with the identity element beforehand using another component.
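That pre-pass might look like this (hypothetical `replaceNulls` helper standing in for a null-replacement component):

```javascript
// Replace every null/undefined with the operation's identity element
// (0 for addition, 1 for multiplication) so missing values no longer
// affect the outcome once a bare null would otherwise return null.
function replaceNulls(values, identityElement) {
  return values.map(v => (v == null ? identityElement : v));
}
```

For example, `replaceNulls([null, 2, 3], 1)` gives `[1, 2, 3]`, whose product is 6.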

I’m not exactly sure what your footnote wish is about. Can you give an example using a GH1 component and the sort of UI you want to see?

The Remap or List Item components, both with three inputs. In GH1 all the inputs have the same identity (component input parameters, as always), but do they really have the same computational meaning? The second and third parameters of both components are much more similar to each other than to the first parameter; somehow the first has a different intention. The second and third parameters are arguments of the operation you are going to perform, while the first parameter is the actor who is going to perform, or be the object of, the operation. Is this difference relevant? Well, I’m not sure. But at least it is a cut in the parameter space (the group that contains all species of function arguments, whose dimensions are the different characteristics of the parameters) that separates those that make sense to give a default value and those that do not. That’s why this idea appeared: it would be useful to have a window to quickly assign the values of these “operation parameters”, instead of having to do it one by one with so many clicks. I think other cuts like this can give new features a chance.

I don’t expect any visual change, but it would be interesting to explore which attributes categorize the classes of the parameter space. For example, shouldn’t being an input or output parameter of the definition be a special identity? It has a specific meaning and use, or rather, another context. Peacock1 will allow drawing something like an object role in the graph. Red are definition sources, pink are definition targets, orange are definition inputs, blue are definition outputs and green are internal objects. The problem with GH1 is that not all objects in red are really sources of the definition; I just want them to be the ones on the left side. If there were these kinds of identities or actor roles, I could specify what the sources of the definition are, and so automate the use of the definition in an external context instead of having that document open. This kind of issue is solved in Peacock1, since I have a tool that (using a window) allows me to select which objects are the real source parameters, so I can automatically compute the definition several times, varying the parameters, in order to analyze performance or see if anything breaks in some parameter configuration. This is useful when you design a product configurator, for example in ShapeDiver, where all value configurations must work.

This also works (quickly) without messing with your partially empty path structure… instead of multiplying by x, divide by 1/x.