Request to buy & test your GH shape [PhD research]

My research focuses on the perception of 3D shapes. I was the first psychologist to publish research using Grasshopper. Now I’m working on my PhD dissertation, and for part 2 I could use your help!

I have a theory about how people perceive 3D shapes and so far I’ve tested 20 shape-families (GH algorithms that generate abstract shapes) I’ve designed myself. That was part 1. Now for part 2, I need to test if the theory generalizes to all kinds of shapes out there. Naturally, the best test would be with shapes I did not design myself. Thus, my request here for 20 good shape algorithms.

Shape details:

  • The shapes need to be abstract (i.e., not look like anything in particular) and should have a fair amount of complexity.
  • The algorithm should have at least 2 manipulable variables (preferably continuous) that produce changes in the structure of the objects.
  • Each algorithm should generate at least 20 objects.
  • The shapes do not have to have any symmetry.
  • A single shape from the algorithm should be well represented by fewer than 30K polygons, or a file smaller than ~4 MB when saved as an .obj.
  • The generated shapes should have clean, closed surfaces (so that all the normals point outwards).
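To make the last two bullets concrete, here is a rough Python sketch (the function names and structure are mine, not part of the request) of how a candidate export could be sanity-checked: count the triangles against the 30K budget, and use the mesh’s signed volume to catch globally inverted normals, since a closed mesh with consistently outward-wound triangles has positive signed volume.

```python
# Hypothetical sanity check for a candidate shape export; the
# thresholds mirror the bullet points above, everything else is
# illustrative.

def signed_volume(vertices, faces):
    """Sum of signed tetrahedron volumes over the triangles.

    Positive for a closed mesh whose triangles are wound
    counter-clockwise as seen from outside (normals outwards).
    """
    total = 0.0
    for a, b, c in faces:
        ax, ay, az = vertices[a]
        bx, by, bz = vertices[b]
        cx, cy, cz = vertices[c]
        # scalar triple product a . (b x c), divided by 6
        total += (ax * (by * cz - bz * cy)
                  - ay * (bx * cz - bz * cx)
                  + az * (bx * cy - by * cx)) / 6.0
    return total

def meets_request(vertices, faces, max_polys=30_000):
    """True if the mesh is under the polygon budget and its
    normals point outwards (positive signed volume)."""
    return len(faces) <= max_polys and signed_volume(vertices, faces) > 0.0
```

For example, a unit tetrahedron with outward winding passes, while the same tetrahedron with every face flipped fails the normals test.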

If you’re interested, please message me (or email me at akurosu@princeton.edu). If you have questions please let me know! As compensation for your help, I can afford to send you $20 per algorithm. It’s coming from my personal funds (I don’t have a fancy grant or fellowship) so if you’re willing to just donate a cool shape-family I’d appreciate the charity!

Sincerely,
Aaron Kurosu

Hello Aaron,

this seems interesting.
are you after 20 different definitions per person? or 20 outputs from one definition (a shape-family)?
or both, ending up with 20 definitions where 20 shapes from each are presented?

*edit are grasshopper plugins allowed, or native components only?
good luck with your project,
best
alexandros

Sounds interesting, but your requirements are too vague. An abstract shape… from a certain point, it ceases to be abstract once you define it. Trying to understand you, I see three options:
A) random shapes;
B) shapes culturally linked to abstraction (by art for example);
  C) or everything that is not a categorizable shape: the group containing all the shapes you don’t know which group to put in.

A) A random shape generator is a serious problem. That “randomness” always ends up belonging to a very small region of the space of shapes (the whole, imaginary group of possible shapes), which is determined procedurally. The similarity between the results has to be high in some of their properties; otherwise it would almost go against the second law of thermodynamics: how could something generate products so different from each other (random) that you cannot get valid information about what generated them? Randomness could be amplified by generating different procedures, in other words, generating generators. But even then (that’s what I think, at least) they would be far from truly random.
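The “generating generators” idea can be sketched in a few lines of Python (a toy on 1-D profiles rather than real meshes; all names are invented for illustration): each generator is a fixed random composition of simple deformations, so its outputs still cluster in a narrow, procedurally determined region of shape space, exactly as argued above.

```python
import random

# Toy illustration of "generating generators": each generator is a
# fixed random composition of simple deformations, chosen once from
# a seed, so every generator spans only a small region of "shape
# space" even though each output looks random in isolation.

def scale(p, t):
    return [x * (1 + t) for x in p]

def ripple(p, t):
    return [x + t * ((-1) ** i) * 0.1 for i, x in enumerate(p)]

def shift(p, t):
    return [x + t for x in p]

def make_generator(seed, depth=3):
    rng = random.Random(seed)
    ops = [rng.choice([scale, ripple, shift]) for _ in range(depth)]

    def generate(profile, t):
        # apply the fixed composition of deformations to a profile
        for op in ops:
            profile = op(profile, t)
        return profile

    return generate
```

Note that the same seed always reproduces the same composition, so each generator is deterministic: the randomness lives in which generator you got, not in what it produces.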

B) In this group I would put soft shapes, like an easily deformable flour dough. Do you mean something like that?

C) This is problematic because, thinking about it a little, you would not only need a generator of random shapes but also a discriminator that discards recognizable shapes.

As you may have noticed, the difficulty of the problem has jumped very high. Your best option is to go from bottom to top, from detail to abstract, and restrict the problem to your practical needs. That said, you have to define the shape space you’re looking for. What formal properties interest you? What metrics do you need, or how do you want to measure variability? Do you need to parameterize some property? In short, what do you mean by complexity? The better you define the problem, the more likely it is that someone will help you.

Anyway, given the nature of your problem, I think a generative adversarial network (GAN) would be more interesting than explicit programming. If you don’t know the state of the art in the recognition and generation of objects using deep learning, I recommend you look into it, but don’t expect anything without a high budget and a lot of time.

By the way, could you explain your research? I find this very interesting. I just read the conclusions and I’m not at all clear on it. I already intuit that soft shapes give very different sensations than hard shapes; are you going along this line? Are you measuring how unknown shapes are approximated by the brain to known shapes and how impressions are derived from them, or something like that? Or does it have nothing to do with that? Please tell us :slight_smile:

Hi Alexandros,

If the algorithm can generate more than 20 objects, using at least two manipulatable parameters, that would be ideal. It shouldn’t be an algorithm that generates only one object.

  • The algorithms I have been using range from 2 to 6 parameters.
  • Some of my algorithms can generate only around 20 objects, while others can generate tens of thousands. It depends on how you set up your parameters and whether they’re continuous variables.
  • If the plugins used are easily attainable, that would be fine, just let me know what it needs.
  • What I have already installed are: weaverbird, kangaroo2, galapagos, exoskeleton, pufferfish.
  • Here’s a screenshot of my installed components; the icons that are generic represent: centipede, exportlegs, starling, exoskeleton2, marchingCubes, plankton, and something untitled. [screenshot: Grasshopper_Components]
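As a sketch of the numbers involved (the parameter names and ranges below are invented examples, not from any of my algorithms): two continuous parameters sampled at just five values each already give 25 distinct combinations, comfortably past the 20-object minimum.

```python
import itertools

# Toy illustration: a "shape-family" driven by two continuous
# parameters (the names "twist" and "taper" are invented examples).
# Sampling each at 5 values yields a 5 x 5 grid of 25 variants.

def linspace(lo, hi, n):
    """n evenly spaced samples from lo to hi inclusive."""
    return [lo + i * (hi - lo) / (n - 1) for i in range(n)]

def parameter_grid(twist=(0.0, 360.0), taper=(0.2, 1.0), steps=5):
    return list(itertools.product(linspace(*twist, steps),
                                  linspace(*taper, steps)))
```

Each pair in the grid would be fed to the definition’s sliders to bake one object, so finer sampling of a continuous parameter directly multiplies the family size.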

If you want, send me an email (akurosu@princeton.edu) with a screenshot and I can let you know if I think it could work. If it doesn’t look like something in particular (e.g., a cat, car, shoe), isn’t like a texture/fractal-pattern, and isn’t too simple, I’m likely interested. Thank you for inquiring.

Best wishes,
Aaron K.

The following quote gave me associations. HTM associations, perhaps:

Impression consensus was always present for faces, but not always for novel objects. […] The findings suggest that impression consensus for novel objects only emerges when certain types of shapes and evaluations map together.

Fig 1. Summary [image omitted].

You are probably familiar with Jeff Hawkins. According to his research on HTM (Hierarchical Temporal Memory), our perceptions form hierarchical part-memories of details or aspects of forms. After being discerned as distinct features, these are stored in “memory layers” as generic abstractions or aspects, which the perception mechanism can later “associate” in realtime with earlier part-memories (layers) and so gradually, but very rapidly, form a new perception of other objects. If the form is not familiar (already stored in part or as a whole), the brain slows down significantly trying to “make sense” of the perception. The very high speed with which we “recognize” things we are familiar with (or think that we are) from the past is thus based largely on existing, already-processed complex information stored earlier (on different abstraction levels), according to this theory, the so-called HTM model.

Fig 2. Here, some “generic features”, like edges, rounding, flatness etc., are marked and stored for later “pattern matching” with the layered memory [image omitted].

If this theory is correct, at least in part, it helps explain why earlier research in ANNs didn’t really go anywhere. It also explains why we tend to see “patterns” where there really aren’t any. :slight_smile:

The HTM research can also hint at which features of form are meaningful to try to create, since some such typical features are identified in HTM. But that might be loading the dice, depending on what your research is focusing on.

// Rolf

Hi Dani_Abalde,

Thank you for taking my post earnestly. I can’t discuss exactly what the research is about at this time, because doing so could bias who lends me shapes. If you or others are worried that my research will place a value on your work, I can assure you that is not the case. In other words, the research could not tarnish anyone’s reputation. If it means anything, I am not a consumer psychologist.

Regarding the rest of your thoughts, they’re great. I think you would enjoy a PhD program (if you haven’t already tried it). You get to sit all day and think about the stuff you brought up. By the way, if this is something you already get paid to do, please send me an email; I’m currently looking at my career options.

Research is a long, long, long process. I’ll do my best to remember to update this community regarding the outcomes of this research.

Thanks again,
Aaron K.

Hi Rolf,

Thank you for reading my paper and sharing your thoughts. I haven’t considered HTM as it relates to my work, but it is interesting to ponder, so thanks again for sharing. All I can say in return is that the topic of decoding 3D shapes is one of the most widely studied and perhaps the most difficult problem for vision scientists. I wish I could share more about my dissertation, but I must refrain until I run part 2 of my research.

Best wishes,
Aaron K.


One of the things that makes perception so fascinating is our ability to deduce variants of already learned and stored patterns or information: we can recognize shapes and forms from viewpoints other than the perspectives in which we learned them. A big problem here is explaining the speed with which we can deduce what we see.

The explanation for the incredible recognition speed seems to require either a smart, not yet fully understood memory-structure, which makes recognition of distorted variants “inherently potential”, or that we’re actually capable of processing variant recognition in real time starting from simpler shapes (but then that fantastically efficient algorithm is unknown to us).

So, is this capability due to memory-structure “already wired” to efficiently provide with variants, or does it take raw processing power?

Given the relatively slow “processing speed” of our brains (very slow compared to the speed with which we actually end up concluding, or not concluding, what we see), there seems to me to be more to the memory-structure (tailored for more efficient access and processing) that we have yet to discover.

HARDWIRED FOR MATH?
It is obvious from research on the relation between geometry and mathematics (a “which comes first” kind of question) that our brains are hardwired for more things than we traditionally believed. Clifford algebra, later refined into geometric algebra, gives evidence of a hardwired predisposition for math as a representation of geometry (not the other way around), which includes visualizable patterns of fractals and patterns that are unfortunately perceived as “psychedelic”, while in reality they are so well defined that we can reproduce many of them with algorithms.

My point here is about geometric shapes and patterns which the brain simply “knows” about without ever having learned or seen them before, even examples of extremely complex math (I’m really not into psychedelic stuff or New Age or anything; it’s just an observation which happens to map onto our knowledge about math being based on geometry, and not the other way around).

An introduction to this field, which clearly hints at more hardwired things in our brains related to shape and geometry (even resulting in intuitive understanding of complex math…), can be found in this well-known blog post by Slehar, who presents himself like so:

About slehar

  • I write books and papers on theories of perception and consciousness based on the general insight that our experience takes the form of a spatial structure, and a spatial structure implicates a spatially structured representation in the brain. I propose it occurs by harmonic resonance, or patterns of standing waves in the brain. The brain works more like a musical instrument than like a digital computer.

On this blog you can perhaps find more ideas (related to the hard-wiring of the brain), shapes & geometry in general, and shapes perceived from different perspectives, useful in your project. The most important and unexpected keyword in the field of geometric algebra is not rectangles and circles but rotations! :slight_smile:

// Rolf

So is the idea to be able to reliably identify these shapes from different angles and distances? The brain itself tries to abstract this information into families, rather than assigning a “name” to every shape it encounters. From an evolutionary standpoint, to do otherwise would be too expensive. Are you trying to capture the basic “eigenvectors” by which the brain classifies shapes?

Hi Rolf,

Thanks again for your thoughts. It’s great that you take an interest in thinking about such things. I love doing that too. I wish I could speak more about my own. All I can say for now is that I’m looking to test more objects. If you have any that you’ve made in Grasshopper that you’d be willing to share, please send me a screenshot (akurosu@princeton.edu).

best wishes,
Aaron K.

Hi Ethan,

  • Nope, the shapes do not have to have any symmetry.
  • Again, I really can’t say until I gather shapes for part two.

By the way, I checked out your profile, and you have some great shapes in your portfolio. Any interest in sharing them? Also, I see that you’re based in Brooklyn. If you (or anyone else here) are going to the Core77 conference next week, it might be fun to meet up and have these deep conversations about cognitive science there.

Best wishes,
Aaron K.

Hi Aaron,
I won’t be at the Core 77 conference but thank you for your interest in my “shapes”. Let me know which ones you might consider useful, and I can try to come up with defs that would generate variations.

Hi Ethan,

I just sent you an email. I hope we can work something out and I can use some of your shapes in my research.

Best wishes,
Aaron K.

A big thank you to everyone emailing me their scripts! The variety is wonderfully striking, and the response time has been heart-warmingly quick. I’ve never felt so connected to an online community.

I just wanted to mention that I’m still looking for a dozen more shape-families/scripts. If you’re interested, please let me know: akurosu@princeton.edu

Thanks again,
Aaron K.