This question relates to Rhino because I would like to accomplish this task within the application. Although I'm assuming your suggestions will lead me elsewhere, I figured this is a great place to start.
I want to import a model of a gothic church into Rhino, and then import a kitbash of, let's say, projector parts. Then I want to write code to recognize shapes within the gothic church model (triangles, squares, circles).
Then I want to write code to replace those shapes with items from the kitbash. I'd prefer that the items selected for each shape resemble that shape in form. How the items are selected doesn't matter to me at this point in time; I'll leave that to you.
I'm led here because of the Crow component, and I'm guessing neural network / AI capabilities are needed for the recognition and output.
Sorry for the novel. If it’s confusing please allow me to elaborate.
Thank you in advance!
Here are images that have inspired me to this idea…
So you're looking for a find-and-replace for shapes? The only hard bit, I think, is the detection of shapes, and if you're only looking for triangles, rectangles, and circles, that's not too hard either. Triangles are any closed curve with linear segments and three tangency discontinuities. Rectangles have a few more rules, but the code for detection is still very scriptable. Circles are any closed curve with a small enough deviation from the circle fitted through a sampling of its points.
Doing the detection part in C# or Python will definitely pay off here, I think. Grasshopper is not great at representing logic with lots of conditionals.
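To make that concrete, here is a minimal standalone Python sketch of those three tests, working on plain (x, y) points rather than Rhino curves. Inside Rhino/Grasshopper you would first extract the corners or samples from each closed curve (e.g. with `rhinoscriptsyntax.PolylineVertices` or by dividing the curve); the tolerances below are illustrative assumptions, not canonical values.

```python
import math

def classify_closed_polyline(pts, tol=1e-6):
    """Classify a closed polygon given as a list of (x, y) points.

    A rough sketch of the rules described above; thresholds are assumptions.
    """
    n = len(pts)
    if n == 3:
        # Three linear segments, three corners -> triangle.
        return "triangle"
    if n == 4:
        # Rectangle: opposite sides equal in length plus one right angle.
        def d2(a, b):
            return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        sides = [d2(pts[i], pts[(i + 1) % 4]) for i in range(4)]
        ax, ay = pts[1][0] - pts[0][0], pts[1][1] - pts[0][1]
        bx, by = pts[2][0] - pts[1][0], pts[2][1] - pts[1][1]
        if (abs(sides[0] - sides[2]) < tol and abs(sides[1] - sides[3]) < tol
                and abs(ax * bx + ay * by) < tol):
            return "rectangle"
    # Circle: small relative deviation from the fitted circle through
    # a sampling of the curve's points (centroid used as a cheap center fit).
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    radii = [math.hypot(p[0] - cx, p[1] - cy) for p in pts]
    r = sum(radii) / n
    if r > 0 and max(abs(ri - r) for ri in radii) / r < 0.05:
        return "circle"
    return "other"
```

A polygon densely sampled from a circle passes the radius-deviation test, while an irregular quad falls through to "other"; in production you'd want a proper least-squares circle fit rather than the centroid shortcut.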
If your best option is to use machine learning (ML), you will need hundreds (or thousands, if you can get them) of supervised data pairs: inputs (your gothic churches) with their corresponding outputs (the buildings represented by basic forms). Can you generate this database?
If you only have to do it on one model, don't use ML; look for isolated patterns in each shape (or whatever fits, it depends a lot on exactly how your model is built), or do it "by hand" in Rhino, because it will take you a hundred times less time than building and training a (3D) learning algorithm.
@AndersDeleuran I've seen the solution, because I was the one that chose it. I learned in that post that it was kitbashing and SubD modeling of multiple 3D objects. Now the question is: how do I accomplish this?
@Dani_Abalde I think you're right; I don't think that, at my level of expertise, I would be capable of accomplishing this before the project is due. I am looking for more of an automatic approach.
@DavidRutten Thanks a lot! Actually, this script is surprisingly easy to read and understand. I am going to play around with this a little and see what I can accomplish. I have many more questions, so please keep checking your messages lol
1. Get a collection of mesh objects that you want to incorporate in your design and import them into Rhino/Grasshopper.
2. Come up with some kind of base geometry that you want to populate.
3. Populate the base geometry with random points.
4. Choose a random object for each point, transform it (i.e. rotate, scale), and move it to the point location.
5. Now, you can play with the random seeds of your point population and of your random object choice to get different results.
6. If you come up with an iteration that you like, bake/save it, and keep on searching until you're satisfied.
7. Delete one side/half of the meshes and mirror the remaining ones to get a symmetrical object. This can be done by hand in the Rhino viewport or in Grasshopper.
8. Export the result as an OBJ.
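The random-population part of this workflow can be sketched in a few lines of plain Python. This is a toy stand-in (the object names and transform ranges are made up), but it shows how a fixed seed makes every iteration reproducible, which is exactly what lets you "play with the random seeds" and later re-bake a result you liked:

```python
import random

def populate(base_points, kit_objects, seed=0):
    """For each point on the base geometry, pick a random kit object plus
    a random rotation and scale. Returns placement records; in
    Grasshopper/Rhino you would instead orient actual mesh instances
    with a transform.
    """
    rng = random.Random(seed)  # fixed seed -> reproducible iteration
    placements = []
    for pt in base_points:
        placements.append({
            "object": rng.choice(kit_objects),    # random kit piece
            "location": pt,                       # target point on the base
            "rotation_deg": rng.uniform(0, 360),  # random spin
            "scale": rng.uniform(0.5, 2.0),       # random size
        })
    return placements
```

Calling `populate(points, objects, seed=42)` twice returns identical placements, so a seed value is all you need to save in order to reconstruct an iteration you liked.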
ZBrush
9. Import your OBJ.
10. Use Dynamesh to bring it all together (union it into a single hull).
11. Give it an interior:
    a. Subtract the inner volume (could be a slightly smaller version of your base geometry) to hollow out the interior.
    b. Or give it a thickness (i.e. outer wall/facade thickness).
    c. Maybe re-mesh it, depending on your number of polygons.
12. Export it as an OBJ.
Rhino
13. Import the OBJ.
14. Introduce architectural elements (e.g. windows, stairs, patios, floor slabs, etc.) to give the illusion of it being a building.
You can extend this straightforward workflow with scripting and machine learning, but I’d start simple first.
The above approach is as "automatic" and "fast" as it gets. Machine learning isn't automatic: it involves learning programming, getting to know machine learning, and training a model unsupervised, or spending hours assembling valid datasets for supervised learning. I'd say, if you are not proficient in Python or C#, don't even think about machine learning.
Your second image, by the way, shows a completely different thing. In my opinion, it has nothing to do with kitbashing. It's either a hand-modelled/sculpted object with some kind of "crazy" height/normal map, or some kind of particle script/workflow. My bet is on modelling, though!
I think such a task is better accomplished if the model you have is feature-based, not one created by free-form surface modeling.
That way, you identify a feature of a certain shape, and by changing the reference you change all instances (applying every transformation that was done on each instance in the process).
For this I suggest FreeCAD: it has Python scripting, so you can probably use AI on the features there. As with every open-source project, it doesn't have many functionalities, but I think it is worth investigating.
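The reference/instance idea can be illustrated with a tiny data-model sketch. This is not FreeCAD's (or Rhino's block) actual API, just a hypothetical illustration of why redefining one reference updates every placed instance at once:

```python
# Minimal sketch of the reference/instance idea: every placement stores
# only a reference name plus a transform, so redefining the reference
# changes all instances at once (hypothetical data model, not a real API).

definitions = {"triangle_slot": "low-poly triangle mesh"}

instances = [
    {"ref": "triangle_slot", "move": (0, 0, 0)},
    {"ref": "triangle_slot", "move": (5, 0, 2)},
]

def resolve(instances, definitions):
    """Expand each instance into the geometry its reference points to,
    keeping the per-instance transform."""
    return [(definitions[i["ref"]], i["move"]) for i in instances]

# Swap the definition once...
definitions["triangle_slot"] = "projector-part kitbash mesh"
# ...and every instance now resolves to the new geometry.
```

This is exactly how block/feature references behave in CAD packages: the find-and-replace step then reduces to reassigning one definition per detected shape class.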
@diff-arch Whoa man, extremely impressive. I appreciate you taking the time for this. I'll have a go and see where it leads me. I've never played with ZBrush, but that part looks easy.
@DavidRutten I wish I could select two solutions here. Thanks for the script. I've been playing with it all week, and I'm still working out a couple of issues I've found by pushing it further. Thank you. Also, good eye on the Mandelbulb. Pretty interesting!
Everyone has been really helpful. I'll update with the finished product in case someone in the future has a similar question (highly doubt it). Frankenstein shit