Thanks, Scott!
Your comment on the prototype Sub-D predecessor was one of the inspirations for this further development of the HST project. I’ve always been fascinated by helmet shapes in movies and shows and how they convey character. While industrial helmet production is the obvious application, it got me thinking about other scenarios where helmet-surfacing experience could be applied and integrated with an interface.
As I experimented more with Grasshopper and NURBS, I discovered numerous possibilities. Compared with the time it used to take me to explore these shapes manually, the current program speeds up the process by a couple of orders of magnitude without compromising NURBS quality.
The concept is working well and is stable. It guides a parametric model through a designer-friendly UI, letting designers explore shapes in real time with comprehensive tools for visualization and analysis. Currently, HST is designed for a specific workflow, and users are expected to undergo training to use its features effectively. We plan to refine the interface based on feedback from more users. The entire approach is modular, with the aim of creating customized versions for various industrial applications.
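To make the parametric-model idea concrete, here is a minimal GhPython-style sketch (not HST’s actual code) of how a few slider inputs could drive a lofted NURBS shell; `width`, `depth`, `height`, and `crown_bias` are hypothetical component inputs, and HST’s real parameterization is far richer than this:

```python
# Minimal sketch of a slider-driven helmet shell (GhPython component).
# width, depth, height, crown_bias are hypothetical number-slider inputs.
import math
import Rhino.Geometry as rg

def section(z, rx, ry, samples=24):
    """Closed degree-3 NURBS cross-section (ellipse-like) at height z."""
    pts = [rg.Point3d(rx * math.cos(2 * math.pi * i / samples),
                      ry * math.sin(2 * math.pi * i / samples), z)
           for i in range(samples)]
    return rg.NurbsCurve.Create(True, 3, pts)  # periodic, degree 3

def helmet_shell(width, depth, height, crown_bias, levels=8):
    """Loft elliptical sections that taper toward a crown point."""
    curves = []
    for i in range(levels):
        u = 0.85 * i / (levels - 1)            # stop short of the apex
        taper = math.cos(u * math.pi / 2) ** crown_bias
        curves.append(section(u * height,
                              0.5 * width * taper,
                              0.5 * depth * taper))
    crown = rg.Point3d(0, 0, height)           # loft closes to this point
    breps = rg.Brep.CreateFromLoft(curves, rg.Point3d.Unset, crown,
                                   rg.LoftType.Normal, False)
    return breps[0] if breps else None

a = helmet_shell(width, depth, height, crown_bias)  # component output
```

Because every slider change just re-runs a function like this, the canvas preview updates immediately, which is what makes real-time exploration practical.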
We’re also considering integrating HST with Unreal Engine for in-environment evaluation and character animation. At this stage, however, our focus remains on viewing helmet models in Rhino, which performs well even on my seven-year-old 980 laptop.
Looking ahead, we have an extensive roadmap, including different topologies, features like cuts and bosses, and “onion layering” of surface sets (see the sketch below). Currently, HST focuses on establishing base surface forms and flow lines and on creating “graphed” images for initial sketches, with subsequent refinement handled by traditional Rhino methods.
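In case “onion layering” is unfamiliar: the idea is to derive concentric shells (comfort liner, impact foam, outer shell, and so on) from one base surface set by offsetting it outward. Here is a minimal RhinoCommon sketch under that assumption, with hypothetical layer thicknesses; the real feature would also need to handle trims and blends:

```python
# Sketch of "onion layering": concentric shells via surface offsets.
# Layer thicknesses are illustrative, not HST's actual defaults.
import Rhino.Geometry as rg

def onion_layers(surfaces, thicknesses, tol=0.01):
    """For each cumulative thickness, offset every base surface outward."""
    layers, running = [], 0.0
    for t in thicknesses:                       # e.g. [3.0, 20.0, 2.5] in mm
        running += t
        shell = [s.Offset(running, tol) for s in surfaces]
        layers.append([s for s in shell if s])  # drop failed offsets
    return layers
```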
AI also figures in our future plans. We envision training AI models through a process where the initial and final states of a model are set, the changes are narrated aloud, and the resulting modifications are uploaded to the AI engine. The end goal is voice-controlled modification of models through AI.
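As a sketch of what one captured training example might look like under that scheme, here is a hypothetical record pairing the before/after parameter states with the narrated description; every field name here is a placeholder, not a committed schema:

```python
# Hypothetical shape of one narrated-edit training record.
import json
import time

def make_training_record(state_before, state_after, narration):
    """Bundle one narrated edit into an uploadable JSON record."""
    delta = {k: state_after[k] - state_before[k]
             for k in state_after
             if k in state_before
             and isinstance(state_after[k], (int, float))}
    return json.dumps({
        "timestamp": time.time(),
        "state_before": state_before,   # parameter snapshot pre-edit
        "state_after": state_after,     # parameter snapshot post-edit
        "delta": delta,                 # what actually changed
        "narration": narration,         # transcribed voice description
    })

record = make_training_record(
    {"width": 210.0, "height": 180.0},
    {"width": 210.0, "height": 195.0},
    "raise the crown by fifteen millimetres")
```

Pairing the numeric delta with the spoken description is the kind of supervised signal a voice-to-modification model could learn from.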
I’m eager to discuss this further and explore opportunities to bring this project to market. Let’s start the conversation!