My old rattle gun

decided to model up my first rattle gun, which my dad bought me for my 13th birthday.

Image via Rhino model > runchat > Photoshop > Lightroom

input for the image (new Patch command in v9 for the win)


fwiw, if you have not played with Lightroom for editing your 3D renderings… you are missing out. The looks you can get are pretty amazing.


Those are some good results; it preserved the details nicely.

Does the rattle still work?

That’s the thing I like about runchat… it keeps the model completely intact and simply renders it based on the input images and prompts I give it.

I have used Lightroom for photos, but never played with it on a 3D rendering… the results are quite fun. You can change the mood of an image quite a bit…

Yes, it still works… I’m using it to work on an old ’73 Mini Cooper with my daughter these days.


Nice

I remember seeing some great touch-ups with Lightroom back in the day when I was starting with Rhino.

It must have developed quite a lot by now.

it’s pretty amazing actually… I will be adding it to my 3D modeling toolbag going forward.


Fantastic rendering.
Where did you use the Patch? How big is the difference between the previous surface and the current one? I’d be curious to see the difference.

The texture on the rattle looks great. What I find a little odd is the background: the lighting on the ground makes the space look bigger than you would expect.

I haven’t tried runchat this way yet. Was this just prompt based rendering or also using reference images?

It would also be great if it could eventually give back the textured model(s).


the transition between the handle and the body is a mix of Patch and BlendSrf. I did a bunch of iterations on that area to get the highlights to flow like I wanted. I started with all Patch, but the light was tracking weird at the back, so I redid that area with BlendSrf, then re-patched the front section. It’s not class A, but as Sky likes to say, it’s class “eh”, which is fine for what I needed here.


I am still a runchat noob at best… but yes, this is all prompt-based, with a few reference images to generate the base image; then I edited some stuff in Photoshop and tuned the lighting in Lightroom.

I believe Vizcom is able to generate a textured model.


Hi! Awesome model and rendering—love the details on this classic rattle gun. Could you share more on the workflow? Specifically: Did you create UVs in Rhino (or elsewhere), and how did you handle texturing? Is the final image a full 3D render, or more AI-assisted via Runchat? Also, tips on using Lightroom for those mood adjustments? Thanks!

From what I understand, the model image in the first post is the input. No UVs, no textures.

hey Alan,
the model in the post is the only input… and a screenshot of it at that!

no UVs, no texture mapping, just a screenshot and a prompt. I did feed it a reference image of a similar object with the finishes I was looking for as a guide, but that’s it.

Once the image was generated, I did a few rounds of prompt-based iteration (make this part black, move the windows higher and out of the scene but keep the lighting as is, etc.).

I then took it into Photoshop and retouched a few bits I didn’t like, then into Lightroom (which is my shiniest new toy currently) and did some iterations on lighting, color toning, contrast adjustments, etc.

all in all it was a very fun, very exciting, and very rewarding process, with the result being something that is quite a standout and a departure from my typical rendering work.

Is it perfect in every way? God no… there is so much more to learn about these tools, but I’m looking forward to the ride. The speed and power of this stuff is pretty intoxicating. I get the pushback: if all you are doing is numbly typing a few ideas and then taking what it pukes out, sure, it lets you be super lazy and do that. But… if you actually use it and bend it to your own vision… wow, what you can get the genie to do for you is pretty awesome.

for instance, Vizcom is advancing this game daily and gets more amazing every day I check in with them.

traditional rendering may be going away sooner rather than later, IMO.


Oh yeah, LR is my main editor for photography, but I’ve been using it for renders now as well, along with PS.


Yes, so true. Exploring is so captivating. Instant feedback.

Yes, for exploring concepts. But it’s like a 3D model: we need both the geometry and textures for good reason. In game development, for instance, you require the complete virtual object. Ultimately, this gives us more prototyping options. Once you’ve decided on the representation, you can build it fully in 3D or in reality.


I think Nvidia already presented a solution to this two years ago. There was a tool, whose name I cannot for the life of me recall, that basically meant there would be no need for graphic artists. It generated 3D objects with textures, ready to integrate, based solely on a prompt.

NVIDIA Magic3D feels so old: Magic3D: High-Resolution Text-to-3D Content Creation

NVIDIA’s newer tools: LATTE3D (2024) generates textured 3D in seconds; Edify 3D (2025) creates high-quality assets in ~2 minutes; latest AI Blueprint (2025) with Microsoft TRELLIS prototypes scenes rapidly from prompts.

They all look like toys compared with this rattle gun!


Indeed, this is hardly surprising.

For 2D, the model can draw from an entire scrape of places like ArtStation and thousands of historic 3D renders, and I guess start from a better position.

3D I guess is probably considerably harder, as it has to do all of the UVs somehow, drawing from maybe smaller data (photogrammetry?), and there are probably orders of magnitude fewer examples where all orthographic views are available to scrape. So it does seem to produce a bit of a mess, which may be good enough for original-Oblivion levels of quality. I also guess the textures are drawn from an unknown colour space and lighting expectation, so it has little generality.

I have noticed with these things that they do like to draw from a lot of the UV-heavy “dirty” texturing, even without being asked. But again, when you go on places like ArtStation and Behance, the cyberwave “gritty” look was, and still is, quite popular.


Also what’s interesting here is that the can is actually hidden behind the rattle gun, so the image is technically correct. Did you give it as a prompt?

What surprised me is how accurately the AI reconstructed the shadow shape of an object we never see.

It understood the volume, the height, and even the contour of the can, projecting it onto the floor as if it had real physical presence. A nice reminder that sometimes the AI “sees” more than it shows. Light behaving consistently even when geometry is occluded.

yep, that was pretty impressive. It did that “on its own”; I did not prompt that.

my prompt actually was pretty simple:

aged aluminum surface for the tool, it’s set on a well-worn garage floor, include dirt and oil stains on the floor. The space is an old warehouse or auto garage with golden hour light coming through warehouse-style windows. (Which is exactly the brief I’d give myself to then hand-assemble and fiddle with those elements in a traditional render.)

I did a few iteration prompts: “make this part black with radial scratches”,

“move the warehouse windows up out of the frame but keep the lighting effect they generate visible.”

then some fiddling in Photoshop and Lightroom.
