Introducing RhinoBanana – an AI visualisation plugin for Rhino

Hi Johan,

This is a really cool tool! I am wondering about the material guidance: the key features say you can send a rendered preview to guide the overall materials. How and where do I do that? Also, if I may make a suggestion/wish: it would be very convenient if it were possible to “lock” all materials and scene settings from one render, so that if you only update the view of the object in Rhino, the next render is identical apart from the changed view. Sort of how you would use dedicated render software.

Best,

Mikael


Hi @Mikael2

Thank you for your question. I think you’re right to be confused about this—at the moment, the UI doesn’t offer much guidance. This is something I plan to improve in the next version.

First, to answer your question about using Render Preview as a material guide:

1. In the Prompt tab, generate a prompt, turn on “Include render preview”, and save the prompt.

2. In the Render view, select the prompt you just created. You will now see a preview of what is being sent, and under “Summary” you can see the full prompt reflecting that the render preview will be used as the material guide.

3. Click Render, and hope for the best 🙂

Regarding your request:

RhinoBanana supports Named Views in the Select View dropdown, which can give you some consistency. I’m not sure if this is what you’re after?

At the moment, the plugin does not work with Layer States or Snapshots. These would allow you to turn different elements on and off, and even apply different materials to the model. In my experience, though, this can become quite confusing once you start editing the model, and maintaining these states as new geometry is added requires a lot of work. Because of that, I’ve never fully embraced these features.

I’m not entirely sure which of these you were hinting at, or if I’ve misunderstood your request.

I am, however, considering introducing a batch render feature in the future, so you can render multiple views in one go.

I hope this answers your questions.

Best,
Johan


Hello again, I just received an “Error: Failed to fetch” for a 4K rendering. No need to top up my credits again, but it would be good if the system were more robust.


Agreed, this is a very serious issue. Thank you for reporting it. I believe I’ve tracked down a log entry from around the same time that could explain this behavior. I’ve just pushed a fix to the backend (the plugin does not need to be updated). If you, or anyone else, see this again, I’d love to hear from you.

Best,
Johan


Yes, the 4K render seems to be more stable now! Another thing I discovered was that the output sometimes deviates a lot from the viewport, as if the default system prompt is not strong enough. One image had a slightly lower camera angle, and another a completely different design but with the same elements.


Ok, this is good to hear.

Regarding the issue, I think part of this is an inherent limitation of the current image-generation AI models. That being said, there is probably a lot that can be done to improve it. Have you tried overriding the default system prompt? You could try adding something like:

The provided image is a fixed camera reference.
You must treat it as a locked photograph.

Do not change:
- camera height
- camera tilt
- camera rotation
- perspective
- framing
- crop
- focal length

Do not reinterpret or redesign the scene.
Only improve materials, lighting, texture, and realism.
Any deviation from the provided image is incorrect.
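As a sketch, this kind of system-prompt override could be combined with the user’s prompt before anything is sent to the model. Everything below is illustrative: the function and constant names are hypothetical, not RhinoBanana’s actual API.

```python
from typing import Optional

# Hypothetical camera-lock override, condensed from the prompt text above.
CAMERA_LOCK_OVERRIDE = """\
The provided image is a fixed camera reference.
You must treat it as a locked photograph.
Do not change: camera height, tilt, rotation, perspective, framing, crop, or focal length.
Do not reinterpret or redesign the scene.
Only improve materials, lighting, texture, and realism.
Any deviation from the provided image is incorrect."""

def build_prompt(user_prompt: str, system_override: Optional[str] = None) -> str:
    """Prepend an optional system-prompt override to the user's prompt."""
    parts = [system_override, user_prompt] if system_override else [user_prompt]
    return "\n\n".join(p.strip() for p in parts if p)

full = build_prompt(
    "Photorealistic studio render of a plexiglass wedge heel.",
    CAMERA_LOCK_OVERRIDE,
)
print(full.startswith("The provided image"))  # prints True
```

The idea is simply that the override leads the prompt, so the model reads the camera constraints before the scene description.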

I’m thinking about whether it could be helpful to add an “edit image” feature, where you open a rendered image, select or draw an area, and prompt a change: basically Photoshop’s “Generative Fill”, but using NanoBanana. There are plenty of services out there that offer this, but maybe it would be nice to have it in the workflow?

What do you think is the best way forward?

Best,
Johan

test 1


User prompt

Photorealistic studio render of a transparent plexiglass wedge heel, premium industrial design product shot. Crystal-clear PMMA material, flawless transparency, subtle internal reflections, realistic light bending, clean caustics on the ground plane. Sharp yet softened edges, CNC-polished finish. Neutral background, high-end product lighting, no noise, no distortion, ultra-sharp details.


This is the render made by ChatGPT using the same initial image and the same prompt. Banana’s result is clearly better; here ChatGPT even rendered the plexiglass shape, which was not requested.


Hi @Johan_Lund_Pedersen
I can confirm that this would be extremely useful for product designers as well (and probably jewelers).

I’m a bit confused about the costs: on your website you present a chart with the estimated credit costs and credit consumption for each iteration.
Is this cost on top of the Gemini costs, or is it the total cost of plugin use plus generation?

Could you explain this more in detail?

Thanks,
and keep up the good work, it’s a very welcome feature!

Very nice. Strange that it mirrored the shoe, though. Really nice to see it in context. I’m an architect, so I’m not super familiar with the designer workflow. Is anything missing that a typical designer workflow would need?

Yes, let me elaborate.

I cover the API costs of the model providers (right now, only Gemini). You buy credits, and those credits can be used to render images. The number of credits required depends on the model you choose and the resolution you want. For example, NanoBanana is 0.5 credits at 1k resolution, while NanoBanana Pro at 2k is 3 credits, and so on.
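As an illustration of the pricing model described above, using only the example figures quoted in this post (the real price table lives on the RhinoBanana website and may change), a hypothetical cost lookup might look like:

```python
# Illustrative credit-cost table: only the two figures quoted above are real
# examples; the structure and names here are made up for this sketch.
CREDIT_COST = {
    ("NanoBanana", "1k"): 0.5,
    ("NanoBanana Pro", "2k"): 3.0,
}

def render_cost(model: str, resolution: str, images: int = 1) -> float:
    """Total credits for a batch of renders at a given model/resolution."""
    return CREDIT_COST[(model, resolution)] * images

print(render_cost("NanoBanana", "1k", 4))  # 2.0 credits for four 1k renders
```

In other words, credits are a flat per-image price that depends only on the model and resolution chosen.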

There are a few important things to be aware of. The plugin is very new, and I’m still learning as I go. I recently discovered that I misread the NanoBanana Pro pricing. I originally thought that the 1k resolution would cost roughly half of the 2k resolution, but that’s not the case—they cost the same. This means that there is currently a deficit on the 1k option, which is not sustainable and will need to be fixed.

The main point I want to make is that there may be some changes over the next month or so. This isn’t ideal, but I’ll do my best to keep things as stable and transparent as possible.

Another potentially confusing aspect is that credit prices are currently listed in DKK. This means the final price you pay depends on the exchange rate at the time of purchase.

Finally, because the models are provided by North American companies, I may need to adjust prices if the exchange rate between DKK and USD changes significantly. As the plugin gains traction, I will likely move to settling payments in USD, as this would result in more stable credit pricing.
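A small sketch of why DKK-denominated prices drift for a USD-paying buyer; the pack price and exchange rates below are made-up numbers purely for illustration:

```python
def usd_price(dkk_price: float, dkk_per_usd: float) -> float:
    """Convert a DKK credit price to USD at a given exchange rate."""
    return round(dkk_price / dkk_per_usd, 2)

# The same hypothetical 70 DKK credit pack at two hypothetical exchange rates:
print(usd_price(70.0, 7.0))  # 10.0 USD
print(usd_price(70.0, 6.5))  # 10.77 USD, as the DKK strengthens against the USD
```

Settling in USD would pin the buyer’s price and move the exchange-rate risk to the seller’s side instead.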


Thanks, that seems fair.

I may be missing something, so my next question could sound silly, but why not charge only for the plugin and let people use their own Gemini accounts?


@skysurfer , I think that’s a good question. There are a couple of reasons.

In my experience, there are many very talented architects using Rhino to create great architecture, without necessarily being very tech-savvy. I think this setup makes things a bit simpler.

Second, right now Gemini offers the best image model, but that will probably change over time. There are many players in this space, and development is happening rapidly. That means users would likely need to handle multiple API keys.

Third, in an architectural office context, management would probably be reluctant to hand out their own API keys, given the risk of large, unpredictable costs at the end of the month. This approach gives more control over spending.

That said, I think all of these arguments could be handled technically, so a licensed software model could also work.

Would you prefer a licensed version?

Best,
Johan


Let’s say an effective use would be, for example: I create a first render in Rhino, and the AI’s role is to let me tell it to change one material to another, or to place the shoe in a real context, such as on a shelf or being worn.

I agree. I think making the best use of AI means putting the project in context; still-life renderings can already be done with the current tools.


How do I install RhinoBanana? I just bought it but don’t know how to install it.


Just a quick note to say that after a few tries I’m very pleased with how it works; the results are stunning and very helpful.

I got some strange images, maybe one out of five, but overall I’m very impressed by how it preserves the geometry and can follow the prompt very closely.


Hi @Faisal_Azam

You install RhinoBanana via the Package Manager in Rhino.
Use the command “PackageManager”, search for “RhinoBanana”, and install it. Then restart Rhino.

You can now run the command “RhinoBanana” in Rhino, which will open the UI.

There is a video and a short description here: https://rhinobanana.com (the second video on the page).

You need an account to use RhinoBanana. On signup you get a small number of free credits (10) so you can try it out and see if it works for you. These credits can be spent rendering images with NanoBanana (Gemini). Once your credits are used up, you’ll need to buy more to continue rendering. The plugin itself does not cost anything.

If you accidentally purchase credits, I’m happy to refund them — just contact me at app@rhinobanana.com.

Best,
Johan

Hi @brvdln

Yes, you would need to be able to add your own images to the prompt. I think there are many use cases for this, and it’s a feature that should be included.

I have bought the credits, just…

In my opinion: V-Ray all the way! It’s more fun, and the user has the chance to express themselves, to make choices, to make the work their own.

These AIs are taking on far too much control. In a few years we’ll end up becoming chronic, certified fools. True, it’s faster, but some of these renderings look far too glossy, devoid of emotion, of art…
