Although AI is not quite there yet at reproducing specific details on command, it’s getting very good with overall lighting, mood, and textures. In that respect its output can be sensational. One way to leverage this is to give it old renders as a foundation and have it re-imagine them. In this case I gave it three images to reconstruct, and I think it may be kicking my b-tt.
Without worrying about feelings, how would you vote on these images?
The tone of the AI images is very nice here. I’m not convinced by the J.J. Abrams lens flare on the 2nd classroom image.
The problem for me arises when I realise that a lot of the enjoyment of rendered images stems from the fact that someone put a good amount of work into them. It’s a strange mix of aspiration and envy, because I know it can be done; I have the tools, just not the skill.
If I decide to punish myself by looking somewhere like ArtStation, the first thoughts that occur to me are often “I wish I could do that” or “I wonder how they did this?”. When I see and know that a piece of art is AI, the wonder and intrigue are gone, even if the average objective or subjective quality is better.
I still think they are good starting points if you are struggling for ideas. But I think the struggle to create is part of the satisfaction at the end. They also do a really good job at texture creation or enhancement sometimes.
The takeaway here for me is, we all need to know how to use this stuff, because clients, once exposed to images like this:
will start looking at images like this:
and go meh… can you make it more dramatic?
As artists, we can argue why the second image is “better”,
but in the end, direction comes from whoever writes the check; suggestions come from everyone else.
When faced with a largely unrefined client base (at least here in the US), you will need to be able to make louder and shoutier images, because the more of this stuff that gets out in the wild, the more it becomes the standard, not the exception…
Don’t believe me? Look at any image published in any women’s fashion magazine (or any car magazine, for that matter) and then look at the actual images from the camera before they were “re-imagined” in Photoshop.
This isn’t new; it’s just a new-to-us, superpowered tool for doing this type of work (enhancing images).
The other opportunity here is to feed your own stuff to the AI (as you have done here) as a self-critique, and then use its output to guide changes or revisions to your own work mid-development-cycle… this can give you some “outside” perspective that those of us who often work alone miss, compared to being in a studio with peers.
It can also allow for variation (design once, then AI the variations) to expand billables (render once, bill for multiples), letting you leverage your own assets to make a bigger, more elaborate presentation in less time.
(Lovely images all, Thomas; thanks for sharing and opening the discussion.)
Perfect take. That’s exactly the thought I’ve been sweating over for the past 12 months. AI is trained on world-class images, so it has raised the bar so far that we have to somehow leverage it or be left behind.
A lot of people dismiss it, but it’s no fun being blindsided.
This is a sample of what AI can do currently: → Sample gallery
… and it has only been in existence for about 18 months. Imagine what’s coming. Midjourney V6.0 just came out with better understanding of prompts, and in the next point releases they will also introduce partial fill, where you can point to the area you want it to redo.
At work, we talk about this every day. We and our clients are exploring AI image generators for creativity and inspiration. This is only a small part of our work, though; most of it involves optimization, coordination, documentation, and construction admin. It’s not that simple, but we hope to use it well to assist our clients and improve our communication.
We use it for practical purposes, such as making storyboards and eye-catching images for presentations. I don’t need Photoshop as much as before for these tasks.
The only issue is that the AI did it within seconds, with a prompt and a few refreshes, while it took me several hours of playing with textures and lights, and I’m still not happy. Then the AI added an orange at the top and a touch of foam, and I gave up.
While AI can make all those convincing-looking images, what happens when your client asks you to make this or that design change? Currently AI is not subtle enough. Also, it works mostly on things you see often; if you need to design a niche product, AI is going to be of very little help. At least that’s my experience so far.
I see the writing on the wall. “It can’t do something at this moment” is not a thought I’d want to rest on.
Adobe has an “Infill” option that allows specific details to be re-imagined with a new prompt. Midjourney is adding this feature in an upcoming release. It’s getting tighter and tighter.
Imagine a day where an AI render engine produces a photo or artwork for you straight from Rhino’s shaded viewport, or from OBJ files. You’d give it a shaded viewport and a prompt for mood, lighting, and photographic style, and it would produce a world-class image.
Well, that’s what SketchUp is doing right now with Stable Diffusion.
I’m not smitten with the results just yet… it’s a lot of hit and miss, and not very reliable/consistent as to what the result is going to be. Maybe it would be more consistent if you knew more about which prompts work well and which do not.
I’m still having a lot of fun with it though, using it more for idea-generation than for final results.
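For anyone curious what that viewport-to-image loop looks like in the open-source world, here is a minimal sketch of the general img2img idea using the Hugging Face diffusers library. This is not SketchUp’s actual pipeline; the model checkpoint, strength value, and file names are assumptions for illustration only.

```python
# Minimal img2img sketch: re-imagine a shaded viewport capture with Stable Diffusion.
# Assumes the Hugging Face diffusers library; model and file names are illustrative.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any Stable Diffusion checkpoint works here
    torch_dtype=torch.float16,
).to("cuda")

# A screenshot of the shaded viewport (hypothetical file name).
init_image = Image.open("viewport_capture.png").convert("RGB").resize((768, 512))

prompt = "architectural interior, warm late-afternoon light, photorealistic, 35mm photo"

# strength controls how far the model may drift from the viewport:
# low values keep the composition, high values let it re-imagine freely.
result = pipe(prompt=prompt, image=init_image, strength=0.45, guidance_scale=7.5).images[0]
result.save("reimagined_render.png")
```

The strength value is the interesting dial here: it is essentially the trade-off between keeping your modeled geometry and letting the AI take over, which is probably why the results feel hit-and-miss until you find a range that suits your scenes.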
Have you had any success with revision?
If I ask Midjourney to give me a mid-fifties retro chair, it will do a great job.
But if I select one of the images and ask it to make the legs thicker, it gives me another chair.
I think at this time Midjourney is good for image generation but not revision.
Using a seed or the image itself as part of the prompt helps, but not enough.
It doesn’t have an infill option yet. I wonder, if you open the image in Photoshop, highlight the legs, and tell it “thick retro chair legs”, whether it will do it. (I don’t subscribe to PS at the moment.)
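For what it’s worth, that “highlight the legs and re-prompt” idea is exactly what open-source inpainting does today. Below is a minimal sketch assuming the diffusers inpainting pipeline; the mask (white over the legs, black elsewhere) would be painted by hand in any image editor, and the model and file names are illustrative, not a claim about how Photoshop or Midjourney implement their fill features.

```python
# Minimal inpainting sketch: regenerate only a masked region (e.g. the chair legs).
# Assumes the Hugging Face diffusers library; model and file names are illustrative.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

chair = Image.open("retro_chair.png").convert("RGB").resize((512, 512))
# White where the legs are, black everywhere else; only the white area is redone.
legs_mask = Image.open("legs_mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="thick turned wooden legs, mid-fifties retro chair, studio photo",
    image=chair,
    mask_image=legs_mask,
).images[0]
result.save("chair_thicker_legs.png")
```

The rest of the chair is left untouched, so this kind of masked revision is much closer to the “make the legs thicker” request than re-rolling the whole prompt in Midjourney.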