Is this the future of modeling and rendering in seconds?

Google Imagen is a text-to-image AI model that creates photorealistic images from a natural-language prompt. It is already powerful enough (almost supernatural) that the authors chose not to release a public demo for ethical reasons: “The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo.”

What would happen if this tech produced models that could also be 3D printed or machined?


The future of design might well be wrangling AIs, but there is a pretty big jump from photos, or even video, to manufacturable 3D, if for no other reason than that the quality and quantity of training data just doesn’t exist for the text-to-3D case. This work is built on the sum total of decades of humanity’s internet searches, and possibly our Google Photos libraries. An AI trained on the publicly available 3D models off the Net is not going to be very good… it might be good at making oddball fasteners, I suppose.

You may be too eager to dismiss it, and I can’t blame you since it sounds surreal. You are probably thinking of DALL·E 2 or Midjourney, which are exactly what you describe (mosaics of internet photo fragments stitched together in a fantastical way).

However, Imagen looks like the next step up. If you pay close attention to Imagen’s output, it seems to be spatially aware in terms of lighting, shading, texturing, and object occlusion.

There is strong suspicion it’s using a vast database of images to extrapolate texturized 3D meshes.

There has been a progression over the years:

  • Google Earth started with a rudimentary method of creating low-poly meshes for its 3D cities and trees many years ago.
  • The iPhone now has the Polycam app.
  • There are papers (already two or three iterations in) that produce parallax effects from single photos.

“Extracting Triangular 3D Models, Materials, and Lighting From Images” - instantaneously.
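If pipelines like that one really do output triangle meshes, then getting from the mesh to a 3D-printable file is the easy part. Here is a minimal sketch (not from any of the tools above; the triangle-list representation and `write_ascii_stl` helper are my own assumptions) that writes raw triangles out as an ASCII STL, the lowest-common-denominator format most slicers accept:

```python
import math

def facet_normal(a, b, c):
    # Normal = normalized cross product of two edge vectors of the triangle.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(x * x for x in n)) or 1.0
    return [x / length for x in n]

def write_ascii_stl(triangles, name="mesh"):
    # triangles: list of (a, b, c) vertex triples; each vertex is an (x, y, z) tuple.
    lines = [f"solid {name}"]
    for a, b, c in triangles:
        nx, ny, nz = facet_normal(a, b, c)
        lines.append(f"  facet normal {nx:e} {ny:e} {nz:e}")
        lines.append("    outer loop")
        for vx, vy, vz in (a, b, c):
            lines.append(f"      vertex {vx:e} {vy:e} {vz:e}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# One triangle lying in the XY plane; its facet normal should be (0, 0, 1).
tri = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
stl_text = write_ascii_stl(tri)
```

Of course, “a slicer accepts the file” is a much lower bar than “the geometry is watertight and manufacturable,” which is where the real text-to-3D difficulty lives.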