Principles of Lighting and Rendering with John Carmack


(Vicente Soler) #1


Carmack mentions Caustic Graphics in part 3.

People who are interested in rendering should already know all of this. The truth is, there are people who use renderers on a daily basis yet lack many of the fundamentals.

I share Carmack’s view that materials should be set up to behave as they do in the real world, rather than tweaked with ad-hoc methods until they seem to look good.
Brazil is a good example of a renderer that still carries all the legacy ad-hoc methods he mentions, combined with some more modern rendering techniques.


(Andrew le Bihan) #2

Yes - but I suspect that we still have to push in the direction of people not having to know about these things. I fully agree with you about using real-life material properties, but then of course you get the issue of: what happens when you want a specific effect that’s not physically correct?

I think the best production renderers will always allow the artist to twist the physics a little bit.

Carmack is coming from the world of games, or “online rendering”. We’re only now starting to see online raytracing - both in the gaming world and in our world (Neon, for example) and it’s an exciting time because obviously the games world has a lot more money to throw around than the CAD world. Seeing raytracing enter the mainstream with people like Carmack is very, very cool - and I know he’s been very involved in what Caustic have been working on.


(Vicente Soler) #3

Well, you might not want to know all about rasterization (although it’s the rendering method you use most, every time you are modeling) and vertex lighting and so on, but even if you are using a Maxwell-like renderer you should at least know the physics of light.

There’s nothing wrong with allowing the artist to bend the physics, if that’s what he really wants to do. The current problem is with supposedly real-world material definitions that in reality are not physically correct. Most of this has to do with making them look good in an incorrect setup (lighting, gamma, and so on) and, to a lesser extent, with trying to get it right in the raw render without any color correction in post.
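To illustrate the gamma part of that incorrect setup: lighting math done directly on sRGB pixel values gives a different (and wrong) answer from the same math done in linear light, which is one common reason supposedly physical materials get fudged until they look good. A minimal sketch, assuming the standard IEC 61966-2-1 sRGB transfer functions (the function names and the blend example are mine, not from the discussion):

```python
def srgb_to_linear(c):
    """Convert an sRGB channel value in [0, 1] to linear light (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Convert a linear-light channel value in [0, 1] back to sRGB for display."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# A 50/50 blend of black and white, e.g. averaging two lights' contributions.
naive = (0.0 + 1.0) / 2  # math done directly on sRGB values -> 0.5
correct = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)
# `correct` is about 0.735 -- noticeably brighter than the naive 0.5,
# which is why renders computed in sRGB space come out too dark in the midtones.
```

The same mismatch affects every addition and multiplication in shading, so a material tuned to look right in a gamma-incorrect pipeline is no longer physically meaningful anywhere else.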

As Carmack briefly mentions, he has been building offline raytracers/path tracers for years now to bake lighting into textures for his games, but he is now starting to see feasible real-time applications in the near future.

This is a minor point, but I also found his definition of biased/unbiased rendering interesting; it makes the most sense given the plain meaning of the word, yet most people don’t seem to use the term that way. Basically, only a pure path tracer would be unbiased rendering. Biased doesn’t necessarily mean it won’t converge to the true solution given enough time; it just happens to be biased toward shooting more rays in the more obvious directions.
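That idea of concentrating rays in the obvious directions while still converging can be sketched with a toy Monte Carlo integrator: a uniform estimator and one that draws more samples where the integrand is large both converge to the same answer, the second just with less noise. This is an illustrative sketch of my own, not anything from the talk; the 1-D integrand stands in for integrating light over directions:

```python
import random

def f(x):
    return 3.0 * x * x  # integral of f over [0, 1] is exactly 1

def uniform_estimate(n, rng):
    # Samples drawn uniformly everywhere, weighted equally.
    return sum(f(rng.random()) for _ in range(n)) / n

def importance_estimate(n, rng):
    # Samples drawn preferentially where f is large, using pdf p(x) = 2x.
    # Dividing each sample by its pdf keeps the expected value correct,
    # so the estimate still converges to the true integral.
    total = 0.0
    for _ in range(n):
        x = rng.random() ** 0.5       # inverse-CDF sampling for p(x) = 2x
        total += f(x) / (2.0 * x)
    return total / n

rng = random.Random(42)
u = uniform_estimate(100_000, rng)
i = importance_estimate(100_000, rng)
# Both approach 1.0; the second has lower variance for the same sample count.
```

Strictly speaking, importance sampling like this is still statistically unbiased; renderers usually called “biased” add further approximations (caching, interpolation) on top. The sketch only illustrates the point that favoring the obvious directions by itself doesn’t stop convergence.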