Well, you might not want to know all about rasterization (although it's the render method you use most of the time, every time you are modeling) and vertex lighting and so on, but even if you are using a Maxwell-like renderer you should at least know the physics of light.
There's nothing wrong with allowing the artist to bend the physics, if that's what he really wants to do. The real problem is with supposedly real-world material definitions that are not, in fact, physically correct. Most of this comes from making materials look good in an incorrect setup (wrong lighting, wrong gamma, and so on), and to a lesser extent from trying to get the raw render right without any color correction in post.
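To make the gamma point concrete, here is a toy sketch (my own illustration, not from the article) of why doing material math on gamma-encoded values gives wrong results: blending two sRGB-encoded intensities directly produces a different gray than blending in linear light, which is what physically correct shading requires. The simple 2.2 power curve is an assumption standing in for the real piecewise sRGB transfer function.

```python
GAMMA = 2.2  # simple power-law approximation of the sRGB curve

def srgb_to_linear(c):
    return c ** GAMMA

def linear_to_srgb(c):
    return c ** (1.0 / GAMMA)

black, white = 0.0, 1.0  # sRGB-encoded intensities

# naive 50/50 blend done directly on the encoded values
naive = 0.5 * (black + white)  # 0.5

# correct blend: decode to linear light, mix, re-encode for display
correct = linear_to_srgb(0.5 * (srgb_to_linear(black) + srgb_to_linear(white)))

print(naive, correct)  # 0.5 vs ~0.73 -- visibly different grays
```

A material tuned to look right under the naive math will be wrong the moment the pipeline is fixed, which is exactly the trap described above.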
As Carmack briefly mentions, he has been building offline ray tracers/path tracers for years to bake lighting into textures for his games, but he is now starting to see feasible real-time applications in the near future.
This is a small point, but I also found his definition of biased/unbiased rendering interesting; it makes the most sense given the plain meaning of the words, though most people don't seem to use the terms that way. Basically, only a pure path tracer is unbiased. Biased doesn't necessarily mean the render won't converge to the true solution given enough time; it may simply be biased toward shooting more rays in the more obvious directions.
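That distinction, biased yet still convergent, is easy to show with a toy Monte Carlo integrator (my own sketch, not anything from the article). The "biased" estimator clamps bright samples, a common firefly-suppression trick in renderers; it systematically loses energy at low sample counts, but because the clamp loosens as samples accumulate it still converges to the true answer.

```python
import random

def f(x):
    # toy integrand: the integral of 3x^2 over [0,1] is exactly 1
    return 3.0 * x * x

def unbiased_estimate(n):
    # plain Monte Carlo: the expected value equals the true integral
    # for ANY sample count n -- zero bias by construction
    return sum(f(random.random()) for _ in range(n)) / n

def biased_estimate(n):
    # clamp bright samples; for small n the clamp cuts real energy,
    # so the expected value sits below the truth. Because the clamp
    # grows with n, the estimator is biased yet still consistent:
    # it converges to the true answer given enough samples.
    clamp = n ** 0.5
    return sum(min(f(random.random()), clamp) for _ in range(n)) / n

if __name__ == "__main__":
    random.seed(0)
    print(unbiased_estimate(100_000))  # close to 1.0
    print(biased_estimate(100_000))    # also close to 1.0 at large n
```

With only 4 samples per run, the clamped estimator averages noticeably below 1 across many runs (the bias), while both land on 1.0 at high sample counts (the convergence).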