A colleague tells me that SketchUp runs far more smoothly if PNGs are used for textures. I’m wondering if there’s any guideline with regard to the image file type used in Rhino?
PNG files are more efficient since they are smaller than JPGs.
I prefer using PNG files for that reason.
Objection: that depends on the (lossy) compression quality of the JPG. PNGs are compressed too, but losslessly, which makes them bigger most of the time.
I am reminded of the old adage that all generalizations are false, including this one.
The take-away is to use whatever works best for your needs and situation.
Traditional JPGs don’t support transparency.
The higher the JPG compression level, the less fidelity you get at sharp edges. Fuzzier…
With PNG you don’t get as much compression… but you get sharper edges.
Fewer color gradations in PNG than in JPG, too…
Also, because JPG colors are quantized, the limited number of indexed colors or shades will give you dithered transparencies… unless you use ranges of colors (HSV values, etc.) - think of the Photoshop lasso tools (lots of options). I did that for a few years… never perfect without manually lassoing some areas with JPGs!
Nonetheless, transparency with JPGs is doable…
I’m supposed to be an expert with compression, so I read a bit more about PNGs. Their compression is lossless - DEFLATE, which is essentially run-length and pattern matching (LZ77) - so colors stay true, and hopefully edges too. In JPEG, edges are averaged out (quantized and reduced to as few variations as possible on a smoothed-out scale - hence the loss of sharpness, or fuzzy edges).
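A minimal sketch of the lossless point, using Python’s stdlib `zlib` (the same DEFLATE algorithm PNG uses internally): compress a “scanline” with a hard black-to-white edge, decompress it, and the round trip is bit-for-bit identical - no fuzzy edges, by construction.

```python
import zlib

# A toy "scanline" of pixel bytes with a hard edge (black -> white),
# similar to what PNG feeds into DEFLATE after filtering.
scanline = bytes([0] * 100 + [255] * 100)

# Lossless compression: compress, then decompress...
compressed = zlib.compress(scanline, 9)
restored = zlib.decompress(compressed)

# ...and the round trip restores the original exactly.
assert restored == scanline
print(len(scanline), "->", len(compressed), "bytes")
```

JPEG, by contrast, has no such guarantee at any quality setting: its quantization step throws information away before encoding.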
Perhaps McNeel could clarify a technical point and put this to rest:
If they only decode a given JPG or PNG to a bitmap once, then reuse that bitmap even when it’s used in many places (like a block?), then the amortized processing differences should go to zero if you’re comparing apples to apples (no alpha channel).
If you need the alpha channel, then you’ve probably already made your choice of format.
If that assumption isn’t accurate and they decode every time they render so they can scale/transform, then the decode time matters even more.
If you are loading a huge number of JPGs/PNGs, then the decompression speed is both important and possibly dependent on the type of image, such as photos vs. vector art. You’d probably need to test, but I’d assume Rhino isn’t reinventing the image file format code, so the generally available benchmarks should be reasonably accurate.
So you might have to benchmark a little with images of the type you will actually be using.
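A minimal benchmarking sketch for that kind of test (Python, stdlib only). The `decode` argument is a placeholder for whatever fully loads one image in your pipeline - e.g. with Pillow you might pass `lambda: Image.open("texture.png").load()` - the file names here are hypothetical:

```python
import time

def time_decode(decode, repeats=20):
    """Time a decode callable, returning the best-of-N seconds.

    Best-of-N filters out one-off OS noise (cache misses, scheduling),
    which matters when individual decodes are only milliseconds long.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        decode()  # whatever reads + fully decodes one image
        best = min(best, time.perf_counter() - start)
    return best

# Usage sketch: compare the same texture saved both ways.
# jpg_time = time_decode(lambda: Image.open("texture.jpg").load())
# png_time = time_decode(lambda: Image.open("texture.png").load())
```

Run it against a handful of textures representative of your actual scenes rather than synthetic test images.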
Should be easy to see by comparing a zoomed/macro render of the JPG or PNG in question and how it’s distorted in your rendering engine. Did I get the question right?
I’m not McNeel! Just sharing my theoretical knowledge of rendering and of color/HSV values, quantized or not.
I deal with lots of data - files by the hundreds of millions, or enormous files by the GBs or TBs. If something is compressed via formulas (JPG, for example), decoding takes more time depending on the algorithm used, at the cost of quality (lost fidelity to the original data - think of copies of copies, with details lost at each copy). It still takes computing.
PNGs are run-length-style encoded/decoded, so decoding takes roughly 1.5x to 2x the disk-read time (it’s linear).
The more files there are, the worse the processing time. So 5 JPEGs at 5000k pix square might be faster than 300 PNGs. As rough figures: PNGs take about 2x longer to read than raw images; JPGs take about 10x less time to read from disk, but about 10x longer to decompress in memory.
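To make that trade-off concrete, here is a toy cost model (Python). The per-file cost ratios below are illustrative placeholders loosely based on the rough figures above, not measurements - swap in numbers from your own benchmarks:

```python
def load_time(n_files, raw_read_s, read_factor, decode_factor):
    """Estimate total load time for n identical files.

    raw_read_s    - time to read one file's worth of raw pixels from disk
    read_factor   - disk-read cost relative to raw (compressed files are smaller)
    decode_factor - decode cost relative to the raw read time
    All factors here are illustrative assumptions, not benchmarks.
    """
    per_file = raw_read_s * (read_factor + decode_factor)
    return n_files * per_file

# Illustrative comparison for 300 textures, 10 ms raw read each:
jpg_total = load_time(300, 0.010, 0.1, 1.0)  # tiny file, slower decode
png_total = load_time(300, 0.010, 0.5, 0.5)  # bigger file, faster decode
print(f"JPG: {jpg_total:.2f}s  PNG: {png_total:.2f}s")
```

The point of the model is that file count multiplies everything: whichever format has the higher per-file total dominates as the texture count grows.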
Then there is quality, the type of decompression algorithm, and the screen refresh time needed for the final customer display, all of which need to be synchronized.
There are limits to precision (fidelity/loss of quality) vs. compression and the number of images that can be handled.
Some background info at a high level - it doesn’t cover specific use cases.
What AI can do with details today is amazing, for sure! What I heard today that was amazing is that AI is just a parrot hallucinating the details it learned!
Also worth a mention: have fun extrapolating! Super computationally inefficient IMHO… but it leads to many wonderful technologies, like deduplication…