Red/Green Colorblind: Does this look the same?

Hi,
I am working on a tool that simulates red/green colorblindness to evaluate designs, and I wonder if these paired images look the same to someone who has that condition.

Some of the blobs look very similar, but others are quite different.

Ok, thanks @dale!
I’ll read up and investigate further.

(And thanks @stevebaer for constantly adding my posts to the right category!)

That’s fine, it just takes away time that I could be spending working on Rhino

I’m red-green color blind (deuteranomaly). I agree with Dale, some are the same, some quite different.

The ones that stand out as different (for me) are:
Row 1, item 3: I know there is something on the left (can’t tell what); I get nothing on the right.
Row 1, item 4: The 21 (?) is more distinct on the right.
Row 2, item 2: The background color of the 25 is darker on the right.
Row 3, item 1: The “No” is more distinct on the right.
Row 3, item 3: I can see the 27 on the left; not on the right.

I can actually only see six items in the circles, btw.


Super feedback, thanks!
I read up quite a bit on the topic last night; this is much more complex than I thought, which also makes it more interesting! As a designer it will be great to have a tool to evaluate how 1/8 of the male population (and 1/50 of the female population) sees the world!

I’ll hack away further on this when I get more time and have more know-how on how to simulate the different conditions.

-Jørgen

It’s nice when designers consider it. Few do*

*You know that feature in Grasshopper where you select a node and the preview geometry in the Rhino viewport turns from red to green?


You can change the preview color…

Mitch: I always do, then my co-workers come back to their computers and get all pissed off

You might get a bit of a quick approximation by bringing the image into a Lab color space in Photoshop or similar and flattening the ‘a’ curve.

I am not sure if I misunderstood you, Sam, or if you misunderstood me:
The image I posted is my result: the left part is the viewport, and the one on the right is the output of my script.

Or did you mean something completely else than that?

Here is a better image to show what I have stitched together:
Working with this has been a real eye-opener for me as a designer. I like the tool even in this WIP state, as it gives me a much better impression of what this is like, even though it is not 100% accurate.

I just had to check color circles, and I would not have imagined this.
Refining the calculations so the representation is as accurate as possible will be high on my to-do list, and then I’ll make a free plugin of this. I have dreamt of making a tool for this for many years, and suddenly I can make my own in Rhino.
Thank you, McNeel!

When I read Dan’s reply that there were discrepancies between the two images in your original post, I thought I would try the Lab trick to see whether the result would be closer to what he was describing (though judging this by typed messages on a discussion board is bound to lead to some misinterpretation). The results from the Lab color space trick seemed closer to what he described (or at least to how I interpreted his results), and resolved some of the discrepancies between the two images that he had pointed out.

So my post was meant as a suggestion of a strategy to try that might get you closer to a simulation of red/green colorblindness than the first image you posted. I am not a vision expert, but in a conversation I had a while ago with such a person (read: second-hand information, so take it for what it is worth), they said that a great tool for simulating this was flattening the ‘a’ curve in Lab color space. I’ve attached a version of this below; if they were right (I’m not sure they were, but it would be interesting to find out), the two sets of images should look pretty similar to a R/G colorblind person.

Sam


Thanks for the info!
I will definitely test this out and see if I can figure out how to simulate this behavior mathematically.

Hi @dale, @dan and @SamPage, thanks for the input!
Please check the new examples out.

I made color converters from RGB to Lab so I could set the Lab “a” value to 0 for each pixel, just like you did in Photoshop, and this is what the output looks like now:
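(Jørgen’s script itself isn’t shown in the thread, so here is a minimal single-pixel sketch in Python of the same idea: the standard sRGB ↔ CIE L*a*b* conversion (D65 white point) with the a* component forced to 0. The function and variable names are my own assumptions, not taken from his script.)

```python
def srgb_to_lab(r, g, b):
    """r, g, b in 0..255 -> CIE L*a*b* (D65 white point)."""
    def lin(c):  # undo the sRGB gamma curve
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (sRGB primaries, D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):  # CIE Lab companding function
        d = 6.0 / 29.0
        return t ** (1.0 / 3.0) if t > d ** 3 else t / (3 * d * d) + 4.0 / 29.0
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_srgb(L, a, b):
    """CIE L*a*b* -> r, g, b in 0..255."""
    def finv(t):
        d = 6.0 / 29.0
        return t ** 3 if t > d else 3 * d * d * (t - 4.0 / 29.0)
    fy = (L + 16) / 116.0
    x = 0.95047 * finv(fy + a / 500.0)
    y = 1.0 * finv(fy)
    z = 1.08883 * finv(fy - b / 200.0)
    # XYZ -> linear RGB
    rl = 3.2406 * x - 1.5372 * y - 0.4986 * z
    gl = -0.9689 * x + 1.8758 * y + 0.0415 * z
    bl = 0.0557 * x - 0.2040 * y + 1.0570 * z
    def gam(c):  # reapply the sRGB gamma curve, clamp, quantize
        c = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
        return round(min(max(c, 0.0), 1.0) * 255)
    return gam(rl), gam(gl), gam(bl)

def flatten_a(r, g, b):
    """The 'flatten the a curve' trick, applied to one pixel."""
    L, a, b2 = srgb_to_lab(r, g, b)
    return lab_to_srgb(L, 0.0, b2)  # a = 0: neutral on the red-green axis
```

Running `flatten_a` over every pixel of an image reproduces the Photoshop a-channel flattening per pixel; neutral grays pass through essentially unchanged, since their a* is already near 0.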

And a new test of the initial color test image:

Please check to see if this is quite accurate too:

Hi Holo, great work - it’s been a real eye opener following this discussion.

I have one question: why are the red tomatoes in the lower left-hand corner so dark in the colour-blind vision? The two don’t match in the right-hand image, and there is definitely a difference between them and the red traffic light, which to my eye (I’m not colour blind) seems about the same.

I included that image because I wondered the same thing. Let’s see what the guys say.

Some of those images on the left were images I used for benchmarking the algorithms.
But that one image is much darker in its representation of red, so I presume it is for a different kind of color blindness.

There are different types of red-green color blindness, and the one I am simulating here is Deuteranomaly, the most common one. There are also Protanopia and Tritanopia (though the latter affects blue-yellow vision rather than red-green).
(+ the complete color blindness known as Monochromacy)