I don’t agree with you, because it seems you are treating this as an exact-solution problem when it is really an approximation problem. The point is not to create algorithms that never fail, but algorithms that give you more value than you had before. Personal tastes are not arbitrary combinations of preferences; there is a great deal of causality behind them. We are not that random, and everything is related; the difficult part is measuring those relationships, or modeling that causality. I am not denying the complexity of the problem. It is surely greater than the complexity of even our most sophisticated future models, but that does not mean those models are useless, or that approximating subjectivity is a dream. There is a reason companies spend millions of dollars to get their hands on data; isn’t that validation enough?
If you agree with this but still think the same, maybe you are approaching it semantically, saying that it is not correct to claim an algorithm predicts our tastes. I agree with that: taken literally it is not true, and only someone tendentious or ignorant would say so. But from a practical standpoint, it is correct to say that you can predict certain things from certain data. The premise is very simple: if you have more knowledge about the customer, you can maximize sales by approximating the ideal product for the ideal customer, no matter how often you miss. And the more knowledge you have, the more you can infer and the less you need to measure the customer, because there is causality, or at least there are behavior patterns; exceptions don’t matter as long as it works on average.
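To make the "infer from patterns" premise concrete, here is a minimal nearest-neighbour sketch: all customer names and purchase histories are made up for illustration, and a real system would use far richer data and similarity measures.

```python
# Hypothetical purchase histories: 1 = bought, 0 = did not buy.
histories = {
    "ana":   {"desk": 1, "lamp": 1, "rug": 0, "shelf": 1},
    "bruno": {"desk": 1, "lamp": 1, "rug": 0, "shelf": 0},
    "carla": {"desk": 0, "lamp": 0, "rug": 1, "shelf": 0},
}

def overlap(a, b):
    """Count the items on which two customers behaved the same way."""
    return sum(a[item] == b[item] for item in a)

def predict(customer, item):
    """Guess a purchase by copying the most similar other customer."""
    others = [c for c in histories if c != customer]
    nearest = max(others, key=lambda c: overlap(histories[customer], histories[c]))
    return histories[nearest][item]

# Will "bruno" want a shelf? His closest match is "ana", who bought one.
print(predict("bruno", "shelf"))  # → 1
```

The point is not that this toy is accurate, but that even a crude similarity measure extracts usable signal from behavior patterns, and every extra measured dimension sharpens it.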
Imagine how useful this kind of semantic search engine would be for companies like Ikea, or any company with a high variety of categorical products (many types of chairs, for example, some designed for the kitchen, some for the garden, and so on) that can be categorized in thousands of different ways (for summer, for grandmothers, for singles…). As a customer, you label your subjectivity in these categories (or even in categories that do not exist yet, but that a natural language processor can relate to existing ones), and it returns the products that best fit your tastes. Sales go up, purchase time goes down. Primitive versions of this already exist, and there is no technical or theoretical limitation on approximating the customer’s tastes ever more closely. This is my only point: it is a complex problem, depending on how precisely you want to measure.
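A minimal sketch of such a search, assuming toy hand-made vectors: in a real system the product vectors and the query vector would come from a learned text-embedding model, and the product names and dimensions here are purely hypothetical.

```python
from math import sqrt

# Toy "embeddings" over three hypothetical dimensions:
# (outdoor-ness, formality, seasonality).
PRODUCTS = {
    "garden chair":  [0.9, 0.2, 0.8],
    "kitchen stool": [0.1, 0.4, 0.3],
    "office chair":  [0.0, 0.9, 0.2],
    "beach lounger": [1.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vec, top_k=2):
    """Return the top_k products whose vectors best match the query."""
    ranked = sorted(PRODUCTS.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# A query like "something for summer, outdoors", mapped by a hypothetical
# NLP front end into the same vector space:
print(search([0.95, 0.1, 0.85]))  # → ['beach lounger', 'garden chair']
```

This is exactly the "categories that do not exist yet" case: the query never mentions any stored label, yet it still lands on the right products because similarity is computed in a shared space rather than by exact category match.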
For the world of design, recommendation systems are the least of it; they matter more to wholesalers and retailers. But in the opposite direction this is relevant to the designer or the manufacturer: if you can obsessively measure the components of your products and associate them with sales, you can infer the properties that make your products popular, and design new ones that approximate the tastes of the market. Even though this is a dynamic and ephemeral system, you can make better decisions thanks to this trend analysis on steroids. The same goes for other applications within design: the goal of the tools is to help us. Everything can be modeled, and the challenge is not to make the model realistic, but to make it profitable.
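The "associate components with sales" direction can be sketched with nothing more than a correlation ranking; the attribute names and all the numbers below are invented for illustration, and a real analysis would use proper regression with many more observations.

```python
from statistics import mean, pstdev

# Hypothetical measurements: each column is one numeric attribute of six
# chair designs, alongside the observed sales of each design.
attributes = {
    "seat_height_cm": [45, 42, 48, 44, 46, 41],
    "price_eur":      [120, 80, 200, 95, 150, 70],
    "wood_fraction":  [0.9, 0.3, 0.1, 0.8, 0.5, 0.7],
}
sales = [310, 250, 90, 300, 180, 270]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

# Rank attributes by how strongly they track sales, sign included.
for name, values in sorted(attributes.items(),
                           key=lambda kv: abs(pearson(kv[1], sales)),
                           reverse=True):
    print(f"{name}: r = {pearson(values, sales):+.2f}")
```

On this toy data, price correlates negatively with sales and wood fraction positively, which is the kind of signal a designer would feed into the next product: not a realistic model of taste, just a profitable direction to move in.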