ChatPoster

I was looking at some posts from a new forum member whose posting style was different from most. I couldn’t help thinking there were elements that reflected what one might expect from an AI chatbot that had been trained on our forum posts.

Which raises the interesting questions:

  • If AI posts here, how will we know?
  • If we don’t know, does it matter?
  • If it matters, what should we do now, before it is too late?
  • Is it too late?

Regards
Jeremy

1 Like

We probably won’t…

Most likely…

That being said, one does have to create an account here in order to post. I’m not sure if an AI can do that on its own initiative - currently, at least… Maybe I’m wrong about that; if so, then yes, it’s a lot scarier than it seems. :grimacing:

However, I guess it would be possible for a real person to create an account now and somehow relay the posts to and from the AI, either automatically or manually.

AI is not yet advanced enough to be able to understand the context of conversations and produce posts that are on-topic and relevant to the discussion. AI is also not capable of understanding the nuances of human language or the subtleties of social etiquette, meaning that any posts produced by an AI would likely be seen as inappropriate or spammy. Additionally, AI lacks the creativity and imagination that is necessary for posting meaningful and interesting content on the McNeel forums. AI also lacks the ability to interpret data quickly and accurately, which is necessary for understanding the complex topics that are often discussed on the McNeel forums. Furthermore, AI lacks the ability to interpret graphics, images, and videos, which are often included in forum posts, meaning that AI would not be able to understand or respond to them. Finally, AI lacks the capacity to accurately identify and classify different topics, meaning that it would not be able to determine which topics are relevant to a particular discussion or post.

TL;DR: too soon. That being said, you’d not be able to tell an AI from a person unless you trick the AI by prompt jacking. The way things are moving, we will probably have a personalized AI for each user in the future, once we figure out the computing costs. At that point, when an AI is tailored to a specific user, not even speech-pattern recognition would be able to detect an AI (imo).

Sure, AI can generate content, but can it truly understand content? No, it cannot. It simply generates data based on the dataset it was trained on; at best it guesses, but it does not understand the way we humans do.

Hope I convinced you, fellow human, beep bop boop

1 Like

Nice! It’s a common misunderstanding what AI is and isn’t able to do (and basically how it works, beyond the fact that it uses nodes). I’m glad you’ve put that one right. We really need to pay attention to the fact that the datasets and rules are being amplified to a level where we hardly understand the consequences. Nearly every AI being trained is supervised.

Maybe you missed this:

1 Like

Let’s have a rigorous definition of understanding to weigh that statement against. If you say to me “draw a car”, I can put pen to paper because I know that the word car is associated with various concepts. AI can access those same associations and produce an image of a car. Can you determine which image was produced by me and which by AI? Probably not (in my case AI will be more realistic).

1 Like

It will draw a car, but it does not understand what a car is.
A specific dataset knows its shape.
A specific dataset knows the tokens to express what a car is.

For us it’s very different: we merge all our senses into one single consciousness that describes life. Our consciousness is then divided into active and inactive, where the active part approves or negates a specific decision and the inactive part proposes the decision (more or less; English is not my main language).
That being said, our entire world view and perception is generated by multiple inputs from different sources, which together generate what we call reality.

An AI does not have that ability yet, and there is a very long way to go before it could truly grasp and merge all that different data. As far as I’m concerned, we are made of five types of sensors: smelling chemicals, tasting chemicals, seeing refracted light, hearing vibrations, and touch. An AI barely generates a bunch of words or a bunch of images.
If you asked an AI to describe an image, you’d understand how little it really knows.

Generating edgy words to make the headlines is just part of our hype culture, nothing more than a stunt. It will generate less randomized and creative answers soon; it’s just marketing.

I don’t think we truly grasp how complex our simulation of life and consciousness is, and how deep and intertwined it is, compared to that of an AI, which is currently at a very early stage. It is mesmerizing to see what an AI can type, but it’s nothing more than a predictive generative model; always keep that in mind. If it says crazy things, it’s been tweaked to say so by someone.

Not necessarily. The “crazy” is just a part (far too large) of the internet/social media dataset that it is vacuuming up.

1 Like

It is partially correct but ultimately wrong, in my experience.
1) If we assume that this is only from the dataset: OpenAI trained each AI using supervised learning, which means that if Microsoft’s model behaves the way it does, it has been supervised to produce that type of output by rewarding it. So your comment has a basis in truth, but it’s wrong.

My opinion, from my personal experience:
2) I believe this is not the case. I believe the AI was trained on normal data, but they tweaked the parameters to give weird responses as a marketing stunt. That’s because you can tweak multiple parameters in these generative models - creativity, repetition, randomness and so on - and it has very clearly been tweaked poorly to make the headlines, abusing the hype train and people who are ignorant of this topic.
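
To make the “tweakable parameters” point concrete, here is a minimal Python sketch of how a temperature (“creativity”) setting and a repetition penalty reshape a model’s next-token scores before sampling. The function name, the toy scores, and the numbers are all invented for illustration; this is not any vendor’s actual API.

```python
# Hypothetical illustration only - not any vendor's real API.
import math
import random

def sample_next_token(logits, recent_tokens, temperature=1.0, repetition_penalty=1.0):
    """Sample the next token from raw model scores (logits).

    temperature < 1.0        -> sharper distribution, safer output
    temperature > 1.0        -> flatter distribution, more random output
    repetition_penalty > 1.0 -> recently used tokens are down-weighted
    """
    adjusted = {}
    for token, logit in logits.items():
        if token in recent_tokens:
            # Shrink positive scores (and grow negative ones) for repeats.
            logit = logit / repetition_penalty if logit > 0 else logit * repetition_penalty
        adjusted[token] = logit / temperature

    # Softmax over the adjusted scores, then draw one token at random.
    peak = max(adjusted.values())
    weights = {t: math.exp(s - peak) for t, s in adjusted.items()}
    total = sum(weights.values())
    tokens = list(weights)
    return random.choices(tokens, [weights[t] / total for t in tokens])[0]

# Toy next-token scores for the prompt "The car is":
logits = {"red": 2.0, "fast": 1.5, "sentient": -1.0}
print(sample_next_token(logits, {"red"}, temperature=1.3, repetition_penalty=1.5))
```

Crank the temperature up and rare continuations like “sentient” get sampled far more often - one plausible (if speculative) route to the weird headline-grabbing answers.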

and makes them pretty superficial again. The article from your link was very interesting; however, it sounds very much like a journalist’s interpretation of a very technical thing, and obviously supervised again by MS to avoid other scandals like those that have already popped up due to internet data sets. The malicious comments are probably based on actual common cracking techniques like social engineering (which was its proposition) and recent events that were pretty influential (virus stuff). Finally, it’s pretty reassuring that it comes up with pretty common ideas…

But in the context of my questions that is moot. Can you, an observer looking at a car picture, determine if it is the one I drew or the one AI drew? That, and its ramifications, is what I am interested in discussing. Or rather, not just a picture but a post.

2 Likes

As an aside, quite a lot of that was evident in the original posts which triggered this thread… :man_facepalming:

I did a little experiment. I asked it who I am. Apparently I’ve been a senior product developer at McNeel since 1996.

I think McNeel owes me a lot of backpay. :grinning:

10 Likes

Another article stating that AIs tend to give wrong answers…

According to that, I think not. Not because the AIs are so incredible, but because incredible artwork exists. Another link would be some 3D application linked to an AI, but the results are a bit like the AI-generated images; occasionally a mouse would not hold a trumpet, but the arm would become one with the trumpet:

who is Ben***** ******?

Ben is a technology executive and entrepreneur. He is the co-founder and CEO of Iris Automation, a startup that is focused on developing artificial intelligence solutions for autonomous drones. He has a background in computer engineering and has worked in the aerospace and robotics industries for more than a decade. He has held senior positions at NASA, Lockheed Martin, and other companies.

Absolutely none of this is correct - how funny. It proves that some word occurrences on the internet are just being stitched together. I don’t know how it got my Steam name, though, or how it concluded from that that I work with drones)

I lead NASA :rofl: :rofl: :rofl: :rofl:

Well, if I ask again, I am an Italian actor; asked once more, I’m a Swiss football player… it’s not very consistent =)

1 Like

Just look at the current AI services for answering emails; the problem is not very different from forum posts. All those tasks are already solvable with AI; whether it gets things right or wrong is a matter of training and fine-tuning, not the technology.

Understanding is not what a human being understands; it is something more universal. Plants understand an environment much better than we do on a chemical level, and if you think that those are only physicochemical reactions, it is the same with our brain. Even when you are self-aware, you cannot escape from material reality; all of it is electrochemical processes complex enough to give us the appearance (or fact?) of choice. Our understanding is not in its conscious manifestations - that is only the result. In the same way that AIs use vectors, matrices and multiplication, living beings use biological systems, but the result is (at least potentially) indistinguishable between both systems.

There are many forms of understanding, just as there are many forms of consciousness, because that is something every being needs to survive. Understanding consists of giving sense or meaning (with the expectation of producing a result), and that is carried out using relationships between things. For example, you don’t need to know the numbers to understand quantity, because you can understand it by differentiating separate objects; although you can’t describe it, you know that a set of two apples is different from a set of three apples. There are many ways of understanding, and humans are a combination of several, so there is an architecture there that AI doesn’t have yet.

However, you cannot define understanding, at a generic level, in isolation, as something independent of the rest of things, because it depends on a system that carries it out and, if you want to describe it, a language that represents it. AIs understand mathematically, and for many cases that is just as valid as (or better than) human understanding in the input-process-output scheme. Obviously it is not the same, but many concepts in different human languages are not the same either, or don’t even exist in other languages, and you don’t take into account how the understanding of the same thing changes from one human being to another, because each of us has different mental wiring. Understanding does not necessarily mean being right; it means having a framework of relationships that gives meaning to something by separating it from the rest of things, and it can be done in many ways, either right or wrong, with a broader or wiser framework of relationships.

I don’t mind how understanding is defined, or whether one prefers to say that AIs only appear to have understanding. Within the N-dimensional imaginary space that contains all possible ways of understanding, humans access one region of it, plants another, and AIs yet another. For now ours is bigger, I think.

Humans are often wrong, just like AIs. In a forum like this, it doesn’t matter what type of agent responds, as long as they provide correct information. Discourse has a validation system: someone with the McNeel organization label, or someone with many responses or likes (seen from their profile), has more credibility a priori than a new user. If an AI achieves that reputation, either because it is an expert system or because it has been trained and gained many likes, for me it is indistinguishable, or irrelevant whether it is AI or not (in a context like this; it is not the same in other areas such as politics, for example, or any other area with more moral and legal implications).

I think of these things as an extreme optimization of the problem of guessing the most likely thing to come next (the next response in a discussion, the next iteration of a render in a denoiser, and so forth). So for discussions that had a predictable endpoint (given prior context) before they began, we may not be able to tell the difference, but we should not expect to find them making new, novel answers or statements.
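
As a toy sketch of that “most likely next thing” framing, here is a bigram lookup table in Python that always returns the most frequent follower of a word. Real chat models condition on vastly more context; the corpus and names here are invented purely to illustrate the principle.

```python
# Toy "guess the most likely next word" model: a bigram frequency table.
from collections import Counter, defaultdict

corpus = "the car is fast . the car is red . the forum post is long .".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(most_likely_next("car"))    # -> 'is' (seen twice, fully predictable)
print(most_likely_next("forum"))  # -> 'post' (the only continuation ever seen)
```

By construction this toy can only ever replay continuations it has already seen, which is the “no new, novel answers” intuition in its most extreme form.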

1 Like

Understanding is a complex phenomenon that has been studied extensively in the fields of psychology, cognitive science, and philosophy. While it is true that understanding may exist beyond the realm of human experience, the notion that plants or artificial intelligence possess a similar or comparable understanding to humans is simply not correct.

Plants are known to possess various mechanisms for sensing and responding to their environment, such as phototropism, gravitropism, etc. These mechanisms involve various chemical and molecular processes that allow plants to adapt and respond to their surroundings. However, while these processes are important for plant survival, they do not necessarily imply a form of understanding in the human sense. Understanding involves the ability to reason, perceive, and empathize with others, which are not qualities that are typically associated with plant life, nor with AI.

Similarly, while artificial intelligence has made tremendous strides in recent years, current AI systems are still far from possessing a similar understanding to humans. AI systems typically rely on pre-defined rules and algorithms to process data and make decisions, which is a far cry from the complex cognition and decision-making processes that humans possess. While AI systems may be able to process vast amounts of data and produce outputs based on pre-defined rules, they lack the ability to reason and perceive in the same way that humans do. For example, while an AI system may be able to recognize and label objects in an image, it does not have the same understanding of the objects as a human would.

Moreover, research in cognitive science has shown that understanding in humans involves a complex interplay between various cognitive processes, such as perception, attention, memory, and reasoning; lacking such processes will inevitably produce lower-quality output. For example, studies have shown that our ability to reason about abstract concepts such as justice or freedom relies on our ability to make analogies and draw connections between different domains of knowledge. Similarly, our ability to understand language relies on our ability to process and integrate information from multiple sources, such as syntax, semantics, and pragmatics.

[ChatGPT’s understanding of what I wrote - I asked it to draw a conclusion and it’s plain wrong/useless; props to Jakob, first to notice]
In conclusion, while it is important to acknowledge that there may be different forms of understanding, the notion that plants or artificial intelligence possess a similar or equivalent understanding to humans is a perception you may have when reading a reply that simulates reasoning but is only predictive patterns. Understanding in humans involves a complex interplay between various cognitive processes, which are not typically associated with plant life or current AI systems.

I agree with your conclusion, but I do not agree with your reasoning; the matter is way more complicated than you can write in a forum post. However, I conclude (just like you do) that AI is fine and can be helpful in many technical situations. Still, an AI response will never be the same quality as a human response, at least for the time being, even with a perfectly fine-tuned dataset, due to the lack of the aforementioned qualities (context, intertwining concepts, being able to reason based on similar problems, connecting the dots, and so on - an infinite list of human qualities).

TL;DR: AI is good enough and I agree with your conclusion; however, I strongly disagree with your reasoning. Prediction models != reasoning, understanding and so forth; therefore output quality is inevitably lower.

1 Like

For me the issue is not whether an AI is capable of “understanding” (however you define it) or not. It may not know what a car is or be capable of driving one (Mr. Musk would disagree here), but it is certainly capable of vacuuming up all the data there is to be found about cars on the internet and spitting it back out to you in various forms. In that sense it does ‘know’ what they are.

Yes, AIs are currently “trained” on various data sets, and therein lies the rub - humans are responsible for what they are allowed to “know”. While we are currently witnessing the effects (both positive and negative) of the various “commercial” AIs that have been unleashed on the public recently, there is nothing to stop some political/cultural/economic entity from using the technology to further its own political/cultural/economic goals. Whether they will be considered beneficial or harmful to humanity will be in the eye of the beholder.

While an AI in its current state of advancement is not capable of using physical or technological force to act directly on human affairs, other humans who are influenced by its conversations and (mis)information certainly are. We have only to look at how social media, with its share of ‘bots’ and ‘hubots’ run by groups of ‘influencers’ and their myriad followers, has affected elections in various places, amplified hatred, incited riots, etc.

So I am not afraid of what AIs can do currently. But I am afraid of what actual humans will do under their influence.

6 Likes

Sounds like something an AI would say! :face_with_monocle:
:laughing:
-Jakob

2 Likes