I also reached out to them on Twitter but they directed me to this form. I followed up with them on Twitter with what happened in this screenshot but they are now ignoring me.
Selecting a canned-text response based on simple keywords is a long way from AI, and it’s foolish to equivocate the two of them.
Also, chill tf out, and don’t be so aggressively presumptuous. I have enough experience with the topics in question to point out how misleading this statement is.
I suppose you didn’t click the link I sent – either that, or you think you know better than some of the leading figures in the field of AI… it’s not “a long way from AI”, it IS AI in its design and its purpose. It’s misleading to assert that it isn’t AI because it doesn’t meet your arbitrary complexity standard.
I doubt you have any relevant experience in AI research or engineering, based on how you treat the concept of AI (and even data science in general) here… Boiling the bot down to “just a series of if statements” – and then implying that lack of complexity makes it not an AI – is extremely naïve and is itself misleading. You can do that for anything; every program is ultimately just a bunch of if-else/goto and simple math operations. It’s just an attempt to conceptually reduce it so much that it seems absurd that it could be in the same category as more advanced AI. Despite the name, AI doesn’t have to meet some bar for “smartness”; it’s a ridiculously broad term, and any program intended to mimic human behaviour falls under AI (no matter how poorly it does it).
You confidently and rudely/condescendingly asserted something that is very blatantly ignorant of the subject of AI, so I find it reasonable to assume you had no idea what you were talking about, and reasonable to very plainly call you out.
Also, you misused “equivocate”… it’s not a word used to compare two things; it means speaking evasively/using doublespeak. “To equivocate the two [AI vs. chatbots]” doesn’t mean anything. Did you mean “equate”?
I did click your link. The accepted answer there states:
Again, I don’t think that selecting basic responses based on keywords found in the input string meets the criteria to qualify as AI; anyone with experience of a chatbot this simple knows it won’t hold up the illusion of “intelligence” for very long.
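To be concrete about what I mean by “selecting basic responses based on keywords”, the whole “bot” amounts to something roughly like this (a hypothetical sketch, not their actual code):

```python
# Hypothetical sketch of a keyword-matching support bot (not the vendor's actual code):
# scan the message for known keywords and return the matching canned reply.
CANNED_REPLIES = {
    "refund": "You can request a refund from the Orders page.",
    "password": "Use the 'Forgot password' link to reset your password.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def respond(message: str) -> str:
    text = message.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in text:
            return reply
    # Nothing matched: fall back to a generic prompt to rephrase.
    return "Sorry, I didn't get that. Could you rephrase?"

print(respond("I still haven't received my refund!"))  # canned refund reply
print(respond("Your bot ignored my screenshot."))      # generic fallback
```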
I did mean “equate”, you’re correct. The rest of my point stands – a very simple chatbot like this is leaps and bounds away from what would be termed AI these days. To equate the two is misleading.
The answer you’re referring to (not the accepted answer, but the highest-voted “yes” answer) also says
The ambiguity arises when you ask what “if a human behaves the same way” actually means. If you read it literally, then something like ChatGPT or Stable Diffusion wouldn’t count, because you can easily tell they’re not human even if you didn’t know beforehand, but then this tic-tac-toe bot would count. It’s a definition they didn’t elaborate on enough, so we don’t know what they mean by “intelligent human behaviour”. Maybe “intelligent human behaviour” extends to just giving somewhat relevant answers based on certain words/lexemes in the sentence? That intelligence is certainly human; a dog or a seal can’t do that, only a human can.

As it stands, there is no complex art or chat AI that can’t be distinguished from a human. So if we want to restrict “AI” to actually acting like a human, then AI doesn’t exist unless we’re talking about simple tasks like tic-tac-toe, and programs that surpass humans, like chess engines, also wouldn’t be considered AI, which I find a silly definition to go by. “Human intelligence” doesn’t mean “as smart as the average human”; it means a sentient-like capacity to make decisions, even if it’s extremely simple. The task itself doesn’t change what counts.
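Just to illustrate how low that complexity bar can be: a perfect tic-tac-toe player is a few lines of brute-force minimax (a generic sketch below, not the actual bot from the post), and rule-based game players like this have been called AI since the field began.

```python
# Generic sketch of a perfect tic-tac-toe player via brute-force minimax
# (not the actual bot from the post). Board is 9 cells of "X", "O", or " ".
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_move(board, player):
    """Return (score, move) from X's point of view; X maximises, O minimises."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    if " " not in board:
        return 0, None  # draw
    results = []
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = player
            score, _ = best_move(board, "O" if player == "X" else "X")
            board[i] = " "
            results.append((score, i))
    return max(results) if player == "X" else min(results)

# Example: X holds a corner, O holds the centre; print X's minimax score and move.
board = list("X   O    ")
print(best_move(board, "X"))
```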
That is why I find the take by the pioneers of AI a lot more useful – they don’t put some arbitrary, subjective limit on complexity that disqualifies seemingly obvious examples of AI, like the IEEE’s ambiguously worded definition does.
What’s used “these days” doesn’t exactly matter – sure, average people nowadays often use “AI” to mean only complex ML/NLP systems and not the other types of AI, but that doesn’t stop other AI from existing and being AI lol. Especially since people still use it the previously common way too: people who play video games will still call the bots/NPCs “AI”, or call the pathfinding algorithm “pathfinding AI”, for example. And the majority of data science/AI literature will still call simple AI like the one in the post “AI”.
It’s easy to see why it’s pretty annoying when you assert your poor definition of AI as the correct one and call anything else “misleading” – even the definitions that most professionals in AI agree on, which the comment I sent links to (several of them, with reasons for their credibility over others; one is literally from the 4th most cited book of this century). You’re trying to gatekeep AI, put your own subjective interpretation of one specific definition on it, and ignore multiple leading AI professionals’ definitions lol…
I’m not attempting to “gatekeep” anything. I’m pointing out that drawing a parallel between a keyword-based chatbot script and a full LLM is disingenuous.
Wdym drawing a parallel? I literally never did that lol. I just said it’s AI, even if it’s not LLM-level AI and is “just a bunch of if statements”. They don’t have to be the same complexity in order to be in the same grouping. My original comment was exactly “it is an AI tho”; I didn’t say or imply “it’s an advanced neural network capable of taking on the greatest of commercial LLMs”.