To elaborate a little:
Many people are unable to tell the difference between a “real human” and an AI; these systems have been documented “going rogue” and acting outside their parameters; they can lie; they can compose stories and pictures based on their training. Because of those points, I can’t see AI as less than human at this point.
When I think about this, I suspect that is the reason we cannot create so-called “AGI”: we have no proper example or understanding of anything else to build from, so we created what we knew. Us.
The “hallucinating” is interesting to me specifically because it seems to be what separates the AI of the past from modern models that act like our own brains.
I think we really don’t want to accept what we have already accomplished, because we don’t like looking into that mirror and seeing how simple our logical processes are, mechanically speaking.
AI uses data to “guess” the most likely outcome. An LLM uses that to pick the guess with the highest probability of “sounding correct” to a human, and the result is shaped heavily by the data it was trained on.
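To make that concrete, here is a minimal, purely illustrative sketch of that “pick the most probable guess” step. It is not any real model’s code; the vocabulary and probabilities are made up for the example.

```python
# Toy illustration of next-token selection: a real LLM produces a probability
# distribution over its whole vocabulary; here the distribution is hard-coded.
import random

def next_word(distribution: dict[str, float], greedy: bool = True) -> str:
    """Pick the next word greedily (highest probability) or by weighted sampling."""
    if greedy:
        return max(distribution, key=distribution.get)
    words, probs = zip(*distribution.items())
    return random.choices(words, weights=probs, k=1)[0]

# Hypothetical probabilities a model might assign after the prompt "The sky is"
candidates = {"blue": 0.72, "clear": 0.15, "falling": 0.08, "green": 0.05}

print(next_word(candidates))                # -> "blue" (sounds most "correct")
print(next_word(candidates, greedy=False))  # sampling can still pick "falling"
```

Nothing in that step checks whether the answer is true; the choice is driven entirely by whatever probabilities the training data produced.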
One thing that is very different is that an AI/LLM doesn’t take responsibility for what it says. Depending on its training data, it may tell someone to kill themselves when that person has an incurable disease and asks about possible treatments. That would be decidedly odd if it ever happened in a human conversation. But because you don’t like the answer and don’t think it is “correct”, you say the AI is “hallucinating”.
It’s like talking to a lion: you can mimic a roar, but it’s up to the lion to decide whether it sounds nice or rude…