To elaborate a little:
Since many people are unable to tell the difference between a “real human” and an AI, since they have been documented “going rogue” and acting outside their parameters, and since they can lie and compose stories and pictures based on their training, I can’t see AI as less than human at this point.
When I think about this, I suspect that’s the reason we cannot create so-called “AGI”: we have no proper example or understanding to build from, so we created what we knew. Us.
The “hallucinating” is interesting to me specifically, because that seems to be what separates the AI of the past from modern models that act like our own brains.
I think we really don’t want to accept what we have already accomplished, because we don’t like looking into that mirror and seeing how simple our logical processes are, mechanically speaking.
AI is a very broad term that includes more than machine learning. Assuming you mean LLMs, the differences are:
Also, if you cannot tell the difference between a real human and an AI, it’s only because your interaction with the AI is limited to text. If you could meet it like a real human, it would be obvious that it’s a computer, not a person. If an image is blurry or pixelated enough, you couldn’t tell a car from a house; that doesn’t mean cars have become indistinguishable from houses.
To add to this, here is how LLM sessions ‘get around’ the experience issue: with every query/command/whatever, the whole context and past conversation is sent to the model to be reprocessed. This is why, in long sessions, it takes longer and longer to generate a new response, and why the model forgets everything it ‘learned’ from your session when you start a new one.
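A minimal sketch of what that stateless loop looks like in code. The `fake_llm` function here is a hypothetical stand-in for a real model API call; the point is only that the entire history is concatenated and resent on every turn, and a fresh session starts with zero context:

```python
# Stateless chat loop: each turn resends the FULL history, so the prompt
# grows with every exchange, and nothing persists across sessions.

def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in: a real call would send `prompt` to a model.
    return f"(reply generated from {len(prompt)} chars of context)"

def chat(history: list[str], user_msg: str) -> str:
    history.append(f"User: {user_msg}")
    # The entire conversation so far is concatenated and reprocessed:
    prompt = "\n".join(history)
    reply = fake_llm(prompt)
    history.append(f"Assistant: {reply}")
    return reply

session = []  # a new session means an empty history -- nothing "learned"
chat(session, "Hello")
chat(session, "Tell me more")
# The second call's prompt includes the whole first exchange, so it is
# strictly longer; starting over with `session = []` drops all of it.
```

This is also why long sessions slow down: the model has to reprocess an ever-larger prompt each turn, and the “memory” lives entirely in that resent text, not in the model itself.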