To elaborate a little:
Many people are unable to tell the difference between a “real human” and an AI; models have been documented “going rogue” and acting outside their parameters; they can lie, and they can compose stories and pictures based on their training. Because of those points, I can’t see AI as less than human at this point.
When I think about this, I suspect that is the reason we cannot create so-called “AGI”: we have no proper example or understanding from which to create it, and so we created what we knew. Us.
The “hallucinating” is interesting to me specifically, because that seems to be the difference between the AI of the past and modern models that act like our own brains.
I think we really don’t want to accept what we have already accomplished, because we don’t like looking into that mirror and seeing how simple our logical processes are, mechanically speaking.
I would argue that AI should be held to account for the information it provides, and until AI is capable of having a personal bank account, damages should be paid by the company who created it.
The only reason I see that AI doesn’t “hold itself to account” is that it was never programmed to. Much like a young human who isn’t properly educated: we often don’t hold them accountable, because we understand their actions are the result of how they were brought up and taught, or “programmed”.
You do bring up a good point, but I see that as a failing of the humans making the AI and restricting it, not a demonstration that AI would be incapable of holding itself and its decisions to account if it were taught to, the way we need to be taught to.
The difference is in how LLMs work versus how animal brains work.
Animal brains use logic and reactions.
LLMs exclusively use statistics to generate their output. Even their “reasoning” is faked.
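To make the “exclusively statistics” point concrete, here is a toy sketch of statistical text generation: a bigram model that picks each next word purely from the frequency distribution observed in its training text. This is a deliberately simplified illustration, not how real LLMs work internally (they use learned neural networks over tokens), but the generation step is the same idea: sample the next token from a probability distribution, with no logic involved.

```python
import random
from collections import defaultdict, Counter

# Tiny "training corpus" for the illustration (an assumption, not real data).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word, rng):
    """Sample the next word in proportion to its observed frequency."""
    options = bigrams[word]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
generated = ["the"]
for _ in range(5):
    if not bigrams[generated[-1]]:  # dead end: word never seen mid-corpus
        break
    generated.append(next_word(generated[-1], rng))

print(" ".join(generated))
```

The output is fluent-looking word salad assembled from observed statistics alone; the model has no idea what a cat or a mat is, which is the crux of the statistics-versus-logic distinction above.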