To elaborate a little:
Many people are unable to tell the difference between a “real human” and an AI; these systems have been documented “going rogue” and acting outside of their parameters; they can lie; and they can compose stories and pictures based on the training they received. Because of those points, I can’t see AI as less than human at this point.
When I think about this, I suspect that is the reason we cannot create so-called “AGI”: we have no proper example or understanding to build it from, and so we created what we knew. Us.
The “hallucinating” is especially interesting to me, because that seems to be what separates the AI of the past from modern models that act like our own brains.
I think we really don’t want to accept what we have already accomplished, because we don’t like looking into that mirror and seeing how simple our logical processes are, mechanically speaking.
Let’s clear up some terms. Intelligence and consciousness are separate things that our language tends to conflate. Consciousness is the interpretation of sensory input. Hallucinations are what happen when your consciousness misinterprets that data.
You actually hallucinate to a minor degree all the time. For instance, pareidolia often takes the form of seeing human faces in rocks and clouds. Our consciousness is really tuned to patterns that look like human faces, and it sometimes gets it wrong.
We can actually do this to image recognition models. A model was tuned to find dogs in movies, and it could then modify the movie to show what it thought was there. When it was deliberately overtrained, it output a movie with dogs all over the place.
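For anyone curious what that looks like in practice, here’s a rough sketch of the DeepDream-style trick I assume is being described: take a pretrained classifier, pick a dog class, and repeatedly nudge the image in whatever direction makes the model more confident it sees a dog, so the patterns it “hallucinates” get painted back into the frame. The model, class index, and step sizes here are just illustrative choices, not the actual experiment mentioned above.

```python
# Minimal DeepDream-style sketch (assumed technique, not the original experiment):
# amplify whatever dog-like patterns a pretrained classifier already "sees" in a frame.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # we only optimize the image, not the weights

DOG_CLASS = 207  # "golden retriever" in ImageNet; any dog class would do

def dream(frame: torch.Tensor, steps: int = 20, lr: float = 0.05) -> torch.Tensor:
    """Gradient-ascend the frame so the model grows more confident it contains a dog."""
    img = frame.clone().requires_grad_(True)
    for _ in range(steps):
        score = model(img.unsqueeze(0))[0, DOG_CLASS]  # how "dog" the frame looks
        score.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.norm() + 1e-8)  # push toward more dog
            img.clamp_(0.0, 1.0)
            img.grad.zero_()
    return img.detach()

# Usage: start from a random frame and watch dog-like textures emerge step by step.
frame = torch.rand(3, 224, 224)
dreamed = dream(frame)
```

Overtraining just makes this worse in the same direction: the model becomes so biased toward its target pattern that it reports dogs everywhere, which is a pretty good mechanical analogue of pareidolia.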
The models definitely have some level of consciousness. Maybe not a lot, but some.
This is what I like about AI research. We learn about our own minds while studying it. But capitalism isn’t using it in ways that are net helpful to humanity.
Depends on what one means by consciousness. The way I hear the term used most often - and how I use it myself - is to describe the fact of subjective experience. That it feels like something to be.
While I can’t definitively argue that none of our current AI systems are conscious to any degree, I’d still say that’s the case with extremely high probability. There’s just no reason to assume it feels like anything to be one of these systems, based on what we know about how they function under the hood.