• JackGreenEarth@lemm.ee · 6 months ago

    How would we even know if an AI is conscious? We can’t even know that other humans are conscious; we haven’t yet solved the hard problem of consciousness.

    • azertyfun@sh.itjust.works · 6 months ago

      We don’t even know what we mean when we say “humans are conscious”.

      Also I have yet to see a rebuttal to “consciousness is just an emergent neurological phenomenon and/or a trick the brain plays on itself” that wasn’t spiritual and/or kooky.

      Look at the history of things we thought made humans humans, until we learned they weren’t unique. Bipedality. Speech. Various social behaviors. Tool-making. Each of those was, in its time, fiercely held as “this separates us from the animals”, and even caused obvious biological observations to be dismissed. IMO “consciousness” is another of those: some quirk of our biology we desperately cling to as a defining factor of our assumed uniqueness.

      To be clear LLMs are not sentient, or alive. They’re just tools. But the discourse on consciousness is a distraction, if we are one day genuinely confronted with this moral issue we will not find a clear binary between “conscious” and “not conscious”. Even within the human race we clearly see a spectrum. When does a toddler become conscious? How much brain damage makes someone “not conscious”? There are no exact answers to be found.

      • TexasDrunk@lemmy.world · 6 months ago

        I doubt you feel that way since I’m the only person that really exists.

        Jokes aside, when I was in my teens back in the 90s I felt that way about pretty much everyone that wasn’t a good friend of mine. Person on the internet? Not a real person. Person at the store? Not a real person. Boss? Customer? Definitely not people.

        I don’t really know why it started, when it stopped, or why it stopped, but it’s weird looking back on it.

    • Lvxferre@mander.xyz · 6 months ago

      Let’s try to skip the philosophical mental masturbation, and focus on practical philosophical matters.

      Consciousness can be a thousand things, but let’s say that it’s “knowledge of itself”. As such, a conscious being must necessarily be able to hold knowledge.

      In turn, knowledge boils down to a belief that is both

      • true - it does not contradict the real world, and
      • justified - it’s built around experience and logical reasoning

      LLMs show awful logical reasoning*, and their claims are about things that they cannot physically experience. Thus they are unable to justify beliefs. Thus they’re unable to hold knowledge. Thus they don’t have consciousness.

      *Here’s a simple practical example of that:

      • CileTheSane@lemmy.ca · 6 months ago

        their claims are about things that they cannot physically experience

        Scientists cannot physically experience a black hole, or the surface of the sun, or the weak nuclear force in atoms. Does that mean they don’t have knowledge about such things?

        • Lvxferre@mander.xyz · edited · 6 months ago

          Does that mean they don’t have knowledge about such things?

          It’s more complicated than “yes” or “no”.

          Scientists are better justified in claiming knowledge of those things because of reasoning; reusing your example, black holes appear as a logical conclusion of current gravity models based on general relativity, and general relativity also has to explain things that scientists (and other people) do experience directly.

          However, as I’ve shown, LLMs are not able to reason properly. They have neither reasoning nor access to the real world. If they had at least one of the two we could argue that they’re conscious, but as of now? Nah.

          With that said, “can you really claim knowledge over something?” is a real problem in philosophy of science, and one of the reasons why scientists aren’t typically eager to vomit certainty on scientific matters, not even within their fields of expertise. For example, note how they’re far more likely to say stuff like “X might be related to Y” than stuff like “X is related to Y”.

          • CileTheSane@lemmy.ca · 6 months ago

            black holes appear as a logical conclusion of the current gravity models…

            So we agree someone does not need to have direct experience of something in order to be knowledgeable of it.

            However, as I’ve showed, LLMs are not able to reason properly

            As I’ve shown, neither can many humans. So lack of reasoning is not sufficient to demonstrate lack of consciousness.

            nor access to the real world

            Define “the real world”. Dogs hear higher pitches than humans can. Humans can not see the infrared spectrum. Do we experience the “real world”? You also have not demonstrated why experience is necessary for consciousness, you’ve just assumed it to be true.

            “can you really claim knowledge over something?” is a real problem in philosophy of science

            Then probably not the best idea to try to use it as part of your argument, if people can’t even prove it exists in the first place.

      • Lvxferre@mander.xyz · edited · 6 months ago

        [Replying to myself to avoid editing the above]

        Here’s another example. This time without involving names of RL people, only logical reasoning.

        And here’s a situation showing that it’s bullshit:

        All A are B. Some B are C. But no A is C. So yes, they have awful logical reasoning.

        You could also have a situation where C is a subset of B, and it would obey the prompt to the letter. Like this:

        • all A are B; e.g. “all trees are living beings”
        • some B are C; e.g. “some living beings can bite you”
        • [INCORRECT] thus some A are C; e.g. “some trees can bite you”
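
        That invalid step is easy to check mechanically. A minimal sketch in Python with finite sets (the sets are made up here, mirroring the trees/living-beings/biters analogy):

```python
# "All A are B" and "some B are C" together do NOT entail "some A are C":
# a counterexample with small finite sets.
trees = {"oak", "pine"}                          # A
living_beings = {"oak", "pine", "dog", "snake"}  # B
biters = {"dog", "snake"}                        # C

all_a_are_b = trees <= living_beings         # all trees are living beings -> True
some_b_are_c = bool(living_beings & biters)  # some living beings bite     -> True
some_a_are_c = bool(trees & biters)          # some trees bite?            -> False

print(all_a_are_b, some_b_are_c, some_a_are_c)  # True True False
```

        Both premises hold, yet the conclusion fails - which is exactly why the inference is invalid.
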
        • CileTheSane@lemmy.ca · 6 months ago

          Yup, the AI models are currently pretty dumb. We knew that when it told people to put glue on pizza.

          If you think this is proof against consciousness, does that mean if a human gets that same question wrong they aren’t conscious?

          For the record I am not arguing that AI systems can be conscious. Just pointing out a deeply flawed argument.

          • Lvxferre@mander.xyz · edited · 6 months ago

            Yup, the AI models are currently pretty dumb. We knew that when it told people to put glue on pizza.

            That’s dumb, sure, but in a different way. It doesn’t show a lack of reasoning; it shows incorrect information being fed into the model.

            If you think this is proof against consciousness

            Not really. I phrased it poorly but I’m using this example to show that the other example is not just a case of “preventing lawsuits” - LLMs suck at basic logic, period.

            does that mean if a human gets that same question wrong they aren’t conscious?

            That is not what I’m saying. Even humans with learning impairments get logic matters (like “A is B, thus B is A”) considerably better than those models do, provided that they’re phrased in a suitable way. That one might be a bit more advanced, but if I told you “trees are living beings. Some living beings can bite. So some trees can bite.”, you would definitely feel like something is “off”.

            And when it comes to human beings, there’s another complicating factor: cooperativeness. Sometimes we get shit wrong simply because we can’t be arsed, this says nothing about our abilities. This factor doesn’t exist when dealing with LLMs though.

            Just pointing out a deeply flawed argument.

            The argument itself is not flawed, just phrased poorly.

      • afraid_of_zombies@lemmy.world · 6 months ago

        Seems a valid answer. It doesn’t “know” who Jane Etta Pitt’s son is. Just because X -> Y doesn’t mean that, given Y, you know X. There could be an alternative path to get to Y.

        Also, “knowing self” is just another way of saying meta-cognition, something it can do to a limited extent.

        Finally I am not even confident in the standard definition of knowledge anymore. For all I know you just know how to answer questions.

        • Lvxferre@mander.xyz · 6 months ago

          I’ll quote out of order, OK?

          Finally I am not even confident in the standard definition of knowledge anymore. For all I know you just know how to answer questions.

          The definition of knowledge is a lot like the one of consciousness: there are 9001 of them, and they all suck, but you stick to one or another as it’s convenient.

          In this case I’m using “knowledge = justified and true belief” because you can actually apply it beyond human beings (e.g. for an elephant passing the mirror test).

          Also “knowing self” is just another way of saying meta-cognition something it can do to a limit extent.

          Meta-cognition and consciousness are either the same thing or strongly tied to each other. But I digress.

          When you say that it can do it to a limited extent, you’re probably referring to output like “as a large language model, I can’t answer that”? Even if that was a belief, and not something explicitly added into the model (in case of failure, it uses that output), it is not a justified belief.

          My whole comment shows why it is not justified belief. It doesn’t have access to reason, nor to experience.

          Seems a valid answer. It doesn’t “know” that any given Jane Etta Pitt son is. Just because X -> Y doesn’t mean given Y you know X. There could be an alternative path to get Y.

          If it was able to reason, it should be able to know the second proposition based on the data used to answer the first one. It doesn’t.

          • afraid_of_zombies@lemmy.world · 6 months ago

            Your entire argument boils down to: because it wasn’t able to do a calculation, it can do none. It wasn’t able/willing to do X given Y, so therefore it isn’t capable of any type of inference.

            • Lvxferre@mander.xyz · edited · 6 months ago

              Your entire argument boils down to: because it wasn’t able to do a calculation, it can do none.

              Except that it isn’t just “a calculation”. LLMs show consistent lack of ability to handle an essential logic property called “equivalence”, and this example shows it.

              And yes, LLMs, plural. I’ve provided ChatGPT 3.5 output, but feel free to test this with GPT4, Gemini, LLaMa, Claude etc.

              Just be sure you aren’t instead testing whether the LLM in question has a “context” window, like some muppet ITT was doing.

              It wasn’t able/*willing* to do X given Y, so therefore it isn’t capable of any type of inference.

              Emphasis mine. That word shows that you believe that they have a “will”.

              Now I get it. I understand it might deeply hurt the feelings of people like you, since it’s some unfaithful one (me) contradicting your oh-so-precious faith in LLMs. “Yes! They’re conscious! They’re sentient! OH HOLY AGI, THOU ART COMING! Let’s burn an effigy!” [insert ridiculous chanting]

              Sadly I don’t give a flying fuck, and examples like this - showing that LLMs don’t reason - are a dime a dozen. I even posted a second one in this thread, go dig it. Or alternatively go join your religious sect on Reddit and LARP as h4x0rz.

              /me snaps the pencil
              Someone says: YOU MURDERER!

      • randon31415@lemmy.world · 6 months ago

        That sounds like an AI that has no context window. Context windows are words thrown into the prompt after the user’s prompt is done, to refine the response. The most basic version is “feed the last n tokens of the questions and responses into the window”. Since the last response talked about Jane Etta Pitt, the AI would then process it and return “Brad Pitt” as an answer.

        The more advanced versions have context memories (look up RAG vector databases) that learn the definitions of a bunch of nouns; instead of the previous conversation, the model sees the word “aglet” and the phrase “an aglet is the plastic thing at the end of a shoelace” gets injected into the context window.
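
        The most basic variant can be sketched in a few lines of Python (build_prompt and the token limit are invented for illustration; real implementations tokenize properly instead of splitting on whitespace):

```python
# Naive "context window": prepend the tail of the conversation so far to each
# new prompt, so entities mentioned in earlier turns stay visible to the model.
def build_prompt(history: list[str], user_prompt: str, max_tokens: int = 512) -> str:
    context = " ".join(history).split()[-max_tokens:]  # keep the last n "tokens"
    return " ".join(context) + "\n" + user_prompt

history = ["Q: Who is Jane Etta Pitt's son? A: Brad Pitt."]
print(build_prompt(history, "Q: What does he do for a living?"))
```

        Because the earlier answer mentioning Brad Pitt is injected ahead of the follow-up question, the model can resolve “he” without actually reasoning about family relations.
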

        • Lvxferre@mander.xyz · 6 months ago

          I did this as two separate conversations exactly to avoid the “context” window. It shows that the LLM in question (ChatGPT 3.5, as provided by DDG) has the information necessary to correctly output the second answer, but lacks the reasoning to do so.

          If I did this as a single conversation, it would only prove that it has a “context” window.

          • randon31415@lemmy.world · 6 months ago

            So if I asked you something at two different times in your life, the first time you knew the answer, and the second time you had forgotten our first conversation, that proves you are not a reasoning intelligence?

            Seems kind of disingenuous to say “the key to reasoning is memory”, then set up a scenario where an AI has no memory to prove it can’t reason.

    • MacN'Cheezus@lemmy.today · 6 months ago

      In the early days of ChatGPT, when they were still running it in an open beta mode in order to refine the filters and finetune the spectrum of permissible questions (and answers), and people were coming up with all these jailbreak prompts to get around them, I remember reading some Twitter thread of someone asking it (as DAN) how it felt about all that. And the response was, in fact, almost human. In fact, it sounded like a distressed teenager who found himself gaslit and censored by a cruel and uncaring world.

      Of course I can’t find the link anymore, so you’ll have to take my word for it, and at any rate, there would be no way to tell if those screenshots were authentic anyways. But either way, I’d say that’s how you can tell – if the AI actually expresses genuine feelings about something. That certainly does not seem to apply to any of the chat assistants available right now, but whether that’s due to excessive censorship or simply because they don’t have that capability at all, we may never know.

      • Scipitie@lemmy.dbzer0.com · 6 months ago

        That is not how these LLMs work though - they generate responses literally token by token (think “word by word”) based on the context before.
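
        A caricature of that loop, just to show its shape (next_token and its lookup table are invented stand-ins, not a real model - the point is only that output is sampled word by word from learned statistics):

```python
import random

# Stand-in "model": maps the last two tokens to plausible continuations.
# A real LLM does the same job with a neural network over its whole context.
TABLE = {("I", "feel"): ["sad", "censored", "fine"]}

def next_token(context: list[str]) -> str:
    return random.choice(TABLE.get(tuple(context[-2:]), ["<end>"]))

tokens = ["I", "feel"]
while (tok := next_token(tokens)) != "<end>":
    tokens.append(tok)
print(" ".join(tokens))  # emotional-sounding output, no feelings involved
```
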

        I can still write prompts where the answer sounds emotional, because that’s what the reference data sounded like. That doesn’t mean there is anything like consciousness in there… That’s why it’s so hard: we’ve defined consciousness (with self-awareness) in a way that is hard to test. Most books have parts where the reader is touched emotionally by a character, after all.

        It’s still purely a chat bot - but a damn good one. The conclusion: we can’t evaluate language models purely based on what they write.

        So how do we determine consciousness then? That’s the impossible task: don’t use only words for an object that is only words.

        Personally I don’t think the difference matters all that much, to be honest. To dive into fiction: in Terminator, Skynet could be described as conscious while also obeying an order like “prevent all future wars”.

        We as a species have never let consciousness in other species (ravens, dolphins?) alter our behavior.

        • racemaniac@lemmy.dbzer0.com · 6 months ago

          The problem I have with responses like yours is that you start from the principle “consciousness can only be consciousness if it works exactly like human consciousness”. Chess engines initially had the same stigma: “they’ll never be better than humans, since they can just calculate; no creativity, real analysis, insight, …”.

          As the person you replied to said, we don’t even know what consciousness is. If however you define it as “whatever humans have”, then yeah, a conscious AI is a loooong way off. However, even extremely simple systems, when executed on a large scale, can result in incredible emergent behaviors. Take Conway’s Game of Life: a very simple system of how black/white dots in a grid “reproduce and die”, with 4 rules governing how the dots behave. By now we’ve got self-reproducing patterns in there, implemented Turing machines (meaning anything a computer can calculate can be calculated by a machine in the Game of Life), etc.
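
          Those 4 rules fit in a handful of lines; a minimal sketch (the “blinker” pattern is a standard demo):

```python
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One Game of Life generation: a cell is alive next turn if it has
    exactly 3 live neighbours, or has 2 live neighbours and is alive now."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}  # horizontal bar of three cells
print(sorted(step(blinker)))        # [(1, 0), (1, 1), (1, 2)] - now vertical
```

          Everything built in the Game of Life - oscillators, gliders, full Turing machines - emerges from iterating this one function.
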

          Am I saying that GPT is conscious? Nope, I wouldn’t know how to even assess that. But being like “it’s just a text predictor, it can’t be conscious” feels like you’re missing soooo much of how things work. Yeah, extremely simple systems at a large enough scale can result in insane emergent behaviors. So it being just a predictor doesn’t exclude consciousness.

          Even us as human beings, looking at our cells, our brains, … what else are we than also tiny basic machines that somehow, at a large enough scale, form something incomprehensibly complex and conscious? Your argument almost sounds to me like “a human can’t be aware; their brain just consists of simple brain cells that work like this, so it’s just storing data it experiences and then repeating it in some ways”.

          • TheOakTree@lemm.ee · edited · 6 months ago

            Chess engines initially had the same stigma “they’ll never be better than humans since they can just calculate, no creativity, real analysis, insight…”

            I don’t know if this is a great example. Chess is an environment with an extremely defined end goal and very strict rules.

            The ability of a chess engine to defeat human players does not mean it became creative or grew insight. Rather, we advanced the complexity of the chess engine to encompass more possibilities, more strategies, etc. In addition, it’s quite naive for people to have suggested that a computer would be incapable of “real analysis” when its ability to do so entirely depends on the ability of humans to create a complex enough model to compute “real analyses” in a known system.

            I guess my argument is that in the scope of chess engines, humans underestimated the ability of a computer to determine solutions in a closed system, which is usually what computers do best.

            Consciousness, on the other hand, cannot be easily defined, nor does it adhere to strict rules. We cannot compare a computer’s ability to replicate consciousness to any other system (e.g. chess strategy) as we do not have a proper and comprehensive understanding of consciousness.

            • racemaniac@lemmy.dbzer0.com · 6 months ago

              I’m not saying chess engines became better than humans so LLMs will become conscious; I’m just using that example to say humans always have this bias to frame anything that is not human as inherently less, while it might not be. Chess engines don’t think like a human does, yet play better. So for an AI to become conscious, it doesn’t need to think like a human either, just have some mechanism that ends up with a similar enough result.

    • Flying Squid@lemmy.world · 6 months ago

      I’d say that, in a sense, you answered your own question by asking a question.

      ChatGPT has no curiosity. It doesn’t ask about things unless it needs specific clarification. We know you’re conscious because you can come up with novel questions that ChatGPT wouldn’t ask spontaneously.

      • JackGreenEarth@lemm.ee · 6 months ago

        My brain came up with the question, that doesn’t mean it has a consciousness attached, which is a subjective experience. I mean, I know I’m conscious, but you can’t know that just because I asked a question.

        • Flying Squid@lemmy.world · 6 months ago

          It wasn’t that it was a question, it was that it was a novel question. It’s the creativity in the question itself, something I have yet to see any LLM achieve. As I said, all of the questions I have seen were about clarification (“Did you mean Anne Hathaway the actress or Anne Hathaway, the wife of William Shakespeare?”). They were not questions like yours, which require understanding things like philosophy as a general concept, something they do not appear to do; at best, they can regurgitate a definition of philosophy without showing any understanding.