• afraid_of_zombies@lemmy.world · 6 months ago

    Your entire argument boils down to: because it wasn’t able to do one calculation, it can do none. It wasn’t able/willing to do X given Y, so therefore it isn’t capable of any kind of inference.

    • Lvxferre@mander.xyz · 6 months ago (edited)

      Your entire argument boils down to: because it wasn’t able to do one calculation, it can do none.

      Except that it isn’t just “a calculation”. LLMs consistently fail to handle an essential logic property, “equivalence” (two logically equivalent phrasings of the same problem should get the same answer), and this example shows it.

      And yes, LLMs, plural. I’ve provided ChatGPT 3.5 output, but feel free to test this with GPT4, Gemini, LLaMa, Claude etc.

      Just be sure that you aren’t instead testing whether the LLM in question has a “context” window, like some muppet ITT was doing.
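
      If you want to reproduce it yourself, the test is dead simple. Here’s a minimal sketch (the `ask_llm` function and the example prompts are hypothetical placeholders, not anyone’s actual API; wire it up to whatever model you like, with a fresh session per prompt so the context window can’t bail it out):

      ```python
      # Sketch of the "equivalence" test described above: two logically
      # equivalent phrasings of the same question should get the same answer.
      # `ask_llm` is a hypothetical placeholder for whatever model/interface
      # you're testing; start a brand-new session for each prompt.

      def ask_llm(prompt: str) -> str:
          """Placeholder: send `prompt` to the model in a fresh session."""
          raise NotImplementedError("plug in the LLM you want to test")

      # Made-up example; any pair of equivalent formulations works.
      phrasing_a = "What is 25% of 80?"
      phrasing_b = "What is 0.25 times 80?"

      answer_a = ask_llm(phrasing_a)
      answer_b = ask_llm(phrasing_b)

      # A model that actually handles equivalence answers both the same way;
      # nailing one and fumbling the other is the failure being described.
      print("consistent" if answer_a == answer_b else "inconsistent")
      ```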

      It wasn’t able/*willing* to do X given Y, so therefore it isn’t capable of any kind of inference.

      Emphasis mine. That word shows that you believe that they have a “will”.

      Now I get it. I understand it might deeply hurt the feelings of people like you, since it’s an unfaithful one (me) contradicting your oh-so-precious faith in LLMs. “Yes! They’re conscious! They’re sentient! OH HOLY AGI, THOU ART COMING! Let’s burn an effigy!” [insert ridiculous chanting]

      Sadly I don’t give a flying fuck, and examples like this - showing that LLMs don’t reason - are a dime a dozen. I even posted a second one in this thread; go dig it up. Or alternatively go join your religious sect on Reddit and LARP as h4x0rz.

      /me snaps the pencil
      Someone says: YOU MURDERER!