• Lvxferre@mander.xyz · 4 months ago

    Model degeneration is an already well-known phenomenon. The article explains well what’s going on, so I won’t go into details - but note how this happens because the model does not understand what it is outputting: it’s looking for patterns, not for the meaning conveyed by those patterns.

    Frankly at this rate might as well go with a neuro-symbolic approach.
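    A toy sketch of that degeneration loop, in plain Python (the function and parameter names here are my own illustration, not any real training pipeline): repeatedly fit a Gaussian to samples produced by the previous fit, so each "generation" trains only on the prior generation's synthetic output. Estimation noise compounds, and the fitted spread drifts and collapses instead of tracking the real data, which is the same failure mode in miniature.

```python
import random
import statistics

def degenerate(generations=200, n_samples=10, seed=0):
    """Refit a Gaussian to its own samples, generation after generation."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0            # generation 0: the "real" data, N(0, 1)
    spread = [sigma]
    for _ in range(generations):
        # Train only on the previous model's synthetic output.
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
        spread.append(sigma)
    return spread

spread = degenerate()
print(f"std dev: generation 0 = {spread[0]:.3f}, "
      f"generation {len(spread) - 1} = {spread[-1]:.6f}")
```

    Stdlib-only on purpose: the point isn't the fitting machinery, just that a model fed exclusively its own output loses the original distribution.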

    • CeeBee_Eh@lemmy.world · 4 months ago

      The issue with your assertion is that people don’t actually work a similar way. Have you ever met someone who was clearly taught "garbage"?

      • Lvxferre@mander.xyz · 4 months ago

        The issue with your assertion is that people don’t actually work a similar way.

        I’m talking about LLMs, not about people.

        • CeeBee_Eh@lemmy.world · 4 months ago

          I know you are, but the argument that an LLM doesn’t understand context is incorrect. It’s not human level understanding, but it’s been demonstrated that they do have a level of understanding.

          And to be clear, I’m not talking about consciousness or sapience.

          • Lvxferre@mander.xyz · 4 months ago

            I know you are, but the argument that an LLM doesn’t understand context is incorrect

            Emphasis mine. I am talking about the textual output. I am not talking about context.

            It’s not human level understanding

            Additionally, your obnoxiously insistent comparison between LLMs and human beings boils down to a red herring.

            Not wasting my time further with you.

            [For others who might be reading this: sorry for the blatantly rude tone but I got little to no patience towards people who distort what others say, like the one above.]

            • CeeBee_Eh@lemmy.world · 4 months ago

              I got little to no patience towards people who distort what others say,

              My original reply was meant to be tongue-in-cheek, but I guess I forgot about Poe’s law. I’m not a layman, for the record. I’ve worked with AI for over a decade.

              Not wasting my time further with you.

              Ditto. Have a nice day.

          • CileTheSane@lemmy.ca · 4 months ago

            but it’s been demonstrated that they do have a level of understanding.

            Citation needed

      • PenisDuckCuck9001@lemmynsfw.com · 4 months ago (edited)

        I’m autistic and sometimes I feel like an AI bot spewing out garbage in social situations. If I do what people normally do and make it sound believable, maybe no one will notice.