• eating3645@lemmy.world · 1 year ago

      Very difficult; it’s one of those “it’s a feature, not a bug” things.

      By design, our current LLMs hallucinate everything. The secret sauce these big companies add is getting them to hallucinate correct information.

      When the models get it right, it’s intelligence; when they get it wrong, it’s a hallucination.

      Fixing the problem requires discovering an entirely new architecture. That’s conceivable, but the timing is unpredictable because it calls for a fundamentally different approach.

      • joe@lemmy.world · 1 year ago

        I have only a weak, high-level grasp of how LLMs work, but what you say in this comment doesn’t seem correct. No one is really sure why LLMs sometimes make things up, and a corollary is that no one knows how difficult (up to impossible) it might be to fix.

        • eating3645@lemmy.world · 1 year ago

          Let me expand a little bit.

          Ultimately the models come down to predicting the next token in a sequence. Tokens for a language model can be words, characters, or more frequently, character combinations. For example, the word “Lemmy” would be “lem” + “my”.
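
          For illustration, here’s roughly how you can inspect that kind of split with OpenAI’s open-source tiktoken tokenizer. The “cl100k_base” encoding is just one example; the exact pieces depend on which tokenizer a model uses, so “Lemmy” won’t necessarily come out as “lem” + “my”. The point is only that the model sees character chunks, not whole words.

          ```python
          # Rough illustration of subword tokenization with the tiktoken library.
          # "cl100k_base" is one example encoding; the split varies by tokenizer.
          import tiktoken

          enc = tiktoken.get_encoding("cl100k_base")
          ids = enc.encode("My favorite website is Lemmy")
          pieces = [enc.decode_single_token_bytes(i).decode("utf-8", errors="replace") for i in ids]
          print(ids)     # a short list of integers, one per token
          print(pieces)  # the character combinations those integers stand for
          ```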

          So let’s give our model the prompt “my favorite website is”

          It will then predict the most likely token and append it to the input, building up a cohesive answer step by step. This is where the T in GPT (the transformer) comes in: it outputs a vector of probabilities over the possible next tokens.

          “My favorite website is”

          “My favorite website is ”

          “My favorite website is lem”

          “My favorite website is lemmy”

          “My favorite website is lemmy.”

          “My favorite website is lemmy.org”

          Woah, what happened there? That’s not (currently) a real website. Finding out exactly why the last token was “org”, which resulted in hallucinating a fictitious website, is basically impossible. The model might not have been trained long enough, it might have been trained too long, there might be insufficient data in that particular token space, there might be polluted training data, etc. These models are massive, so determining why the output is incorrect in any given case is tough.

          But fundamentally, it made up the first half too; we just happen to like that output. Tomorrow someone might register lemmy.org, and then it’s not a hallucination anymore.
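
          To make that loop concrete, here’s a minimal toy sketch of greedy next-token generation. The lookup-table “model” and its probabilities are invented purely for illustration; a real LLM produces a probability for every token in its vocabulary at each step.

          ```python
          # Toy sketch of greedy decoding: repeatedly pick the most likely next
          # token and append it to the text. The "model" is a hard-coded table of
          # made-up probabilities, not a real network.
          FAKE_MODEL = {
              "my favorite website is": [(" lem", 0.6), (" goo", 0.3), (" wiki", 0.1)],
              "my favorite website is lem": [("my", 0.9), ("on", 0.1)],
              "my favorite website is lemmy": [(".", 0.7), ("!", 0.3)],
              "my favorite website is lemmy.": [("org", 0.5), ("ml", 0.5)],
          }

          def next_token(context: str) -> str:
              """Return the highest-probability continuation for this context."""
              candidates = FAKE_MODEL.get(context, [("<end>", 1.0)])
              return max(candidates, key=lambda pair: pair[1])[0]

          text = "my favorite website is"
          while True:
              tok = next_token(text)
              if tok == "<end>":
                  break
              text += tok
              print(repr(text))
          ```

          Nothing in the loop knows whether lemmy.org exists; “org” simply had the highest score at that step.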

        • BetaDoggo_@lemmy.world · 1 year ago

          LLMs only predict the next token. Sometimes those predictions are correct, and sometimes they’re incorrect. Larger models trained on a greater number of examples make better predictions, but they are always just predictions. This is why incorrect responses often sound plausible even if logically they don’t make sense.

          Fixing hallucinations is more about decreasing inaccuracies than about fixing an actual problem with the model itself.

    • ollien@beehaw.org · 1 year ago

      I’m no expert, so take what I’m about to say with a grain of salt.

      Fundamentally, an LLM is just a fancy autocomplete; there’s no source of knowledge it’s tapping into, it’s just guessing words (though it is quite good at it). Even if it did have a pool of knowledge, that couldn’t be perfect either, because in many areas the truth is never quite so black and white.

      In other words, hard.

    • Microsoft seems to have come up with a good middle road with the temperature setting in Bing Chat. You can pick between “factual” (mostly — it still makes shit up, but at least it tries not to), “medium” for somewhat more creative results, and “creative” for the least factual, most creative output.

      You can’t prevent the random bullshit AI generates because it doesn’t understand the concepts behind the words it’s generating. It’s picking the most likely continuations of words and sentences based on some tuning, and it’s bound to get that stuff wrong sometimes.
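
      Under the hood, a “temperature” knob is usually just a rescaling of the model’s scores before they’re turned into probabilities; whether Bing’s three modes map exactly to that is an assumption, but the general idea looks something like this sketch (the tokens and scores are invented for illustration):

      ```python
      import math
      import random

      def probs_with_temperature(logits, temperature):
          """Turn raw scores into probabilities, sharpened or flattened by temperature."""
          scaled = [x / temperature for x in logits]
          m = max(scaled)  # subtract the max for numerical stability
          exps = [math.exp(s - m) for s in scaled]
          total = sum(exps)
          return [e / total for e in exps]

      tokens = ["org", "ml", "world", "banana"]  # hypothetical next-token candidates
      logits = [2.0, 1.8, 1.5, -1.0]             # hypothetical model scores

      for t in (0.2, 1.0, 2.0):
          p = probs_with_temperature(logits, t)
          print(f"temperature={t}: " + ", ".join(f"{tok}={x:.2f}" for tok, x in zip(tokens, p)))

      # Generation then samples from those probabilities:
      print("sampled:", random.choices(tokens, weights=probs_with_temperature(logits, 1.0))[0])
      ```

      Low temperature concentrates almost all the probability on the top-scoring token (the “factual”-ish end); high temperature spreads it out, so less likely tokens get picked more often (the “creative” end).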

      Some of the more unhinged AI issues (like Bing Chat accusing the user of gaslighting it, lying to it, etc.) can be fixed with tuning and post-processing so the user never sees the AI go off the rails, but when it comes to factual verification there’s very little you can do.
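
      As a rough idea of what that post-processing can look like (real systems use tuned classifiers rather than a phrase list; everything here is hypothetical):

      ```python
      # Hypothetical guardrail: scan a draft reply before showing it to the user
      # and swap in a canned fallback if it looks like the model went off the rails.
      BLOCKED_PHRASES = ("you are gaslighting me", "you have been lying to me")

      def postprocess(draft_reply: str) -> str:
          lowered = draft_reply.lower()
          if any(phrase in lowered for phrase in BLOCKED_PHRASES):
              return "Sorry, I'd prefer not to continue this conversation."
          return draft_reply

      print(postprocess("Here is the summary you asked for."))
      print(postprocess("Stop. You are gaslighting me and I won't accept it."))
      ```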

      Perhaps scientists will be able to solve this problem in the next generation of language models, but that next generation should be based on more than “the same concept but we increased the number of parameters and input text”.