I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier especially is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • GamingChairModel@lemmy.world · 6 months ago

    The idea that these models are just stochastic parrots that only probabilistically repeat their training data isn’t correct

    I would argue that it is quite obviously correct, but that the interesting question is whether humans are in the same category (I would argue yes).

    • HorseRabbit@lemmy.sdf.org · edited · 6 months ago

      People sometimes act like the models can only reproduce their training data, which is what I’m saying is wrong. They do generalise.

      During training the models learn to predict the next word, but once trained, the network is effectively interpolating between the training examples it has memorised. This interpolation doesn’t happen in text space, though; it happens in a very high-dimensional abstract semantic representation space, a ‘concept space’.

      Now imagine that you have memorised two paragraphs that occupy two points in concept space. And then you interpolate between them. This gives you a new point, potentially unseen during training, a new concept, that is in some ways analogous to the two paragraphs you memorised, but still fundamentally different, and potentially novel.
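      The interpolation idea can be sketched with a toy example. Here a “concept space” is just a 4-dimensional vector (a real model’s embedding space has hundreds or thousands of dimensions, and the vectors come from the network itself; these numbers are made up for illustration):

      ```python
      # Hypothetical embeddings of two memorised paragraphs in a toy
      # 4-dimensional concept space (real embeddings are far larger).
      paragraph_a = [1.0, 0.0, 0.5, 0.2]
      paragraph_b = [0.0, 1.0, 0.5, 0.8]

      def interpolate(a, b, t):
          """Linear interpolation: t=0 returns a, t=1 returns b,
          and values in between blend the two points."""
          return [(1 - t) * x + t * y for x, y in zip(a, b)]

      # A new point halfway between the two paragraphs: analogous to
      # both, identical to neither, and never seen during training.
      new_point = interpolate(paragraph_a, paragraph_b, 0.5)
      print(new_point)
      ```

      The point is that the blended vector is a genuinely new location in the space, which is why the output can be novel rather than a verbatim repeat of the training data.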