I often see people with an outdated understanding of modern LLMs.

This is probably the best interpretability research to date, by the leading interpretability research team.

It’s worth a read if you want a peek behind the curtain on modern models.

  • Drewelite@lemmynsfw.com · 6 months ago

    Ah yes, it must be the scientists specializing in machine learning studying the model full time who don’t understand it.