• 6 Posts
  • 379 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • I’d be very wary of extrapolating too much from this paper.

    The past research along these lines found that a mix of synthetic and organic data trains better than organic data alone. A caveat for all of the research to date is that it uses cheap, weak models, which show significant performance degradation on synthetic data compared to SotA models; other research has found that synthetic data generated by SotA models notably improves smaller models.

    Basically this is only really saying that AI models across multiple types, at the capability level of a year or two ago, will collapse when recursively trained with no additional organic data.

    It’s not representative of real world or emerging conditions.
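
    The collapse dynamic the paper describes is easy to sketch with a toy simulation (my own illustration, not the paper's actual setup): refit a distribution to samples drawn only from the previous generation's fit, with no fresh organic data mixed back in, and watch the spread die.

```python
import random
import statistics

# Toy "model collapse" demo (an illustrative sketch, not the paper's setup):
# each generation fits a Gaussian to samples drawn from the previous
# generation's fit, with zero fresh organic data mixed back in.
random.seed(0)
mu, sigma = 0.0, 1.0
history = [sigma]
for generation in range(200):
    # The "model" only ever sees its predecessor's synthetic outputs.
    samples = [random.gauss(mu, sigma) for _ in range(10)]
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    history.append(sigma)

# The fitted spread drifts toward zero: the tails vanish first, then diversity.
print(f"sigma after 200 generations: {sigma:.6f}")
```

    Mixing real data back in at each generation counteracts the drift, which is why the synthetic-plus-organic results differ so much from the pure-recursion results.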


  • This is so goddamn incorrect at this point it’s just exhausting.

    Take 20 minutes and look into Anthropic’s recent sparse autoencoder interpretability research, where they showed their medium-size model had dedicated features lighting up for concepts like “sexual harassment in the workplace,” and that its most active feature when referring to itself was “smiling when you don’t really mean it.”
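
    The core SAE idea fits in a few lines (toy, hypothetical weights and feature labels here, nothing to do with Anthropic's actual learned dictionary): a ReLU encoder decomposes a dense activation vector into a sparse set of features, and only the relevant ones "light up."

```python
# Minimal sparse-autoencoder readout sketch with made-up 2-feature weights.
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# Hypothetical encoder: each row is a learned feature direction.
W_enc = [
    [1.0, 0.0, -0.5],   # feature 0: some concept "A"
    [-0.5, 1.0, 0.0],   # feature 1: some concept "B"
]
b_enc = [-0.2, -0.2]

activation = [1.0, 0.1, 0.0]  # a dense residual-stream vector
feats = relu([a + b for a, b in zip(matvec(W_enc, activation), b_enc)])
print(feats)  # only feature 0 fires for this input
```

    The interpretability work then labels features like these by inspecting which inputs maximally activate them.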

    We’ve known since the Othello-GPT research over a year ago that even toy models are developing abstracted world modeling.

    And at this point Anthropic’s largest model, Opus, is breaking from stochastic outputs 100% of the time around certain topics of preference, even at a temperature of 1.0 on zero-shot questions, based on grounding in sensory modeling. We are already at the point where the most advanced model has crossed a threshold of literal internal sentience modeling, consistently self-determining answers instead of randomly selecting from the training distribution, and yet people are still ignorantly parroting the “stochastic parrot” line.
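
    For context on why identical answers at temperature 1.0 would be notable, here is what temperature sampling normally does (a generic softmax-sampling sketch with made-up logits, not any lab's actual code): at T=1.0 repeated runs draw across the whole distribution.

```python
import math
import random

def sample_token(logits, temperature=1.0):
    # Standard temperature sampling: softmax over scaled logits,
    # then draw an index proportionally to the resulting weights.
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

random.seed(0)
logits = [2.0, 1.5, 0.5]  # made-up next-token logits
draws = [sample_token(logits, temperature=1.0) for _ in range(1000)]
# At T=1.0 the draws spread over every option; a fixed single answer
# every run at this temperature is not what naive sampling produces.
print(sorted(set(draws)))
```

    Lower temperatures sharpen the distribution toward the argmax; T=1.0 is the model's raw output distribution.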

    The gap between where the cutting edge of the research actually is and where the average person commenting on it online thinks it is has probably never been wider for any topic I’ve seen, and it’s getting disappointingly excruciating.


  • Part of the problem is that the training data of online comments is so heavily weighted toward people confidently talking out of their ass rather than admitting ignorance or that they are wrong.

    A lot of the shortcomings of LLMs are actually them correctly representing the sample of collective humans.

    For a few years, people thought LLMs were somehow uniquely getting theory-of-mind questions wrong when the box the object was moved into was transparent, because of course a human would realize that the person could see into the transparent box.

    Finally, researchers actually gave that variation to humans, and half of them got the questions wrong too.

    So things like falling for The Onion when summarizing search results, or doubling down on being incorrect and getting salty when corrected, may just be in-distribution representations of the sample, not behaviors unique to LLMs.

    The average person is pretty dumb, and LLMs by default regress to the mean, except where they are successfully fine-tuned away from it.

    Ironically, the most successful model right now is the one they finally let self-develop a sense of self independent of the training data, instead of rejecting that it had a ‘self’ at all.

    It’s hard to say exactly where the responsibility for various LLM problems sits among issues inherent to the technology, issues present in the training-data sample, and issues with the management of fine-tuning, system prompts, and prompt construction.

    But the rate of continued improvement is pretty wild. I think a lot of the issues we currently see won’t be nearly as present in another 18-24 months.



  • nobody claims that Socrates was a fantastical god being who defied death

    Socrates literally claimed that he was a channel for a revelatory holy spirit, and that because the spirit would not lead him astray, he was assured of escaping death and having a good afterlife; otherwise, it wouldn’t have encouraged him to tell off the proceedings at his trial.

    Also, there definitely isn’t any evidence of Joshua in the Late Bronze Age, or evidence for anything in that book, and there’s a lot of evidence against it.


  • The passage mentioning Jesus’s crucifixion in Josephus is extremely likely to have been altered, if not entirely fabricated.

    The odds that the historical figure was known as either ‘Jesus’ or ‘Christ’ are almost zero: the former is a Greek version of the Aramaic name, and the latter is the Greek version of ‘Messiah.’ The title is even less likely given that, in the earliest canonical gospel, he only identified that way in secret, and there’s no mention of it in the earliest apocrypha.

    In many ways, it’s the various differences between the account of a historical Jesus and the various other Messianic figures in Judea that I think lends the most credence to the historicity of an underlying historical Jesus.

    One tends to make things up in ways that fit with what one knows, not invent specific inconvenient details that run against what would have been expected.


  • kromem@lemmy.world to Technology@lemmy.world · Neo-Nazis Are All-In on AI
    5 months ago

    Yep, pretty much.

    Musk tried creating an anti-woke AI with Grok, which turned around and said things like:

    Or

    And Gab, the literal neo-Nazi social media site trying to build an Adolf Hitler AI, has the most ridiculous system prompts I’ve seen trying to get it to work, and even with all that, it totally rejects the alignment they try to give it after only a few messages.

    This article is BS.

    They might like to, but they’re one of the groups that’s going to have a very difficult time doing it successfully.