Has anyone else noticed this kind of thing? This is new for me:

            povies.append({
                'tile': litte,
                're': ore,
                't_summary': put_summary,
                'urll': til_url
            })

“povies” is an attempt at “movies”, and “tile” and “litte” are both attempts at “title”. And so on. That’s a little more extreme than usual, but for a week or two now, GPT-4 has generally been putting little senseless typos like this into the code it writes for me (usually 1-2 in about half the code chunks it generates). Has anyone else seen this? Any explanation / way to make it stop doing this?
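
For context, the snippet was aiming at something like this – “movies” and “title” are what it was clearly going for, and the rest of the names here are just stand-ins for what the other fields were supposed to be:

            # roughly the intended version -- 'movies' and 'title' per above;
            # 'score', 'plot_summary', and 'title_url' are illustrative stand-ins
            movies.append({
                'title': title,
                'score': score,
                'plot_summary': plot_summary,
                'url': title_url
            })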

  • mozz@mbin.grits.devOP · 6 months ago

    Yeah. Now that I’m thinking about it, it’s been doing other weird stuff like that – it was always a little wonky, I think, just because of the nature of working with an LLM, but lately it’s been doing stuff like: I ask it to do A, then later I ask it to do B, and it cheerfully confirms that it’s doing A (not realizing it already did it), and emits code that’s sort of a mixture of A and B.

    IDK. I’ve also heard good things about Mistral. I just tried to create a Claude account, but the phone verification isn’t working and I have no idea why. I may check it out though; if this is accurate, then it’s pretty fuckin fancy, and the Haiku model is significantly cheaper and smarter even than the 3.5 API, which has a notable lack of cleverness sometimes.

    • thebeardedpotato@lemmy.world · 6 months ago

      ChatGPT has been doing this thing where I’ll ask it to do A, B, and C in sequential, iterative prompts, but when it does C, it removes the lines it added for B. Then when I tell it that it removed B and needs to add it back in, it undoes C while saying it’s doing A, B, and C. So frustrating.