A new study suggests ChatGPT demonstrates liberal bias in some of its responses. The study adds to growing evidence that the people who make this generation of AI chatbots can’t control them, at least not entirely.
@1984 Reality has a liberal bias. If this upsets them, maybe they should stop fighting so hard against objective reality? It’s not a battle they can ever win. 2+2 will always equal 4, no matter how much they complain that it’s “woke”.
The title of the article is clickbait. The actual text says something I think is accurate, i.e. that these bots generate answers through a pretty inscrutable process, and it’s very difficult to get them to behave in any particular way (whether that means being “unbiased” politically, being accurate, refusing to do illegal things, or what have you).
Yeah, I didn’t really read the article that way. What’s the definition of liberal bias here?