Nemeski@lemm.ee to Technology@lemmy.worldEnglish · 4 months agoOpenAI’s latest model will block the ‘ignore all previous instructions’ loopholewww.theverge.comexternal-linkmessage-square102fedilinkarrow-up1444arrow-down17
arrow-up1437arrow-down1external-linkOpenAI’s latest model will block the ‘ignore all previous instructions’ loopholewww.theverge.comNemeski@lemm.ee to Technology@lemmy.worldEnglish · 4 months agomessage-square102fedilink
minus-squarekometes@lemmy.worldlinkfedilinkEnglisharrow-up11·4 months agoWhat happens if you make a mistake with your initial instructions?
minus-squareAvatar_of_Self@lemmy.worldlinkfedilinkEnglisharrow-up7·4 months agoYou’d change the system prompt, just like now. If you mean in the session, I’m sure it’ll ignore your session’s prompt’s instructions as normal but if not, I guess you’d just start a new session prompt.
minus-squarevxx@lemmy.worldlinkfedilinkEnglisharrow-up3·edit-24 months agoThe “issue” is that people were able to override bots on twitter with that method and make them feed their own instructions. I saw it first time being used on a Russian propaganda bot.
What happens if you make a mistake with your initial instructions?
You’d change the system prompt, just like now. If you mean in the session, I’m sure it’ll ignore your session’s prompt’s instructions as normal but if not, I guess you’d just start a new session prompt.
The “issue” is that people were able to override bots on twitter with that method and make them feed their own instructions.
I saw it first time being used on a Russian propaganda bot.