Small rant: basically, the title. Instead of answering every question, if it instead said it doesn’t know the answer, it would be trustworthy.
My first thought is that you could write a program that does something like this:
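A minimal sketch of one way such a program might work, assuming the idea is roughly "ask the model to write code for the question, run that code, and report what it printed." The `ask_llm` helper is a hypothetical stand-in for whatever LLM API you would actually call (here it just returns a canned snippet so the sketch runs), and the question/answer flow is purely illustrative:

```python
import subprocess
import sys
import tempfile


def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whatever LLM API is being used.

    Returns a canned snippet here so the sketch runs end to end; in
    practice this would send `prompt` to the model and return its reply.
    """
    return "print(2 + 2)"


def answer_via_generated_code(question: str) -> str:
    # 1. Ask the model to write a small program that computes the answer.
    prompt = (
        "Write a short, self-contained Python program that prints the answer "
        f"to the following question and nothing else:\n{question}"
    )
    code = ask_llm(prompt)

    # 2. Run the generated code in a separate interpreter process.
    #    (A real system would need sandboxing -- see the point about
    #    malicious code below.)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=10
    )

    # 3. Treat whatever the program printed as the answer, and admit
    #    failure instead of guessing if the code crashed.
    if result.returncode != 0:
        return "I don't know (the generated code failed to run)."
    return result.stdout.strip()


if __name__ == "__main__":
    print(answer_via_generated_code("What is 2 + 2?"))
```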
Of course, the biggest problem with this system is that a person could fool it into generating malicious code.
That could work in that specific case, but telling the LLM to write code to answer random questions probably wouldn’t work very well in general.