It just feels too good to be true.
I’m currently using it for formatting technical texts and it’s amazing. It can’t generate them properly from scratch, but if I give it the bulk of the info it makes it pretty af.
Also just talking and asking for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I’m volunteering my personal problems and innermost thoughts to a company that will misuse that.
Are these concerns valid?
ChatGPT works off of a fixed-size maximum prompt. This was originally about 4000 tokens. A token is roughly 4 characters or one short word, but it’s not quite that… https://platform.openai.com/tokenizer
“Tell me a story about a wizard” is 7 tokens. So ChatGPT generates some text to tell you a story. That story is, say, 1000 tokens long. You then ask it to “Tell me more of the story, and make sure you include a dinosaur.” (15 tokens). You get another 1000 tokens. Repeat this twice more, and at that point the length of the entire chat history is about 4000 tokens.
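If you want to check those counts yourself, OpenAI’s tiktoken library tokenizes text locally. A quick sketch, assuming the cl100k_base encoding used by the GPT-3.5/GPT-4 family (exact counts can differ slightly between models):

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by the gpt-3.5/gpt-4 family;
# treat the counts as estimates, since other models use other encodings.
enc = tiktoken.get_encoding("cl100k_base")

for prompt in [
    "Tell me a story about a wizard",
    "Tell me more of the story, and make sure you include a dinosaur.",
]:
    tokens = enc.encode(prompt)
    print(len(tokens), "tokens:", tokens)
```

Running that prints 7 and 15 tokens for the two prompts above.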
ChatGPT then internally asks itself to summarize the entire 4000-token history into 500 tokens. Some of the simpler models can do this quite quickly - though they’re imperfect. The thing is, at that point you’ve got 500 tokens that summarize the 4 acts of the story and the prompts used to generate them - and that’s lossy.
As the conversation goes on, the summaries get lossier and lossier and the chat session will “forget” things.
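Roughly, the mechanism described above looks like this. This is a toy sketch, not ChatGPT’s actual internals - count_tokens() and summarize() here are stand-ins for a real tokenizer and a real model call:

```python
# Toy sketch of rolling summarization of a chat history.
# NOT ChatGPT's actual implementation; the helpers are placeholders.
MAX_TOKENS = 4000
SUMMARY_BUDGET = 500

def count_tokens(text: str) -> int:
    # Rough "1 token ~= 4 characters" estimate from the comment above.
    return len(text) // 4

def summarize(text: str, budget: int) -> str:
    # In reality this would be another model call asked to produce a
    # summary no longer than `budget` tokens; detail is lost every pass.
    return text[: budget * 4]

history: list[str] = []

def add_turn(message: str) -> None:
    history.append(message)
    while sum(count_tokens(m) for m in history) > MAX_TOKENS:
        # Collapse everything so far into one lossy summary and keep going.
        merged = "\n".join(history)
        history.clear()
        history.append(summarize(merged, SUMMARY_BUDGET))
```

Each time the history overflows, everything gets squashed through that 500-token bottleneck again, which is why earlier details eventually disappear.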