Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.
After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend’s events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though the model only performs math at the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.
Reuters could not independently verify the capabilities of Q* claimed by the researchers.
A calculator can do most of that too, but this is an LLM that can do lots of other things as well, which is a big piece of the “general” part of AGI.
Richard Feynman said, “You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say, ‘How did he do it? He must be a genius!’”
We are close to a point where a computer that can hold all the problems in its “head” can test all of them against all of the tricks. I don’t know which math problems that starts to solve, but I bet a few of them would be applicable to cryptology.
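As a toy illustration of that cross-check idea (my own framing, not anything from the article): the problems, tricks, and the trick_helps() test below are all made-up placeholders, and in reality each check would be an expensive attempt rather than a cheap function call.

```python
# Toy sketch of "test every problem against every trick."
# problems, tricks, and trick_helps() are hypothetical placeholders.
problems = [
    "factor large semiprimes",
    "find short vectors in this lattice",
    "solve this Diophantine equation",
]
tricks = [
    "lattice reduction",
    "gradient descent on a relaxed version",
    "dynamic programming over substructures",
]

def trick_helps(problem: str, trick: str) -> bool:
    """Placeholder heuristic standing in for actually trying the trick."""
    return hash((problem, trick)) % 4 == 0

# The machine can afford the full cross-product that a person cannot.
for problem in problems:
    for trick in tricks:
        if trick_helps(problem, trick):
            print(f"possible hit: {trick!r} might apply to {problem!r}")
```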
But then again, I have no idea what I’m talking about and am just making bold guesses based on close to no information.
Even so, I think I’ll hold off on calling anything AGI until it can at least solve simple calculus problems with a 90% success rate (reproducibly). That seems like a fair criterion to me.
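A minimal sketch of what that bar could look like in practice (entirely my framing, not anything OpenAI has described): generate simple differentiation problems from a fixed random seed so the run is reproducible, ask the model, and check answers symbolically with sympy. ask_model here is a hypothetical stub you would swap for a real API call.

```python
# Reproducible evaluation sketch: simple calculus problems, 90% pass bar.
import random
import sympy as sp

x = sp.symbols("x")

def make_problem(rng: random.Random):
    """Generate a simple differentiation problem and its reference answer."""
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    expr = a * x**b + sp.sin(b * x)
    return expr, sp.diff(expr, x)

def ask_model(expr) -> str:
    """Hypothetical model call; replace with a real API. This stub just
    returns the correct answer so the harness runs end to end."""
    return str(sp.diff(expr, x))

def evaluate(n_problems: int = 100, seed: int = 0) -> float:
    """Fixed seed keeps the problem set identical across runs."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_problems):
        expr, reference = make_problem(rng)
        answer = ask_model(expr)
        try:
            # Symbolic equality, so algebraically equivalent forms count.
            if sp.simplify(sp.sympify(answer) - reference) == 0:
                correct += 1
        except (sp.SympifyError, TypeError):
            pass  # unparseable answers count as wrong
    return correct / n_problems

if __name__ == "__main__":
    accuracy = evaluate()
    print(f"accuracy = {accuracy:.2%}, passes 90% bar: {accuracy >= 0.9}")
```

Checking answers symbolically rather than by string match matters here, since an answer like 3*cos(3*x) + 18*x**2 should count even if the terms come back in a different order.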
And Feynman said this in the ’80s, when AI as we know it today was barely a concept.