Ok, but so do most humans? So few people actually have true understanding of topics. They parrot the parroting that they have been told throughout their lives. This only gets worse as you move into more technical topics. Ask someone why it is cold in winter and you will be lucky if they say it is because the days are shorter than in summer. That is the most rudimentary “correct” answer to that question, and it is still an incorrect parroting of something they have been told.
Ask yourself: what do you actually understand? How many topics could you be asked “why?” on repeatedly and actually be able to answer more than 4 or 5 times? I know I have a few. I also know which ones I am not able to do that with.
deleted by creator
https://en.m.wikipedia.org/wiki/Chinese_room
I think they’re wrong, as it happens, but that’s the argument.
deleted by creator
In some ways, you are correct. It is coming, though. The psychological/neurological word you are searching for is “conceptualization”. The AI models lack the ability to abstract the text they know into the abstract ideas of the objects, at least in the way humans do. Technically, the ability to say “show me a chair” and have it return images of a chair, then follow up with “show me things related to the last thing you showed me” and have it show couches, butts, tables, etc., is a conceptual abstraction of a sort. The issue comes when you ask “why are those things related to the first thing?” It is coming, but it will be a little while before it is able to describe the abstraction it just did; it is capable of the first stage at least.
Some systems clearly do that though or are you just talking about llms?
deleted by creator
It’s like saying, bro, this mouse can’t even type text if I don’t use an on-screen keyboard.
It doesn’t need to understand the words to perform logic, because the logic was already performed by humans who encoded their knowledge into words. It’s not reasoning, but the reasoning was already done by humans. It’s not perfect, of course, since it’s still based on probability, but the fact that it can pull the correct sequence of words to exhibit logic is incredibly powerful. The hard part of working with LLMs is that they break randomly, so harnessing their power will be a matter of programming in multiple levels of safeguards.
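A minimal sketch of that layered-safeguard idea: wrap the unreliable generator in validators and retry, refusing outright if nothing passes. Everything here is hypothetical for illustration; `call_llm` stands in for whatever model API you actually use.

```python
def call_llm(prompt):
    # Placeholder for a real (and occasionally wrong) model call.
    return "42"

def safeguarded_answer(prompt, validators, retries=3):
    """Retry until every validator passes; refuse explicitly otherwise."""
    for _ in range(retries):
        answer = call_llm(prompt)
        if all(check(answer) for check in validators):
            return answer
    return None  # better to refuse than to return an unchecked answer

# Example safeguards: answer must be non-empty, and numeric when we
# expect a number.
validators = [lambda a: bool(a.strip()), lambda a: a.strip().isdigit()]
result = safeguarded_answer("What is 6 * 7?", validators)
```

The point is the shape, not the specifics: each “level of safeguard” is just another validator in the list, and the caller decides what a refusal (`None`) means.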
I feel that knowing what you don’t know is the key here.
An LLM doesn’t know what it doesn’t know, and that’s where what it spouts can be dangerous.
Of course that applies to a lot of actual people as well. And sadly they’re often in positions of power.
There are more than a couple of research agents in development.
We need something that can fact-check in real time without error. That would fuck Twitter up lol
Few people truly understand what understanding means at all. I had a teacher in college who seriously thought that you should not understand the content of the lessons but simply remember it to the letter.
I am so glad I had one that was the opposite. I discussed practical applications of the subject material after class with him and at the end of the semester he gave me a B+ even though I only got a C by score because I actually grasped the material better than anyone else in the class, even if I was not able to evaluate it as well on the tests.
I’m glad for you) Our teacher liked to offer discussion only to shoot us down when we tried to understand something. I was like, duh, that’s what teachers are for, to help us understand; if teachers don’t do that, then it’s the same as watching YouTube lectures.
This is only one type of intelligence, and LLMs are already better than humans at regurgitating facts. But I think people really underestimate how smart the average human is. We are incredible problem solvers, and AI can’t even match us in something as simple as driving a car.
Lol @ driving a car being simple. That is one of the more complex sensorimotor tasks that humans do. You have to calculate the speed of all vehicles in front of you, assess collision probabilities, monitor for non-vehicle obstructions (like people, animals, etc.), adjust the accelerator to maintain your own velocity while the terrain changes, be alert to any functional changes in your vehicle and be ready to adapt to them, and maintain a running inventory of the laws which apply to you at a given time and be sure to follow them. Hell, that is not even an exhaustive list for a sunny day under the best conditions. Driving is fucking complicated. We have all just formed strong, deeply connected pathways in our somatosensory and motor cortices to automate most of the tasks. You might say it is a very well-trained neural network with hundreds to thousands of hours spent refining and perfecting the responses.
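To give a sense of just one subtask in that list, here is a toy time-to-collision estimate of the kind a driver’s brain (or a driver-assist system) effectively computes for the vehicle ahead. The function and numbers are illustrative only, not any real system’s logic:

```python
def time_to_collision(gap_m, own_speed_mps, lead_speed_mps):
    """Seconds until contact with the vehicle ahead, assuming both
    speeds stay constant; None if the gap is not closing."""
    closing = own_speed_mps - lead_speed_mps
    if closing <= 0:
        return None  # lead car is pulling away or matching our speed
    return gap_m / closing

# 30 m behind a car doing 20 m/s while we do 25 m/s:
print(time_to_collision(30.0, 25.0, 20.0))  # 6.0 seconds to react
```

And a driver runs this continuously, for every vehicle in view, while doing everything else in the list above.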
The issue that AI has right now is that we are only running 1 to 3 sub-AIs to optimize and calculate results. Once that number goes up, they will be capable of a lot more. For instance: one AI for finding similarities, one for categorizing them, one for mapping them into a use-case hierarchy to determine when certain use cases apply, one to analyze structure, one to apply human kineodynamics to the structure, and a final one to analyze the effectiveness of the kineodynamic use cases when done by a human. This would be a structure that could be presented with an object, told that humans use it, and the AI brain would be able to piece together possible uses for the tool and describe them back to the presenter with instructions on how to use it.
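That sub-AI chain is essentially a pipeline where each stage’s output feeds the next. A toy sketch of the plumbing, with placeholder functions standing in for the actual models (none of these stages or outputs are real):

```python
# Each "sub-AI" is modeled as a function that enriches a shared state.

def find_similarities(obj):
    return {"object": obj, "similar_to": ["hammer", "mallet"]}

def categorize(state):
    state["category"] = "hand tool"
    return state

def map_use_cases(state):
    state["uses"] = ["striking", "prying"]
    return state

def run_pipeline(obj, stages):
    """Thread the object through every stage in order."""
    state = obj
    for stage in stages:
        state = stage(state)
    return state

result = run_pipeline("claw hammer",
                      [find_similarities, categorize, map_use_cases])
```

Adding more sub-AIs is then just appending stages to the list; the hard part, of course, is making each stage actually work.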
AI can beat me in driving a car, and I have a degree.
Jokes on them. I don’t even calculate when I need to parrot. I am beyond such lowly needs.