- cross-posted to:
- artificial_intel@lemmy.ml
Did nobody really question the usability of language models in designing war strategies?
LLMs are just plagiarizing, bullshitting machines. It’s how they are built: plagiarize when they have the specific training data, modify the answer when they must, and make it up from whole cloth when neither works. And they’re accidentally good enough to convince many people.
How is that structurally different from how a human answers a question? We repeat an answer we “know” if possible, assemble something from fragments of knowledge if not, and just make something up from basically nothing if needed. The main difference I see is a small degree of self-reflection — the ability to estimate how “good or bad” the answer likely is — and frankly, plenty of humans are terrible at that too.
A human brain can do that on about 20 watts of power. ChatGPT uses up to 20 megawatts.
Yeah, and a car uses more energy than I do. It still goes faster. What’s your point? The debate isn’t about input vs. output — it’s only about output (the ability of the AI).