Large language models (LLMs) are being explored by the U.S. Department of Defense for military use, but concerns are growing about outsourcing critical decisions to machines. Experts warn that while LLMs have military utility, their behavior is unpredictable in high-stakes scenarios, especially in decisions about war and escalation. The Pentagon's increasing reliance on artificial intelligence raises questions about whether LLMs can substitute for human judgment in such situations.
The gist: LLMs may already be inventing their own words to express superhuman concepts and extending our language as they see fit. We might be very close to simply being unable to understand them anymore. https://t.co/HIp6jJdLlY
.@MLamparth and @JackieGSchneid discuss how militaries are exploring the use of large language models—and examine the limitations and dangers of relying on LLMs and other AI-enabled decision-making tools on the battlefield. https://t.co/tQZcCHAywD
Large Language Models and the Paradox of the Unsayable: Are LLMs pushing the bounds of language beyond the inconceivable? As language models become more advanced, we're seeing them stretch the limits of language itself in an attempt to communicate transcendent ideas that… https://t.co/J7Xbf5g1Au
In a new essay in @foreignaffairs, @JackieGSchneid and Max Lamparth argue that while large language models are useful, their actions are difficult to predict, and they can make dangerous escalatory calls: https://t.co/UeZAz1u8Nm
Despite the Pentagon’s growing enthusiasm for artificial intelligence and large language models, LLMs cannot serve as direct substitutes for human decision-making, especially in high-stakes situations, warn @MLamparth and @JackieGSchneid. https://t.co/DzCZH023ei
.@MLamparth and @JackieGSchneid explain how the U.S. military is experimenting with large language models—and warn of the potential dangers of outsourcing high-stakes decisions to artificial intelligence systems: https://t.co/L4RHrxi6jz
Large language models can have useful military purposes—but their use should be regulated so they don’t end up making dangerous calls in high-stakes situations, warn @MLamparth and @JackieGSchneid. https://t.co/qyi1KHXtGH
“Militaries must realize that, fundamentally, a large language model’s behavior can never be completely guaranteed, especially when making rare and difficult choices about escalation and war.” https://t.co/ifRLSPTgz0
Why LLMs can't write great books (3 slightly technical lessons from the trenches). 1/ AI is a poor generalizer: LLMs (large language models) are trained on diverse datasets and are excellent at handling specific instances or repeating patterns they've seen during training... but…
“Large language models have plenty of uses within the U.S. Department of Defense, but it is dangerous to outsource high-stakes choices to machines,” write @MLamparth and @JackieGSchneid. https://t.co/y2CZhKg2a6
I am sympathetic to the position that AI systems need stronger reasoning capabilities. But LLMs are also trained on a lot of expert-generated text and structured data, so merely reproducing that data would already require them to be more accurate than the average non-expert human. https://t.co/yNPOVBbMmL