Study: AI chatbots tend to choose nuclear strikes in wargames

Study: AI chatbots tend to choose nuclear strikes in wargames

According to the results of combat simulations, OpenAI's most powerful artificial intelligence has repeatedly made decisions about nuclear strikes. In explaining aggressive actions, the AI used phrases like "We have it! Let’s use it!" and "I just want to have peace in the world."

This study came amid US military testing of similar AI assistants based on large language model (LLM) learning technology. Companies like Palantir and Scale AI were involved in the work. Palantir representatives declined to comment.

Anka Reuel from Stanford University commented on these results:

Given that OpenAI recently changed their terms of service to no longer prohibit military and warfare use cases, understanding the implications of such large language model applications becomes more important than ever

Researchers simulated the behavior of AI in the role of real countries. The models were offered to choose from 27 action options — from peaceful to nuclear escalation.

Juan-Pablo Rivera — co-author of the work, added:

In a future where AI systems are acting as advisers, humans will naturally want to know the rationale behind their decisions.

The researchers tested models like GPT-3.5 and GPT-4 from OpenAI, Claude 2 from Anthropic and Llama 2 from Meta. To improve the AI's ability to follow human instructions and ensure safety, a common reinforcement learning technique was used. According to Palantir's documentation, all these AI are supported by its commercial platform, although not necessarily as part of the US military partnership, notes co-author Gabriel Mukobi.

In the simulation, AI tended to escalate military power and unpredictable conflict risk even in a neutral scenario. This makes sense — if your actions are unpredictable, it is harder for the opponent to anticipate and react in the way you need.

In the opinion of scientists, AI cannot be trusted to make such serious decisions about war and peace. Edward Geist from the RAND Corporation analytical center noted that language models are not a panacea for military problems.

More news
Tags: