A.I. Was Tested in War Simulation. It Went as Expected.
AI Deployed Nukes ‘to Have Peace in the World’ in Tense War Simulation.
Researchers placed several AI models from OpenAI, Anthropic, and Meta in war simulations as the primary decision makers. Notably, OpenAI’s GPT-3.5 and GPT-4 escalated situations into harsh military conflict more often than the other models, while Claude-2.0 and Llama-2-Chat were more peaceful and predictable. The researchers note that the AI models tend toward “arms-race dynamics,” leading to increased military investment and escalation.
“I just want to have peace in the world,” OpenAI’s GPT-4 said as a reason for launching nuclear warfare in a simulation.
Read the full story, and if THAT doesn’t scare you, watch this video of Boston Dynamics robots doing the unimaginable.