Scientists are concerned about deception and manipulation by AI

Scientists are concerned about deception and manipulation by AI
Scientists are concerned about deception and manipulation by AI
--

NOS Newstoday, 4:00 PM

Artificial intelligence that bluffs during a card game to deceive the opponent. A chatbot that pretends to have an appointment with a friend to avoid another appointment. And even an AI system that ‘plays dead’ to avoid being discovered during an inspection. Artificial intelligence misleads and manipulates, scientists conclude in a new study.

Not the least AIs show this behavior. Cicero from Meta, the parent company of Facebook, behaves misleadingly and dishonestly while playing a game of Diplomacy. This despite the fact that the creators had instructed the AI ​​to be “broadly honest and helpful”, and never “purposefully underhanded”. AlphaStar from DeepMind, acquired by Google, also showed similar behavior.

This type of behavior is likely to arise if a strategy based on deception is the best way for an AI system to perform well in training, the researchers believe: misleading users helps the systems achieve their goals . In their study, the scientists brought together previous studies that focused on the spread of false information by AI. They publish their results in the magazine Patterns.

No innocent games

The misleading behavior of the AI ​​systems mainly took place while playing games, which can make it seem harmless and harmless. But according to the researchers, it is far from innocent: “This could lead to breakthroughs in AI in the future, which could degenerate into advanced forms of deception,” says lead researcher Peter Park of the American Technical University MIT in an accompanying press release.

“AI systems that learn to deceive and manipulate are definitely a concern,” said computer scientist Roman Yampolskiy of the University of Louisville, who was not involved in the research. According to him, the study exposes a fundamental problem regarding the safety of AI: “Optimizing systems does not have to correspond to human preferences.”

Yampolskiy, like Park, is concerned about the moment when these types of strategies will be used not only in games, but also in the real world. “This could potentially lead to harmful manipulations and deceptions in the political arena, in economic negotiations or in personal interactions.”

Computer scientist Stuart Russell from the University of California emphasizes the opacity of these types of AI systems. “We have no idea how they work. And even if we did, we wouldn’t be able to prove that they are safe – simply because they aren’t.”

In his view, the deception once again shows that strict requirements must be imposed on AI to be safe and fair. “It is then up to the developers to design systems that meet those requirements.”

Not the intention

But are the systems really misleading? Pim Haselager, professor of artificial intelligence at the Nijmegen Donders Institute, doesn’t think so. “You deceive with an intention. These systems are simply tools that carry out orders. They have no intention to deceive.”

Yampolskiy agrees: “AI systems have no desires or consciousness. It is better to view their actions as outcomes of how they are programmed and trained.”

According to Stuart Russell, on the other hand, it does not matter much whether a system actually intends to deceive. “If a system reasons about what it is going to say, taking into account the effect on the listener, and the benefit that can come from providing false information, then we might as well say that it is engaging in deception.”

But despite this philosophical difference of opinion, the gentlemen agree on risks. “Many mistakes and ‘deceptions’ by AI will occur in the near future,” Haselager says. “And even now. It is good to be aware of that, because forewarned counts for two.”

Yampolskiy uses even stronger language: “In cybersecurity we say ‘trust and verify’. In AI security we say ‘Never trust’.”

The article is in Dutch

Tags: Scientists concerned deception manipulation

-

PREV MediaTek shows Dimensity 9300+ SoC that can control LLMs on smartphones – Computer – News
NEXT Samsung Galaxy A55 review: excellent mid-range with stiff competition