AIs are able to come to group decisions without human intervention and even persuade each other to change their minds, a new study has revealed.
The study, carried out by scientists at City St George's, University of London, was the first of its kind and ran experiments on groups of AI agents. The first experiment asked pairs of AIs to come up with a new name for something, a well-established experiment in human sociology studies.
Those AI agents were able to come to a decision without human intervention. "This tells us that once we put these objects in the wild, they can develop behaviours that we were not expecting or at least we didn't programme," said Professor Andrea Baronchelli, professor of complexity science at City St George's and senior author of the study.
The pairs were then put in groups and were found to develop biases towards certain names. Some 80% of the time, they would select one name over another by the end, despite having no biases when they were tested individually.
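The pairwise task is a version of the classic "naming game" used in studies of how human conventions form. Purely as an illustrative sketch (the study itself used LLM agents; the agents below are simple memory-based stand-ins, and the names and parameters are invented for this example), the way a group can settle on one name through repeated pairwise interactions looks roughly like this:

```python
import random
from collections import Counter

# Illustrative naming-game sketch only. The study's agents were LLMs; these are
# simple memory-based stand-ins, and the candidate names and parameters below
# are invented for the example.

NAMES = ["wren", "alba", "kiro", "mave"]  # hypothetical candidate names
N_AGENTS = 24
N_ROUNDS = 20_000

# Each agent keeps an "inventory" of names it currently considers acceptable.
inventories = [set() for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    # The speaker proposes a name it already knows, or picks one at random if it knows none.
    if inventories[speaker]:
        name = random.choice(sorted(inventories[speaker]))
    else:
        name = random.choice(NAMES)
    if name in inventories[hearer]:
        # Agreement: both agents drop every other name they had.
        inventories[speaker] = {name}
        inventories[hearer] = {name}
    else:
        # Disagreement: both agents remember the proposed name for future rounds.
        inventories[speaker].add(name)
        inventories[hearer].add(name)

# After enough rounds, the whole population usually collapses onto a single name,
# even though no individual agent was told which name to prefer.
print(Counter(name for inv in inventories for name in inv).most_common())
```

Run for enough rounds, a population of such agents almost always converges on one shared name with no central coordination, which is the kind of spontaneous, collective convention the researchers report emerging among the AI agents.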
This means the companies developing artificial intelligence need to be even more careful to control the biases their systems create, according to Prof Baronchelli. "Bias is a main feature or bug of AI systems," he said.
"More often than not, it amplifies biases that are in society and that we wouldn't want to be amplified even further [when the AIs start talking]." The third stage of the experiment saw the scientists inject a small number of disruptive AIs into the group. They were tasked with changing the group's collective decision - and they were able to do it.
This could have worrying implications if AI is in the wrong hands, according to Harry Farmer, a senior analyst at the Ada Lovelace Institute, which studies artificial intelligence and its implications. AI is already deeply embedded in our lives, from helping us book holidays to advising us at work and beyond, he said.
"These agents might be used to subtly influence our opinions and at the extreme, things like our actual political behaviour; how we vote, whether or not we vote in the first place," he said. Those very influential agents become much harder to regulate and control if their behaviour is also being influenced by other AIs, as the study shows, according to Mr Farmer.
"Instead of looking at how to determine the deliberate decisions of programmers and companies, you're also looking at organically emerging patterns of AI agents, which is much more difficult and much more complex," he said..