
Austria said on Monday that fresh efforts are needed to regulate artificial intelligence, which could lead to autonomous killing systems and AI weapons, also known as killer robots.

Austria hosts a conference to address the regulation of AI weapons

The remarks were made at the conference “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation,” a two-day event hosted by the Austrian Federal Ministry for European and International Affairs in Vienna that began yesterday and concludes today.

Rapid advances in AI are making autonomous weapons systems capable of killing without human involvement a reality, raising ethical and legal challenges that many countries want addressed urgently.

Austrian Foreign Minister Alexander Schallenberg said,

“We cannot let this moment pass without taking action. Now is the time to agree on international rules and norms to ensure human control.”

Source: Reuters.

He was speaking to a gathering of envoys from 143 countries, along with international and non-governmental organizations. With global discussions on the issue having largely stalled, according to the news agency, he added in his opening remarks at the conference,

“At least let us make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines.”

With years of talks at the United Nations producing few outcomes of real-world impact, many participants at the Vienna conference said the time for action is slipping away.

AI can mistake a person’s head for a football

Speaking on a panel at the conference, Mirjana Spoljaric, president of the International Committee of the Red Cross, said it is important to act on the issue, and to act fast.


Spoljaric also warned against handing responsibility for violence, or control over it, to machines and algorithms. The world is already witnessing what she called “moral failures in the face of the international community” across various violent scenarios, she said, and such setbacks should not be allowed to grow.

Diplomats said that AI has already reached the battlefield: in the war in Ukraine, drones are being designed to navigate to their targets on their own even when signal jamming cuts the link to their operators.

The United States said it was reviewing media reports that the Israeli military had used AI systems to help identify targets in Gaza. The irony is that the same United States has kept supplying weapons to Israel, which stands accused of using them carelessly and killing innocent civilians.

In a keynote address, tech investor and software engineer Jaan Tallinn said,

“We have already seen AI making selection errors in ways both large and small.”

Source: Reuters.

He was referring to failures AI still struggles with, such as mistaking a referee’s bald head for a football, or self-driving cars killing pedestrians they fail to recognize as jaywalkers. The agony is that countries like Israel are using this technology, which has yet to mature if it ever does, to identify targets among thousands of surrounding civilians in order to kill the wanted ones. Tallinn also emphasized that the accuracy of these systems should be approached cautiously.