For every up, there is a down. For every wrong, there is a right, and for every good thing, there is a bad to go with it. There are many benefits that autonomous weapons could bring and if you were ever in a position where you needed a weapon you’d be quite grateful to have one by your side. But, what if that weapon suddenly malfunctioned and aimed at you instead?
This is the main worry for many people when they think about autonomous weapons, and it seems that representatives from 89 nations share the concern. At the United Nations Convention on Conventional Weapons (CCW), 89 nations voted to convene experts twice in 2017 to discuss the implications of autonomous weapons, including systems that choose targets without human instruction. If the movement is successful, it could lead to a total ban on weapons controlled by artificial intelligence.
Many of the world’s top tech leaders, including Elon Musk, Steve Wozniak, and Stephen Hawking, are against these killer robots and recognize that something needs to be done, and fast. They all agree that putting artificial intelligence in charge of warfare would considerably increase the risk to human civilians. And if one of these robots did accidentally kill an innocent human, who would be at fault? The programmer? The operator? The manufacturer?
When it comes to situations such as these, is preventative action not better than recovery action? Steve Goose of Human Rights Watch says, “Once these weapons exist, there will be no stopping them. The time to act on a pre-emptive ban is now.” But while others remain on the fence, arguing that these robots may actually decrease the number of casualties, it may be a while before an all-out ban goes ahead.