Ok, so most of us recognize that it needs to be done, but how do we actually go about regulating artificial intelligence (AI)? In a recent paper published in the journal Science Robotics, researchers Sandra Wachter, Brent Mittelstadt, and Luciano Floridi explain why policing robotics is so difficult. It will only become harder as AI is more widely used across the world.
Issue 1: Transparency; When an AI uses neural networks to teach itself, you no longer have full control over exactly how it behaves. Yet this is currently a very popular form of AI, and it is often used to complete complex tasks such as analyzing images.
Issue 2: Construction; “Concerns about fairness, transparency, interpretability, and accountability are equivalent, have the same genesis, and must be addressed together, regardless of the mix of hardware, software, and data involved,” said the researchers. The line between robots and AI is blurred, and it is no longer wise to consider them separate entities. Unfortunately, that means we won’t only have to regulate AI; any robot that uses AI will also need to be monitored.
Issue 3: Robots and AI are diverse; In 2014, a group of Swiss artists developed an AI called DarkNet Shopper that bought a number of illegal items from the darknet, including ecstasy and a Hungarian passport. Although the artists were caught by police and the items were confiscated, they were later cleared of any charges. But if a group of artists can do it, it is only a matter of time before the technique is abused by the wrong people. They may well get caught, but they also may not.
If we are to stand a chance of keeping pace with AI, we need to get regulations and policies in place now. “The civil law resolution on robotics similarly struggles to define precise accountability mechanisms,” write the researchers.