British researchers say controlling A.I. may be nearly impossible. Sandra Wachter, Luciano Floridi, and Brent Mittelstadt are the researchers behind a new paper, recently published in Science Robotics, laying out the reasons why A.I. will be so hard to control. While it might sound like science fiction, artificial intelligence is becoming more and more a part of everyday life. But, according to the researchers, the diversity of construction, transparency, and application in robotics will make regulation very difficult, if not impossible.
Robot Diversity
Back in 2014, Swiss police arrested a robot for making illegal purchases. The bot, known as the Random DarkNet Shopper, was built by a Swiss artist group specifically to make random purchases off the dark web. Unsupervised, the A.I. bought ecstasy and a variety of counterfeit items, including a Hungarian passport. Neither the robot nor the artists were charged with any wrongdoing.
It was deemed that the robot’s purchases were accidental and its intent not dangerous. However, it’s not a stretch to imagine a similar A.I. built with more menacing intent. The researchers write: “The inscrutability and the diversity of A.I. complicate the legal codification of rights, which, if too broad or narrow, can inadvertently hamper innovation or provide little meaningful protection.”
A.I. Transparency
The most popular way to build A.I. at the moment is with a neural network, which allows a system to learn quickly. However, this type of network obscures the reasoning behind its decision-making. Yet it’s exactly this kind of system we need for intricate, complicated tasks like analyzing images, which is why it is the most widespread form of A.I. today.
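To see why this opacity is inherent and not just a missing feature, here is a minimal sketch (assuming NumPy; the XOR task, network size, and training settings are illustrative, not from the paper). After training, the network’s “reasoning” is nothing but matrices of real numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR with a tiny two-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 1.0, []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)               # forward pass
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    d_out = (out - y) * out * (1 - out)    # backprop, mean-squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# The network's learned "logic" is just these matrices of numbers.
# Nothing in them states *why* an input gets the answer it does.
print("hidden weights:\n", W1)
print("final loss:", losses[-1])
```

Even on this four-example toy problem, inspecting the learned weights tells you nothing about the rule the network follows; a real image-analysis model has millions of such numbers, which is the inscrutability the researchers describe.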
If we can’t see what an A.I. like the Random DarkNet Shopper is really doing inside its neural network, identifying its actual intent, and whether that intent is harmful, is out of the question.
The Build
According to the researchers: “Concerns about fairness, transparency, interpretability, and accountability are equivalent, have the same genesis, and must be addressed together, regardless of the mix of hardware, software, and data involved.”
While robots and A.I. are not exactly the same thing, their effects are closely intertwined. For example, if a facial-recognition A.I. used by a robotic cop is flawed by racial bias, then that robotic cop, for all intents and purposes, is also racist. So robots with A.I. have to be regulated as well. And what happens when A.I. can build A.I. directly, something Google proved was possible in May?
In conclusion, the way forward demands precise regulation and clear interpretation of these systems, both of which we currently lack. The researchers conclude, “The civil law resolution on robotics similarly struggles to define precise accountability mechanisms.” If these issues aren’t resolved, they will only worsen as A.I. and robots become more commonplace.