When to Trust Robots with Decision-Making


Human beings increasingly rely on robots driven by intelligent algorithms for everyday decision-making. These algorithms have become remarkably capable thanks to the vast volume and variety of data now available. However, certain decisions should still be made by the human brain rather than by a robot. Given the high stakes involved, it is difficult to draw the line, because there is no perfect framework for making this "decision."

In a Harvard Business Review article, Vasant Dhar, a professor of information systems at New York University's Stern School of Business, writes: "I propose a risk-oriented framework for deciding when and how to allocate decision problems between humans and machine-based decision makers. I've developed this framework based on the experiences that my collaborators and I have had implementing prediction systems over the last 25 years in domains like finance, healthcare, education, and sports."





The framework rests on two dimensions: predictability and cost per error. The two are independent of each other, and each can vary from case to case and decision to decision. Changes in either predictability or cost per error can move a decision problem into or out of the "robot zone."
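The two dimensions can be sketched as a tiny data structure. This is a hypothetical illustration, not Dhar's notation: the field names, the 0-to-1 scales, and the example scores are all assumptions made here for concreteness.

```python
from dataclasses import dataclass


# Hypothetical sketch of Dhar's two dimensions. The 0-1 scales and the
# example scores below are illustrative assumptions, not values from the
# original framework.
@dataclass
class DecisionProblem:
    name: str
    predictability: float   # 0.0 = coin toss, 1.0 = fully deterministic
    cost_per_error: float   # relative cost of one wrong decision


problems = [
    DecisionProblem("long-term trading", predictability=0.1, cost_per_error=0.6),
    DecisionProblem("spam filtering", predictability=0.7, cost_per_error=0.1),
    DecisionProblem("driverless car", predictability=0.9, cost_per_error=0.95),
]

for p in problems:
    print(f"{p.name}: predictability={p.predictability}, cost/error={p.cost_per_error}")
```

Because the dimensions are independent, a problem can score high on one and low on the other, which is exactly why both axes are needed.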


[Figure: the predictability spectrum of decision problems, from random (left) to deterministic (right)]


The above illustration shows how predictable various decision problems are for robots, given currently available machines and artificial intelligence (AI) technology. The left end of the illustration represents the most random situations, such as tossing a coin, while the right end represents the most deterministic decision problems.

The diagram makes clear where the contemporary technological world stands. Moving from left to right, there is trading, both long-term and short-term, which has always been largely random. "As the prediction horizon becomes shorter, however, predictability increases, albeit only marginally," says Dhar. Next come credit card fraud detection and spam filtering, which sit in a gray area of greater but still imperfect predictability. Finally, at the far end of the scale are the most predictable situations: fighter drones, cataract surgery, and driverless cars operate in highly structured settings that can be handled with established knowledge in the respective fields.

Furthermore, whether to leave a decision to a robot is not as simple as it looks. Machines can still be erroneous, but advances in algorithms and predictive capability will keep shifting decisions toward the right of the illustration. To judge correctly, one must also weigh the cost of an error, and hence comes the second dimension: cost per error.


[Figure: the DA-MAP, plotting predictability against cost per error]


In the two-dimensional illustration, called the DA-MAP, predictability is plotted on the horizontal axis and cost per error on the vertical axis. The DA-MAP can be used to answer questions about automated decision-making. Consider two examples from the predictability illustration: spam filtering and driverless cars. An error in spam filtering, caused perhaps by spammers' evolving techniques, does little damage compared with an error made by the machine driving a driverless car.





Dhar says that the DA-MAP suggests possible "automation frontiers" separating decision problems appropriate for humans from those appropriate for machines. As the cost per error increases, the predictability required for automation should also increase.

The boundary is represented by an upward-sloping line, the automation frontier. A convex frontier represents a more stringent automation barrier than the linear one in the figure. Many problems sit below the frontier, where the cost per error is low relative to their predictability and automation is acceptable; by contrast, problems above the frontier carry a cost per error too high for their predictability.
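The frontier amounts to a simple rule: a problem is a candidate for automation only if its cost per error falls at or below what the frontier tolerates at that level of predictability. The sketch below is an assumption-laden illustration; the specific frontier shapes (identity and a squared curve) and the 0-to-1 scores are invented here, not taken from Dhar's article.

```python
from typing import Callable


def linear_frontier(predictability: float) -> float:
    # Maximum tolerable cost per error grows in proportion to predictability.
    return predictability


def convex_frontier(predictability: float) -> float:
    # A convex frontier is stricter: on a 0-1 scale, p**2 <= p, so it
    # tolerates almost no cost per error until predictability is very high.
    return predictability ** 2


def automatable(predictability: float,
                cost_per_error: float,
                frontier: Callable[[float], float]) -> bool:
    # A problem at or below the frontier is a candidate for automation.
    return cost_per_error <= frontier(predictability)


# Hypothetical scores: spam filtering is fairly predictable and errors
# are cheap; a driverless car is highly predictable but errors are costly.
print(automatable(0.7, 0.10, linear_frontier))   # True
print(automatable(0.7, 0.10, convex_frontier))   # True (0.10 <= 0.49)
print(automatable(0.9, 0.95, convex_frontier))   # False (0.95 > 0.81)
```

Note how the convex frontier lies below the linear one everywhere on the 0-1 scale, which is exactly what makes it the more stringent barrier.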

Finally, as Vasant Dhar says, "Humans extend common sense intuitively to bizarre or novel situations, but in these cases there remains significant uncertainty about what the machine has learned and how it will act." For such problems, it will remain debatable whether to leave them to robots.

Author: @SujanaOruganti
