If you want to sort through vast numbers of digital images, or classify mind-numbing quantities of written information by topic, you're best off relying on artificial intelligence (AI) systems called neural networks, which look for patterns in data and train themselves to make predictions based on their observations.
But when it comes to high-stakes fields such as medical information, where the cost of a mistake or a wrong prediction is potentially life-threatening, we humans are sometimes reluctant to trust the answers that the programs come up with. That's because neural nets use machine learning, in which they train themselves to figure things out, and our puny meat brains can't see the process.
While machine learning methods "are flexible and typically result in accurate predictions, they reveal little in human-understandable terms about why a particular prediction is made," says Tommi Jaakkola, a professor of electrical engineering and computer science at Massachusetts Institute of Technology, via email.
If you're a cancer patient trying to pick treatment options based on predictions of how your disease might progress, or an investor trying to figure out what to do with your retirement savings, blindly trusting a machine can be a little scary — especially since we've taught the machines to make decisions, but we don't have a good way of observing exactly how they're making them.
But have no fear. In a new scientific paper, Jaakkola and other MIT researchers have developed a way to check the answers that neural nets come up with. Think of it as the machine-learning equivalent of writing out your math problems on a chalkboard to show your work.
As an MIT press release details, AI neural networks loosely mimic the structure of the human brain. They're composed of many processing nodes that, like our neurons, join forces and combine their computational power to tackle problems. In the process, they engage in what researchers call "deep learning," passing training data from node to node and relating it to whatever classification task the network is trying to learn. The results are continuously refined, much the way humans learn through trial and error over time.
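If you're curious what that trial-and-error loop looks like in code, here is a minimal sketch in Python using the PyTorch library. The network, the random data and every number in it are made-up illustrations, not anything from the MIT work:

```python
# A minimal sketch (not the MIT system) of the idea described above:
# layers of nodes pass data forward, and the weights are nudged after
# each pass so the network's guesses improve by trial and error.
# Assumes PyTorch is installed; the data here is random and purely illustrative.
import torch
import torch.nn as nn

# Toy "training data": 100 examples with 10 features each, two classes.
features = torch.randn(100, 10)
labels = torch.randint(0, 2, (100,))

# Two layers of processing nodes, loosely analogous to connected neurons.
net = nn.Sequential(
    nn.Linear(10, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):
    optimizer.zero_grad()
    predictions = net(features)          # pass data through the nodes
    loss = loss_fn(predictions, labels)  # measure how wrong the guesses are
    loss.backward()                      # work out which weights to adjust
    optimizer.step()                     # adjust them slightly and try again
```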
The big problem is that even computer scientists who program the networks can't really watch what's going on with the nodes, which has made it tough to sort out how computers actually make their decisions.
"We do not attempt to explain the internal workings of a complex model," Jaakkola explains. "Instead, we force the model to operate in a manner that enables a human to easily verify whether the prediction was made on the right basis."
"Our method learns to generate a rationale for each prediction. A rationale is a concise piece of text, easy for a human to check, that alone suffices to make the same prediction. To achieve this, we divided the overall model architecture into two separable components — generator and encoder. The generator selects a rationale — such as a piece of text — and passes it on to the encoder to make a prediction. The combination is learned to work together as a predictor."
"Thus, even though our generator and encoder are themselves complex deep learning methods, the combined model is forced to make its prediction in a manner that is directly verifiable since the prediction is based on the selected rationale," writes Jaakkola.
In their paper, the scientists had some fun by using their system to classify reviews from a beer aficionado website, based on the brews' attributes such as aroma, palate and appearance. "The beer review dataset already had annotated sentences pertaining to specific aspects of the products so we could directly compare automatically generated rationales to human selections," says Jaakkola. In the experiment, they found that the neural net agreed with human annotations between 80 and 96 percent of the time, depending upon how specific the characteristic was.
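The comparison itself is easy to picture: for each review, check how much of the text the model selected falls inside the sentences humans marked for that attribute. A rough sketch of that kind of agreement score, with invented positions rather than the paper's actual evaluation code:

```python
# Rough sketch of measuring rationale/annotation agreement: the fraction of
# word positions the model selected that land inside a human-annotated span.
# The spans and positions are invented examples, not data from the study.

def precision(selected_positions, annotated_spans):
    """Share of selected word positions that fall inside any annotated span."""
    if not selected_positions:
        return 0.0
    hits = sum(
        any(start <= pos < end for start, end in annotated_spans)
        for pos in selected_positions
    )
    return hits / len(selected_positions)

# Model picked words 3-7; a human marked words 2-9 as the "aroma" sentence.
print(precision([3, 4, 5, 6, 7], [(2, 9)]))  # 1.0 -> full agreement
```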