Machine Learning Systems Called Neural Networks Perform Tasks by Analyzing Huge Volumes of Data


Neural networks learn to carry out certain tasks by analyzing large amounts of data presented to them. These machine learning systems continually learn and readjust in order to carry out the task set before them. Understanding how neural networks work helps researchers develop better applications and uses for them.


At the 2017 Conference on Empirical Methods in Natural Language Processing earlier this month, MIT researchers demonstrated a new general-purpose technique for making sense of neural networks that carry out natural language processing tasks, in which they attempt to extract information from text written in ordinary language, as opposed to a structured language such as a database-query language.

The new technique works with any system that takes text as input and produces strings of symbols as output, such as an automatic translator, and it works without needing access to any underlying software. Tommi Jaakkola is a professor of electrical engineering and computer science at MIT and one of the authors of the paper. He says, “I can’t just do a simple randomization. And what you are predicting is now a more complex object, like a sentence, so what does it mean to give an explanation?”
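To make that black-box assumption concrete, here is a minimal Python sketch of the kind of interface the technique relies on: a function from input text to output symbols, with nothing about the system's internals exposed. The `translate` function and its tiny lookup table are purely hypothetical stand-ins for illustration, not anything from the MIT paper.

```python
from typing import Callable, List

# The only assumption made about the system under study: it can be called as a
# function from text to symbols, with no access to weights, gradients, or source.
BlackBox = Callable[[str], List[str]]

def translate(sentence: str) -> List[str]:
    """Stand-in for an opaque machine-translation system (illustrative only)."""
    lookup = {"the": "le", "cat": "chat", "sleeps": "dort"}
    return [lookup.get(word, word) for word in sentence.lower().split()]

black_box: BlackBox = translate
print(black_box("The cat sleeps"))  # ['le', 'chat', 'dort']
```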

As part of the research, Jaakkola and colleague David Alvarez-Melis, an MIT graduate student in electrical engineering and computer science and first author on the paper, used a neural net to generate test sentences with which to probe black-box neural nets. The duo began by training the network to compress and decompress natural sentences. As training continues, the encoder and decoder are evaluated simultaneously according to how closely the decoder’s output matches the encoder’s input.
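The compress-and-decompress step can be pictured as a sequence autoencoder: an encoder squeezes a sentence into a fixed-size representation, and a decoder tries to reconstruct the original words from it. The PyTorch sketch below is only an illustration of that idea; the toy vocabulary, layer sizes, and model class are invented for the example, and the model used in the actual research is considerably more elaborate.

```python
import torch
import torch.nn as nn

# Toy word-level vocabulary (hypothetical, for illustration only).
VOCAB = ["<pad>", "<sos>", "<eos>", "the", "cat", "sat", "on", "mat"]
stoi = {w: i for i, w in enumerate(VOCAB)}
EMB, HID = 16, 32

class SentenceAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)  # compresses the sentence
        self.decoder = nn.GRU(EMB, HID, batch_first=True)  # reconstructs it
        self.out = nn.Linear(HID, len(VOCAB))

    def forward(self, tokens):
        _, code = self.encoder(self.embed(tokens))   # fixed-size sentence code
        dec_in = self.embed(tokens[:, :-1])           # teacher forcing: shifted input
        dec, _ = self.decoder(dec_in, code)
        return self.out(dec)                          # per-position word logits

sentence = torch.tensor([[stoi[w] for w in ["<sos>", "the", "cat", "sat", "<eos>"]]])
model = SentenceAutoencoder()
logits = model(sentence)
target = sentence[:, 1:]
# Training objective: how closely the decoder's output matches the encoder's input.
loss = nn.CrossEntropyLoss()(logits.reshape(-1, len(VOCAB)), target.reshape(-1))
loss.backward()
```

Once trained, the decoder's per-position word probabilities are what supply the alternative words discussed in the next paragraph.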


Neural nets work on probabilities. For example, an object-recognition system fed an image of a cat might report a 75 percent probability that the image shows a cat and a 25 percent probability that it shows a dog. Along those same lines, Jaakkola and Alvarez-Melis’ sentence-compressing network produces alternative words for each word in a decoded sentence, along with the probability that each is correct. Once the system has generated a list of closely related sentences, they are fed to a black-box natural language processor, which allows the researchers to analyze which inputs have an effect on which outputs.
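The sketch below illustrates that perturb-and-probe idea under stated assumptions: the per-position alternatives and their probabilities are hard-coded stand-ins for what a trained decoder would produce, the `black_box` function is a toy dictionary "translator", and simple conditional frequencies stand in for the more careful dependency analysis the researchers actually perform.

```python
import random
from collections import defaultdict

# Hypothetical per-position alternatives with probabilities (decoder stand-in).
alternatives = [
    [("the", 0.9), ("a", 0.1)],
    [("cat", 0.7), ("kitten", 0.3)],
    [("sleeps", 0.8), ("naps", 0.2)],
]

def sample_sentence():
    """Sample one closely related sentence by picking a word per position."""
    return [random.choices([w for w, _ in pos], [p for _, p in pos])[0]
            for pos in alternatives]

def black_box(words):
    """Stand-in for the opaque system being explained (a toy 'translator')."""
    lookup = {"the": "le", "a": "un", "cat": "chat", "kitten": "chaton",
              "sleeps": "dort", "naps": "somnole"}
    return [lookup[w] for w in words]

input_counts = defaultdict(int)
pair_counts = defaultdict(int)
for _ in range(1000):
    inp = sample_sentence()
    out = black_box(inp)
    for i_word in inp:
        input_counts[i_word] += 1
        for o_sym in out:
            pair_counts[(i_word, o_sym)] += 1

# Crude dependency score: how often an output symbol appears given an input word.
for pair in [("cat", "chat"), ("kitten", "chat"), ("cat", "dort")]:
    print(pair, pair_counts[pair] / max(1, input_counts[pair[0]]))
```

In this toy setup, "chat" appears whenever "cat" is sampled and never when "kitten" is, so the scores point to the input-output dependency one would expect; the actual research uses a more principled estimator than raw co-occurrence.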

During the research, the pair applied the technique to three different types of natural language processing system. The first inferred the way in which words are pronounced; the second was a set of translators; and the third was a simple computer dialogue system that tried to provide adequate responses to questions or remarks. Looking at the results, it was clear that the translation systems had strong dependencies on individual words in both the input and output sentences. More surprising, however, was the identification of gender biases in the texts on which the machine translation systems were trained. The dialogue system, it turned out, was too small to take advantage of its training set.


“The other experiment we do is in flawed systems,” says Alvarez-Melis. “If you have a black-box model that is not doing a good job, can you first use this kind of approach to identify problems?  A motivating application of this kind of interpretability is to fix systems, to improve systems, by understanding what they’re getting wrong and why.”
