Monday, 20 August 2018

Deep Learning in GATE

Few can have failed to notice the rise of deep learning over the last six or seven years, and its role in our emergence from the AI winter. Thanks in part to the increased speed offered by GPUs*, neural net approaches came into their own and out from under the shadow of the support vector machine, offering more scope than SVMs and other previously popular methods, such as random forests and CRFs, for continued improvement as training data volumes increase. Natural language processing has traditionally been a multi-step endeavour, perhaps beginning with tokenization and parsing and working up to semantic processing such as question answering. In addition to being labour-intensive, this approach is also limiting, as each step can see only the output of the step before it, and thus throws away potentially valuable information from earlier stages of processing. Deep learning offers the possibility of overcoming these limitations by bringing a much greater number of parameters (and hence much greater flexibility) into play. Deep neural nets (DNNs) may learn end-to-end solutions, starting with raw data and producing sophisticated output. Furthermore, they can encode much more complex dependencies than those we have seen in less parameterizable approaches--in other words, much more elaborate reasoning. And while we step back from the need to break involved problems down into pieces ourselves, a promising line of work finds that DNN "skills" are also transferable--models may, for example, be pre-trained on generic data, providing a basic language understanding that can then be put to use in other, specialized contexts (multi-task learning).

For these reasons, deep learning is widely seen as key to continuing progress on a wide range of artificial intelligence tasks, including natural language processing, so of course it is of great interest to us here in the GATE team. Classic GATE tasks such as entity recognition and sentence classification could be advanced by an approach with greater potential to learn a discriminative model, given sufficient training data. And by supporting the substitution of words with "embeddings" (DNN-derived vectors that capture relationships between words), trained on readily available unlabelled general or domain-specific data, we can bring some of the benefits of deep learning even to cases where training data are meagre by deep learning standards. Deep learning is therefore likely to be of benefit in all but the most trivial tasks, as long as you have the skills and a reasonable amount of data.
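To make the idea of embeddings concrete, here is a minimal sketch using made-up four-dimensional vectors (real embeddings such as word2vec or GloVe vectors typically have 100-300 dimensions and are trained on large unlabelled corpora; the words and numbers below are purely illustrative):

```python
import math

# Toy four-dimensional "embeddings", invented for illustration only; real
# embeddings are learned automatically from large unlabelled corpora.
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.0],
    "queen": [0.9, 0.1, 0.8, 0.0],
    "man":   [0.1, 0.9, 0.0, 0.1],
    "woman": [0.1, 0.0, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 for related words, near 0.0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Words that occur in similar contexts end up close together in the vector
# space, so a learner can generalize from one to the other even when the
# task-specific training data mention only one of them.
print(cosine(embeddings["king"], embeddings["queen"]))   # relatively high
print(cosine(embeddings["king"], embeddings["woman"]))   # lower
```

This is why embeddings help when labelled data are scarce: the similarity structure is learned from unlabelled text, for free.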

The Learning Framework is our ongoing project bringing current machine learning technologies to GATE, enabling users to leverage GATE's ecosystem of text processing offerings to create features to train learners, and to include these learners in text processing pipelines. The guiding vision for the Learning Framework has always been to offer an accessible interface that enables GATE users to get up and running quickly with machine learning, whilst at the same time supporting the most current and interesting of technologies. When it comes to deep learning, meeting these twin objectives is a little more challenging, but we have stepped up to the plate!

Deep learning framework in the GATE GUI

Previous machine learning algorithms would work their magic with comparatively little in the way of tweaking required. Deep learning, however, is an entirely different beast in this respect. In fact, it's more like an entire zoo! As discussed above, the advantage of DNNs is their massive flexibility, but this seriously stretches GATE's previous assumptions about how machine learning works. An integration needs to support the design of an architecture (a "shape" of neural net) and the tuning of many parameters, including dropout, optimization strategy, learning rate, momentum, and many more. All of these factors are critical to obtaining good performance. The integration is still under (very) intensive development, but it is already possible to get something running relatively quickly with deep learning in GATE. Here are some current highlights:
  • Support for two of the most widely used deep learning frameworks, PyTorch and Keras, both Python-based;
  • Support for both Linux and macOS (Windows is not yet supported);
  • A range of template architectures, which may produce acceptable results out of the box (though in many cases it will be necessary for the user to adapt the architecture, the parameters of the architecture, or other aspects of the DNN solution);
  • The possibility to work with an initial GATE-created model both inside and outside of GATE.
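To give a flavour of what two of those tuning parameters actually do, here is a small, framework-free sketch (the numbers and function names are illustrative, not the Learning Framework's defaults): gradient descent with a learning rate and a momentum term on a one-dimensional toy loss, plus "inverted" dropout, which zeroes each value with some probability at training time.

```python
import random

# Gradient descent with momentum on the toy loss (w - 3)^2, whose gradient
# is 2 * (w - 3). The learning rate scales each step; momentum accumulates
# past gradients so the optimizer keeps moving in a consistent direction.
def sgd_momentum(grad, w0, lr=0.1, momentum=0.9, steps=200):
    w, velocity = w0, 0.0
    for _ in range(steps):
        velocity = momentum * velocity - lr * grad(w)
        w += velocity
    return w

w = sgd_momentum(lambda w: 2.0 * (w - 3.0), w0=0.0)
print(w)  # converges close to the minimum at 3.0

# "Inverted" dropout: at training time each value is zeroed with
# probability p, and the survivors are rescaled by 1/(1-p) so that the
# expected activation matches what the net sees at test time.
def dropout(values, p=0.5, rng=random.Random(0)):
    return [0.0 if rng.random() < p else v / (1.0 - p) for v in values]

print(dropout([1.0, 2.0, 3.0, 4.0]))
```

Badly chosen values for any of these (a learning rate that is too high, too much or too little dropout) can make the difference between a state-of-the-art model and one that never converges, which is why exposing them in the integration matters.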
We encourage anyone who is interested to give it a try and to talk to us about it. There will always be more to add (current challenges include dropout, gradient clipping, L1/L2 weight regularization, attention, modified weight initialization, char-augmented LSTMs and LSTM-CRF architectures, to name a few), but much is achievable already. This is one of relatively few efforts globally to sift the essence out of this highly active research field and transform it into something relatively high-level and generalizable across a range of NLP tasks, making state-of-the-art technologies accessible to non-specialists. There's some documentation available here.

At the same time, we've been applying deep learning in our research in several ways. In a forthcoming EMNLP paper, team member Xingyi Song and co-authors use the fixed-size ordinally-forgetting encoding (FOFE) approach to combine LSTM and CNN neural net architectures in a more computationally efficient way than previously possible, in order to make better use of context in sentence classification tasks. Together with researchers at KCL and the South London and Maudsley NHS Trust, he has also demonstrated the value of this technology in the context of detecting suicidal ideation in medical records.
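As a rough illustration of the FOFE idea itself (not of the paper's LSTM/CNN combination): each word is a one-hot vector e_t over the vocabulary, and a whole variable-length sequence is folded into one fixed-size vector by the recurrence z_t = α·z_{t-1} + e_t, where the forgetting factor 0 < α < 1 down-weights earlier words (for α < 0.5 the encoding is provably unique per sequence). The toy vocabulary and α below are made up:

```python
# FOFE: encode a word sequence as a single fixed-size vector in which
# recent words contribute more than distant ones, while word order is
# still recoverable.
def fofe(sentence, vocab, alpha=0.4):
    z = [0.0] * len(vocab)
    for word in sentence:
        z = [alpha * zi for zi in z]   # decay the contribution of earlier words
        z[vocab.index(word)] += 1.0    # add the current word's one-hot vector
    return z

vocab = ["the", "cat", "sat", "mat", "on"]
print(fofe(["the", "cat", "sat"], vocab))  # "sat" weighs 1.0, "cat" 0.4, "the" 0.16
```

Because the result has the same size regardless of sentence length, it can be fed to fixed-input layers far more cheaply than running a recurrent net over every position.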

Furthermore, we have successfully used LSTMs for veracity verification of rumours spread on social media such as Twitter. Our approach makes use of only the tweet content, which it passes through LSTM units that learn to distinguish between true, false and unverifiable rumours. The unique part of our approach is that, before passing the tweet to the LSTM layer, it first looks within the tweet for recurring information of the kind typically used to spread rumours, and adjusts the input accordingly--words carrying useful information are kept as they are, while others are downgraded in terms of their contribution. This is achieved through an attention layer. We evaluated our approach on the RumourEval 2017 test data and achieved over 60% accuracy, which is currently the state-of-the-art performance for this task.
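The re-weighting step can be sketched as follows. This is a simplified, self-contained illustration only: in the real model the scoring vector is learned jointly with the LSTM, and the inputs are tweet-word embeddings rather than the toy vectors here.

```python
import math

def softmax(scores):
    # Subtract the max for numerical stability before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(token_vectors, scoring_vector):
    # Score each token by a dot product with a (normally learned) scoring
    # vector, normalize the scores with softmax, then rescale each token
    # vector by its weight: informative words keep most of their signal,
    # the rest are downgraded before reaching the LSTM layer.
    scores = [sum(t * s for t, s in zip(tok, scoring_vector))
              for tok in token_vectors]
    weights = softmax(scores)
    return weights, [[w * x for x in tok]
                     for w, tok in zip(weights, token_vectors)]

# Toy 3-dimensional vectors for a three-word tweet; the second token scores
# highest, so it dominates the re-weighted sequence fed to the LSTM.
tokens = [[0.1, 0.0, 0.2], [0.9, 0.8, 0.7], [0.2, 0.1, 0.0]]
weights, reweighted = attend(tokens, scoring_vector=[1.0, 1.0, 1.0])
print(weights)
```

Since the weights sum to one, the attention layer acts as a soft filter rather than a hard cut: no word is discarded outright, but uninformative ones contribute very little.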

*Graphics Processing Units; technology driven by the demands of computer gamers that has been used to speed up deep learning approaches by as much as 250 times compared with CPUs.

Title artwork from https://www.deviantart.com/redwolf518stock
