Thursday, 29 November 2018

A Deep Neural Network Sentence Level Classification Method with Context Information

Today we're looking at work done within the group that was reported at EMNLP 2018: "A Deep Neural Network Sentence Level Classification Method with Context Information", authored by Xingyi Song, Johann Petrak and Angus Roberts, all of the University of Sheffield.

Song, X., Petrak, J. & Roberts, A. A Deep Neural Network Sentence Level Classification Method with Context Information. in EMNLP 2018 – 2018 Conference on Empirical Methods in Natural Language Processing 00, 0-000 (2018).

Understanding complex bodies of text is a difficult task, especially when the context of a statement can greatly influence its meaning. While methods exist that examine the context surrounding a phrase, the authors present a new approach that makes use of much larger contexts. This allows for greater confidence in the results, especially when dealing with complicated subject matter. Medical records are one such area, in which complex judgements on appropriate treatments are made across several sentences. It is therefore vital to fully understand the context of each individual statement in order to collate meaning, accurately understand the sentiment of the entire body of text, and draw the right conclusion from it.

Although grounded in the medical domain, this new technique is more widely applicable. An evaluation in non-medical domains showed a solid improvement of over six percentage points over its nearest competitor, despite requiring 33% less training time.

But how does it work? At its core, this novel method analyses not only the target sentence but also an amount of text on either side of it. This context is encoded using an adapted Fixed-size Ordinally Forgetting Encoding (FOFE), turning a variable-length context into a fixed-length embedding. The encoded context is processed along with the target sentence, and the two representations are then concatenated and post-processed to produce an output.
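
For intuition, here is a minimal Python sketch of the standard FOFE recurrence (the paper adapts FOFE to encode sentence contexts; the use of dense token vectors and the value of the forgetting factor alpha below are illustrative assumptions, not the paper's settings):

    import numpy as np

    def fofe(token_vectors, alpha=0.9):
        # Fixed-size Ordinally Forgetting Encoding: fold a variable-length
        # sequence of vectors into one fixed-size vector, down-weighting
        # earlier tokens exponentially so that word order is preserved.
        z = np.zeros(token_vectors.shape[1])
        for e in token_vectors:
            z = alpha * z + e  # recurrence: z_t = alpha * z_{t-1} + e_t
        return z

    # Example: a three-token context with 4-dimensional embeddings
    # collapses to a single 4-dimensional context vector.
    context = np.random.rand(3, 4)
    print(fofe(context, alpha=0.9))

Because alpha is less than one, two contexts containing the same words in different orders produce different encodings, which is what distinguishes FOFE from a simple bag-of-words sum.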

The new technique was then evaluated against peer techniques. The results showed markedly improved performance compared to an LSTM-CNN method, despite training in almost the same amount of time. The new Context-LSTM-CNN technique even surpassed an L-LSTM-CNN method, despite a substantial reduction in required time.
Table: Average test accuracy and training time. Best values are marked in bold; standard deviations are given in parentheses.
In conclusion, a new technique is presented, Context-LSTM-CNN, that combines the strengths of LSTMs and CNNs with a lightweight context encoding algorithm, FOFE. The model shows a consistent improvement over both a non-context-based model and an LSTM context-encoded model on the sentence classification task.
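
To make the data flow concrete, here is a minimal Keras sketch of how such a model might be wired together. The layer sizes, the FOFE vector dimensionality and the output head are assumptions for illustration, not the configuration reported in the paper:

    from tensorflow.keras import layers, Model

    def build_context_lstm_cnn(vocab_size=20000, embed_dim=100,
                               sent_len=50, fofe_dim=100, n_classes=2):
        # Target sentence branch: embeddings -> LSTM -> CNN -> pooling.
        sent_in = layers.Input(shape=(sent_len,), name="target_sentence")
        x = layers.Embedding(vocab_size, embed_dim)(sent_in)
        x = layers.LSTM(128, return_sequences=True)(x)
        x = layers.Conv1D(100, 3, activation="relu")(x)
        x = layers.GlobalMaxPooling1D()(x)

        # Context branches: pre-computed FOFE vectors for the text to the
        # left and right of the target sentence, each passed through a
        # small dense layer.
        left_in = layers.Input(shape=(fofe_dim,), name="left_context_fofe")
        right_in = layers.Input(shape=(fofe_dim,), name="right_context_fofe")
        left = layers.Dense(64, activation="relu")(left_in)
        right = layers.Dense(64, activation="relu")(right_in)

        # Concatenate all three representations and post-process them
        # into a class prediction.
        merged = layers.concatenate([left, x, right])
        out = layers.Dense(n_classes, activation="softmax")(merged)
        return Model([sent_in, left_in, right_in], out)

Because the contexts enter as fixed-size FOFE vectors rather than token sequences, the expensive recurrent and convolutional processing is only paid for the target sentence, which is where the training-time saving over L-LSTM-CNN comes from.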

Thursday, 22 November 2018

Adapted TextRank for Term Extraction: A Generic Method of Improving Automatic Term Extraction Algorithms

Zhang, Z., Petrak, J. & Maynard, D. Adapted TextRank for Term Extraction: A Generic Method of Improving Automatic Term Extraction Algorithms. in SEMANTiCS 2018 – 14th International Conference on Semantic Systems 00, 0-000 (2018).

This work has been carried out in the context of the EU KNOWMAK project, where we're developing tools for multi-topic classification of text against an ontology, with the aim of mapping the state of European research output in key technologies.

Automatic Term Extraction (ATE) is a fundamental technique used in computational linguistics for recognising terms in text. Processing the terms extracted from a text is a key step in understanding its content. There are many different ATE methods, but they all tend to work well only in one specific domain. In other words, there is no universal method that produces consistently good results, so we have to choose an appropriate method for the domain being targeted.

In this work, we have developed a novel method for ATE which addresses two major limitations: the fact that no single ATE method consistently performs well across all domains, and the fact that the majority of ATE methods are unsupervised. Our generic method, AdaText, improves the accuracy of existing ATE methods by revising the TextRank algorithm, using existing lexical resources to support it.
After being given a target text, AdaText:
  1. Selects a subset of words based on their semantic relatedness to a set of seed words or phrases relevant to the domain, but not necessarily representative of the terms within the target text.
  2. Applies an adapted TextRank algorithm to build a graph of these words, and computes a text-level TextRank score for each selected word.
  3. Uses these scores to revise the score of each term candidate previously computed by an ATE method (a sketch of the full pipeline follows below).
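
As a rough illustration of the three steps above, here is a hedged Python sketch that uses cosine similarity over word vectors for step 1, networkx's PageRank as the adapted TextRank of step 2, and a simple multiplicative revision for step 3. The function names, the co-occurrence window and the score-combination formula are assumptions, not the paper's exact choices:

    import networkx as nx
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def adatext_rescore(words, word_vectors, seed_vectors, ate_scores,
                        sim_threshold=0.5, window=2, weight=0.5):
        # Step 1: keep only words semantically related to the seed lexicon.
        selected = {w for w in set(words) if w in word_vectors and
                    max(cosine(word_vectors[w], s) for s in seed_vectors)
                    >= sim_threshold}

        # Step 2: build a co-occurrence graph over the selected words and
        # compute a text-level TextRank score for each of them.
        g = nx.Graph()
        g.add_nodes_from(selected)
        for i, w in enumerate(words):
            for v in words[i + 1:i + 1 + window]:
                if w in g and v in g and w != v:
                    g.add_edge(w, v)
        textrank = nx.pagerank(g)

        # Step 3: revise each candidate term's ATE score using the mean
        # TextRank score of its constituent words.
        revised = {}
        for term, base_score in ate_scores.items():
            tr = np.mean([textrank.get(w, 0.0) for w in term.split()])
            revised[term] = base_score * (1 + weight * tr)
        return revised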
This technique was trialled using a variety of parameters (such as the threshold of semantic similarity used to select words in step 1) over two distinct datasets: GENIA and ACLv2, comprising Medline abstracts and abstracts from ACL respectively. We also tested it with a wide variety of state-of-the-art ATE methods, including modified TFIDF, CValue, Basic, RAKE, Weirdness, LinkProbability, X2, GlossEx and PositiveUnlabeled.

Figure: Results by AdaText compared against the base ATE methods.

The figures show a sample of performances on the two datasets using different ATE techniques. The base performance of each ATE method is represented by the black horizontal line. The horizontal axis represents the semantic similarity threshold used in step 1. The vertical axis shows the average P@K over all five values of K considered.

This new generic combination approach can consistently improve the performance of the ATE method by 25 points, which is a significant increase. However, there is still room for improvement. In future work, we aim to optimise the selection of words from the TextRank graph, to expand TextRank to a graph of both words and phrases, and to explore how the size and source of the seed lexicon affect the performance of AdaText.