Monday 17 December 2018

Open Call for SoBigData-funded Transnational Access!

The SoBigData project invites researchers and professionals to apply to participate in Short-Term Scientific Missions (STSMs) to carry forward their own big data projects. The Natural Language Processing (NLP) group at the University of Sheffield is taking part in this initiative and welcomes applications.

Funding is available for STSMs (2 weeks to 2 months) of up to 4500 euros, covering daily subsistence, accommodation and flights. These bursaries are awarded on a competitive basis.

Research areas are varied but include studies involving societal debate, online misinformation and rumour analysis. A key topic is analysis of social media and newspaper articles to understand the state of public debate in terms of what is being discussed, how it is being discussed, who is discussing it, and how this discussion is being influenced. The effects of online disinformation campaigns (especially hyper-partisan content) and the use of bot accounts to propagate this disinformation are also of particular interest.

Applications are welcomed for visits between 1 November 2018 and 31 July 2019!

For specific details, eligibility criteria, and to apply, click here!

Thursday 29 November 2018

A Deep Neural Network Sentence Level Classification Method with Context Information

Today we're looking at the work done within the group which was reported in EMNLP2018: "A Deep Neural Network Sentence Level Classification Method with Context Information", authored by Xingyi Song, Johann Petrak and Angus Roberts, all of the University of Sheffield.

Song, X., Petrak, J. & Roberts, A. A Deep Neural Network Sentence Level Classification Method with Context Information. in EMNLP2018 – 2018 Conference on Empirical Methods in Natural Language Processing 00, 0-000 (2018).

Understanding complex bodies of text is a difficult task, especially when the context of a statement can greatly influence its meaning. While methods exist that examine the context surrounding a phrase, the authors present a new approach that makes use of much larger contexts than these, allowing greater confidence in the results, especially when dealing with complicated subject matter. Medical records are one such area, in which complex judgements about appropriate treatments are made across several sentences. It is therefore vital to fully understand the context of each individual statement in order to collate meaning and accurately understand the sentiment of the entire body of text and the conclusion that should be drawn from it.

Although grounded in the medical domain, this new technique can be demonstrated to be more widely applicable. An evaluation of the technique in non-medical domains showed a solid improvement of more than six percentage points over its nearest competitor, despite requiring 33% less training time.

But how does it work? At its core, this novel method analyses not only the target sentence but also an amount of text on either side of it. This context is encoded using an adapted Fixed-size Ordinally Forgetting Encoding (FOFE), turning a variable-length context into a fixed-length embedding without a crippling amount of additional computation. The context embeddings are processed along with the target sentence, then concatenated and post-processed to produce the output.
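To make the encoding concrete: FOFE represents a sequence by the recursion z_t = α·z_{t-1} + e_t, where e_t is the one-hot vector of the t-th word and α ∈ (0,1) is a "forgetting factor", so earlier words contribute exponentially less. Here is a minimal Python sketch; the value of α and the function interface are illustrative assumptions, not the paper's exact configuration.

    import numpy as np

    def fofe_encode(token_ids, vocab_size, alpha=0.5):
        # z_t = alpha * z_{t-1} + e_t: a fixed-size vector for any sequence length.
        z = np.zeros(vocab_size)
        for tok in token_ids:
            z = alpha * z          # older words are exponentially "forgotten"
            z[tok] += 1.0          # add the one-hot vector of the current word
        return z

    # "A B C" and "C B A" get different encodings: word order is preserved.
    print(fofe_encode([0, 1, 2], vocab_size=3))   # [0.25  0.5  1.0]
    print(fofe_encode([2, 1, 0], vocab_size=3))   # [1.0  0.5  0.25]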

The new technique was then evaluated against peer techniques. The results showed markedly improved performance compared to LSTM-CNN methods, despite training taking almost the same amount of time. The performance of the new Context-LSTM-CNN technique even surpassed an L-LSTM-CNN method, despite a substantial reduction in the required training time.
Average test accuracy and training time. Best values are marked in bold; standard deviations are given in parentheses.
In conclusion, a new technique is presented, Context-LSTM-CNN, that combines the strengths of LSTM and CNN with the lightweight context encoding algorithm FOFE. The model shows a consistent improvement over both a non-context-based model and an LSTM-based context-encoding model on the sentence classification task.

Thursday 22 November 2018

Adapted TextRank for Term Extraction: A Generic Method of Improving Automatic Term Extraction Algorithms

Zhang, Z., Petrak, J. & Maynard, D. Adapted TextRank for Term Extraction: A Generic Method of Improving Automatic Term Extraction Algorithms. in SEMANTiCS 2018 – 14th International Conference on Semantic Systems 00, 0-000 (2018).

This work has been carried out in the context of the EU KNOWMAK project, where we're developing tools for multi-topic classification of text against an ontology, in order to attempt to map the state of European research output in key technologies.

Automatic Term Extraction (ATE) is a fundamental technique used in computational linguistics for recognising terms in text. Processing the collected terms in a text is a key step in understanding the content of the text. There are many different ATE methods, but they all tend to work well only in one specific domain. In other words, there is no universal method which produces consistently good results, so we have to choose an appropriate method for the domain being targeted.

In this work, we have developed a novel method for ATE which addresses two major limitations: the fact that no single ATE method consistently performs well across all domains, and the fact that the majority of ATE methods are unsupervised. Our generic method, AdaText, improves the accuracy of existing ATE methods by revising the TextRank algorithm, using existing lexical resources to support it.
After being given a target text, AdaText:
  1. Selects a subset of words based on their semantic relatedness to a set of seed words or phrases relevant to the domain, but not necessarily representative of the terms within the target text.
  2. Applies an adapted TextRank algorithm to create a graph for these words, and computes a text-level TextRank score for each selected word.
  3. Uses these scores to revise the score of each term candidate previously computed by an ATE method (a rough sketch of the whole process follows this list).
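As a rough Python sketch of these three steps, using networkx's PageRank as the TextRank core: the similarity function, co-occurrence window and the rule for combining scores are illustrative assumptions, not the exact configuration from the paper.

    import networkx as nx

    def adatext_rescore(tokens, seed_similarity, base_scores,
                        sim_threshold=0.6, window=2):
        # Step 1: keep only words semantically related to the domain seed lexicon.
        selected = {t for t in set(tokens) if seed_similarity(t) >= sim_threshold}
        # Step 2: build a co-occurrence graph over those words and run TextRank.
        g = nx.Graph()
        g.add_nodes_from(selected)
        for i, tok in enumerate(tokens):
            if tok in selected:
                for other in tokens[i + 1:i + 1 + window]:
                    if other in selected and other != tok:
                        g.add_edge(tok, other)
        textrank = nx.pagerank(g) if g.number_of_edges() else {}
        # Step 3: revise each candidate term's base ATE score with the word scores.
        revised = {}
        for term, base in base_scores.items():
            words = term.split()
            boost = sum(textrank.get(w, 0.0) for w in words) / len(words)
            revised[term] = base * (1.0 + boost)   # illustrative combination rule
        return revised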
This technique was trialled using a variety of parameters (such as the threshold of semantic similarity used to select words, as described in step one) over two distinct datasets (GENIA and ACLv2, comprising Medline abstracts and abstracts from ACL respectively). We also tested it with a wide variety of state-of-the-art ATE methods, including modified TFIDF, CValue, Basic, RAKE, Weirdness, LinkProbability, X2, GlossEx and PositiveUnlabeled.




The figures show a sample of performances on the different datasets and using different ATE techniques. The base performance of each ATE method is represented by the black horizontal line. The horizontal axis represents the semantic similarity threshold used in step 1. The vertical axis shows the average P@K for all five values of K considered.

This new generic combination approach can consistently improve the performance of the ATE method by 25 points, which is a significant increase. However, there is still room for improvement. In future work, we aim to optimise the selection of words from the TextRank graph, to expand TextRank to a graph of both words and phrases, and to explore how the size and source of the seed lexicon affect the performance of AdaText.



Tuesday 11 September 2018

Visualisations of Political Hate Speech on Twitter

Recently there's been some media interest in our work on abuse toward politicians. We performed an analysis of abusive replies on Twitter sent to MPs and candidates in the months leading up to the 2015 and 2017 UK elections, disaggregated by gender, political party, year, and geographical area, amongst other things. We've posted about this previously, and there's also a more technical publication here. In this post, we wanted to highlight our interactive visualizations of the data, which were created by Mark Greenwood. The thumbnails below give a flavour of them, but click through to access the interactive versions.

Abusive Replies

Sunburst diagrams showing the raw number of abusive replies sent to MPs before the 2015 and 2017 elections. Rather than showing all candidates, these only show the MPs who were elected (i.e. the successful candidates). They nicely show the proportion of abusive replies sent to each party/gender combination, but give no feeling for the proportion of replies to each individual MP which were abusive. Interactive version here!

Increase in Abuse

An overlapping bar chart showing how the percentage of abusive replies received by MPs per party/gender combination increased between 2015 and 2017. For each party/gender combination, two bars are drawn. The height of the bar in the party colour represents the percentage of replies which were abusive in 2017. The height of the grey bar (drawn at the back) is the percentage of replies which were abusive in 2015, and its width shows the change in volume of abusive replies: the 2015 raw abusive reply count is divided by the 2017 count to give a percentage, which is then used to scale the width of the bar. In short, height shows the change in proportion, while width shows the increase in volume. There is also a simple version of this graph which only shows the change in proportion (i.e. the widths of the two bars are the same). Original version here.
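For readers wanting to reproduce this encoding, here is a minimal matplotlib sketch; the parties and numbers are made-up illustrative values, not the real election data, and the front bars are drawn semi-transparent so the grey 2015 bars behind them stay visible.

    import matplotlib.pyplot as plt

    parties = ["Party A (F)", "Party A (M)", "Party B (F)"]   # hypothetical
    pct_2017 = [4.2, 3.1, 2.5]      # % of replies that were abusive in 2017
    pct_2015 = [2.0, 1.8, 1.1]      # % of replies that were abusive in 2015
    vol_ratio = [0.40, 0.55, 0.30]  # 2015 abusive count / 2017 abusive count

    x = range(len(parties))
    # Grey bars at the back: height = 2015 %, width scaled by the volume ratio.
    plt.bar(x, pct_2015, width=[0.8 * r for r in vol_ratio], color="grey")
    # Party-coloured bars in front: height = 2017 %, full width, semi-transparent.
    plt.bar(x, pct_2017, width=0.8, color=["red", "blue", "green"], alpha=0.5)
    plt.xticks(x, parties)
    plt.ylabel("% of replies that were abusive")
    plt.show()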

Geographical Distribution of Abuse

A map showing the geographical distribution of abusive replies. The map of the UK is divided into the NUTS 1 regions, and each region is coloured based on the percentage of abusive replies sent to MPs who represent that region. Data from both 2015 and 2017 can be displayed to see how the distribution of abuse has changed. Interactive version here!

Thursday 6 September 2018

How difficult is it to understand my web pages? Using GATE to compute a complexity score for Web text.

The Web Science Summer School, which took place from 30 July to 4 August at the L3S Research Centre in Hannover, Germany, gave students a chance to learn about a number of tools and techniques related to web science. As part of this, team member Diana Maynard gave a keynote talk about applying text mining techniques to real-world applications such as sentiment and hate speech detection and political social media analysis, followed by a 90-minute practical GATE tutorial in which the students learnt to use ANNIE, TwitIE and sentiment analysis tools. The keynotes and tutorials throughout the week were complemented with group work, where the students were tasked with the question: “Can more meaningful indicators for text complexity be extracted from web pages?”. Here follows the account of one student team, who, in the space of only 4 hours, managed to use GATE to complete the task – an extremely creditable performance given their very brief exposure to GATE.


After some discussion, our team decided to focus on a very practical problem: the readability metrics commonly used to assess the difficulty of a text do not account for the target audience or the narrative context. We believed a simple approach employing GATE could offer greater insight into how to identify the relevant features associated with text complexity. Everyone had an intuitive understanding of text complexity; it was when trying to fit these ideas into an objective framework that issues arose, as definitions of complexity, understandability, comprehensibility, and readability were mixed and matched.

In our team's vision, the complexity of a document is based not only on the structure of its sentences but also on the context of its narrative and the ease with which the targeted audience can understand it. In our model, the complexity score of a text is linked to the context of the text's narrative. This means texts about certain narrative contexts (topics) are inherently harder to understand than other texts. How hard it is to understand a particular text is also related to the capabilities of the reader. Thus, texts on specific narrative contexts can be characterized to create a score of how hard to understand they will be for certain audiences.

To do this, we proposed the following process: 
  1. Create an instance lexicon for content complexity
    • Collect a set of texts from different narrative contexts that the audience may be expected to read, e.g. celebrity news, political news, sports news, medical information leaflets, coursebook fragments.
    • Identify the relevant entities in those texts, i.e. persons, locations, organizations, percentages, dates, and technical terms.
    • Assess the complexity of each text by using crowdsourcing, e.g. have a sample of UK young adults assess the difficulty of the texts via ratings or procedures like CLOZE. 
    • Assign a complexity value to each entity in the lexicon based on the complexity values of the texts it appeared in and its relevance to those texts.
  2. Assess the complexity of new texts
    • Identify the relevant entities in the text.
    • Employ the entity complexity lexicon to compute an estimated complexity value for the new text (see the sketch after this list).
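A minimal Python sketch of the scoring logic in these two stages, assuming the entities have already been extracted (e.g. with ANNIE and TermRaider) and the crowd has supplied a complexity rating per text; the simple averaging matches the rule described below, while the function names and toy data are our own.

    from collections import defaultdict

    def build_entity_lexicon(rated_texts):
        # rated_texts: list of (entities, complexity) pairs from the crowd study.
        totals, counts = defaultdict(float), defaultdict(int)
        for entities, complexity in rated_texts:
            for e in set(entities):
                totals[e] += complexity
                counts[e] += 1
        # An entity's weight is the average complexity of the texts it appears in.
        return {e: totals[e] / counts[e] for e in totals}

    def score_new_text(entities, lexicon):
        known = [lexicon[e] for e in entities if e in lexicon]
        return sum(known) / len(known) if known else None

    lexicon = build_entity_lexicon([
        (["NHS", "aspirin"], 7.0),    # medical leaflet, rated hard
        (["NHS", "Beyonce"], 3.0),    # celebrity news, rated easy
    ])
    print(score_new_text(["NHS", "aspirin"], lexicon))   # 6.0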
During the allocated time, our team completed the first stage by creating an entity lexicon. We employed GATE to identify entities within an 11-webpage corpus.

Running the TermRaider plugin to identify the entities in the texts.
The corpus was composed of 9 Wikipedia pages and 2 academic articles, and an independent scoring (1-10 scale) of the pages' complexity was given by 4 team reviewers. The entities were then identified for each document by running the ANNIE and TermRaider plugins in the GATE GUI.


Employing ANNIC to search for entities linked to organizations, locations, persons, dates or percentages within the texts. 
These entities were given a complexity score by computing the average complexity of the pages they appeared in. We obtained a set of 5312 entities, which were exported to an XML file.
Result extract exported in XML format from the TermRaider plugin
Once duplicates had been accounted for, our lexicon comprised 906 weighted entity pairs.

Extract from the named entity lexicon after adding the weights based on the complexity scores of the pages they appeared in
This lexicon was used to calculate a complexity score for a new set of pages, which showed significant divergence from the base (readability) scores we were given at the start.

Comparison of the scores assigned by the lexicon (1-10) and the complexity score given to us as a base (0-1)


In general, the entities in a text are associated with the text's narrative context, e.g. celebrity news will include celebrity names and places, while scientific literature will reference percentages, ratios and error estimates. In our model, annotating the complexity of a sample of pages from several narrative contexts can be used to determine a complexity value for each relevant entity, based on the complexity scores of the pages in which it appears; these values can then be used to estimate complexity scores for new pages.

Given the time constraints we had, many of the activities were based on naïve algorithms and carried out within the limits of our resources. We have some further ideas on how this approach could be explored. First, we believe that any complexity score should take into account the audience's capability. In this case, the researcher should appreciate that determining the characteristics of the population they wish to explore is just as important as determining the narrative context and structure of the text. Asking teenagers to read mathematical formulae will yield different complexity scores than if the audience were GPs or older adults.

An objective way of scoring the complexity of a text is to use a comprehension-testing procedure such as CLOZE, in which every 5th word is replaced with a blank space that respondents are asked to fill in. Such a procedure can be used on crowdsourcing platforms like Mechanical Turk to create complexity lexicons for specific audiences: sample texts from diverse narrative contexts (topics) would be selected for assessment by the crowd, which would tell us how complex particular groups of people find certain texts (e.g. UK teenagers may find maths texts really difficult and tweets easy, but the complexity scores may reverse for older Mexican maths professors given the same texts).
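A CLOZE test of this kind is trivial to generate automatically; here is a minimal Python sketch, where the blank token and the choice to blank every 5th word counting from the first are arbitrary illustrative choices.

    def make_cloze(text, n=5, blank="_____"):
        # Replace every n-th word with a blank for respondents to fill in.
        words = text.split()
        return " ".join(blank if (i + 1) % n == 0 else w
                        for i, w in enumerate(words))

    print(make_cloze("The patient should take one tablet twice a day with food"))
    # The patient should take _____ tablet twice a day _____ food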

Another aspect that could easily be improved is the use of centrality metrics like TextRank to determine which named entities are actually relevant to the text, based on their frequency and position within the narrative. Finally, a ranking algorithm like PageRank could be adapted to obtain the complexity scores for the entity lexicon in a way that permits relevant entities to be identified by employing clustering algorithms.

Team: 
Damianos Melidis, L3S Hannover, Germany
Latifah Alshammari, University of Bath, UK
Fernando Santos Sanchez, University of Southampton, UK
Ahmed Al-Ghez, University of Goettingen, Germany
Fatmah Bamashmoos, University of Bristol, UK

Slides from the group presentation

Wednesday 5 September 2018

Students use GATE and Twitter to drive Lego robots—again!

At the university's Headstart Summer School in July 2018, 42 secondary school students (age 16 and 17) from all over the UK (see below for maps) were taught to write Java programs to control Lego robots, using input from the robots (such as the sensor for detecting coloured marks on the floor) as well as operating the motors to move and turn.  The Department of Computer Science provided a Java library for driving the robots and taught the students to use it.

After they had successfully operated the robots, we ran a practical session on 10 and 11 July on "Controlling Robots with Tweets".  We presented a quick introduction to natural language processing (using computer programs to analyse human languages, such as English) and provided them with a bundle of software containing a version of the GATE Cloud Twitter Collector modified to run a special GATE application with a custom plugin to use the Java robot library to control the robots.

The bundle came with a simple "gazetteer" containing two lists of keywords:

"left" list: left, port
"turn" list: turn, take, make, move

and a basic JAPE grammar (set of rules) to make use of it.  JAPE is a specialized programming language used in GATE to match regular expressions over annotations in documents, such as the "Lookup" annotations created whenever the gazetteer finds a matching keyword in a document. (The annotations are similar to XML tags, except that GATE applications can create them as well as read them and they can overlap each other without restrictions.  Technically they form an annotation graph.)



The sample rule we provided would match any keyword from the "turn" list followed by any keyword from the "left" list (with optional other words in between, so that "turn to port", "take a left", "turn left" all work the same way) and then run the code to turn the robot's right motor (making it turn left in place).
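The actual rule is written in JAPE and runs over GATE's annotation graph, but the matching logic can be sketched in plain Python; the word lists come from the gazetteer above, the three-token gap is an illustrative choice, and turn_left() is a hypothetical stand-in for the call into the Java robot library.

    import re

    LEFT = {"left", "port"}
    TURN = {"turn", "take", "make", "move"}

    def turn_left():
        print("running right motor -> robot turns left in place")

    def handle_tweet(tweet, max_gap=3):
        # Match a "turn" keyword followed by a "left" keyword within a few tokens.
        tokens = re.findall(r"[a-z]+", tweet.lower())
        for i, tok in enumerate(tokens):
            if tok in TURN:
                if any(t in LEFT for t in tokens[i + 1:i + 2 + max_gap]):
                    turn_left()
                    return

    for cmd in ["turn to port", "take a left", "turn left"]:
        handle_tweet(cmd)   # all three trigger the turn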

We showed them how to configure the Twitter Collector, follow their own accounts, and then run the collector with the sample GATE application.  Getting the system set up and working took a bit of work, but once the first few groups got their robot to move in response to a tweet, everyone cheered and quickly became more interested.  They then worked on extending the word lists and JAPE rules to cover a wider range of tweeted commands.

Some of the students had also developed interesting Java code the previous day, which they wanted to incorporate into the Twitter-controlled system.  We helped these students add their code to their own copies of the GATE plugin and re-load it so the JAPE rules could call their procedures.

We first ran this project in the Headstart course in July 2017; we made improvements for this year and it was a success again, so we plan to include it in Headstart 2019 too.
The following maps show where the students came from: one map for all the students, and one for the female students.


This work is supported by the European Union's Horizon 2020 project SoBigData (grant agreement no. 654024).  Thanks to Genevieve Gorrell for the diagram illustrating how the system works.

Monday 20 August 2018

Deep Learning in GATE

Few can have failed to notice the rise of deep learning over the last six or seven years, and its role in our emergence from the AI winter. Thanks in part to the increased speed offered by GPUs*, neural net approaches came into their own and out from under the shadow of the support vector machine, offering more scope than SVMs and other previously popular methods, such as Random Forests and CRFs, for continued improvement as training data volumes increase. Natural language processing has traditionally been a multi-step endeavour, perhaps beginning with tokenization and parsing and working up to semantic processing such as question answering. In addition to being labour-intensive, this approach is also limiting, as each step can access only the output of the step before it, and thus throws away potentially valuable information from earlier steps. Deep learning offers the possibility to overcome these limitations by bringing a much greater number of parameters into play (much greater flexibility). Deep neural nets (DNNs) may learn end-to-end solutions, starting with raw data and producing sophisticated output. Furthermore, they can encode much more complex dependencies than those we have seen in less parameterizable approaches--in other words, much more elaborate reasoning. And while we step back from the need to break down involved problems into pieces ourselves, a promising line of work finds that DNN "skills" are also "transferable"--models may, for example, be pre-trained on generic data, providing a basic language understanding that can then be put to use in other specialized contexts (multi-task learning).

For these reasons, deep learning is widely seen as key to continuing progress on a wide range of artificial intelligence tasks including natural language processing, so of course it is of great interest to us here in the GATE team. Classic GATE tasks such as entity recognition and sentence classification could be advanced by utilizing an approach with greater potential to learn a discriminative model, given sufficient training data. And by supporting the substitution of words with "embeddings" (DNN-derived vectors that capture relationships between words) trained on readily available unlabelled general or domain-specific data, we can bring some of the benefits of deep learning even to cases where training data are meagre by deep learning standards. Deep learning is therefore likely to be of benefit in any task but the most trivial, as long as you have the skills and a reasonable amount of data.

The Learning Framework is our ongoing project bringing current machine learning technologies to GATE, enabling users to leverage GATE's ecosystem of text processing offerings to create features to train learners, and to include these learners in text processing pipelines. The guiding vision for the Learning Framework has always been to offer an accessible interface that enables GATE users to get up and running quickly with machine learning, whilst at the same time supporting the most current and interesting of technologies. When it comes to deep learning, meeting these twin objectives is a little more challenging, but we have stepped up to the plate!

Deep learning framework in the GATE GUI

Previous machine learning algorithms would work their magic with comparatively little in the way of tweaking required. Deep learning is, however, an entirely different beast in this respect. In fact, it's more like an entire zoo! As discussed above, the advantage of DNNs is their massive flexibility, but this seriously stretches GATE's previous assumptions about how machine learning works. An integration needs to support the design of an architecture (a "shape" of neural net) and the tuning of many parameters, including dropout, optimization strategy, learning rate, momentum, and many more. All of these factors are critical in obtaining a good performance. The integration is still under (very) intensive development, but it is already possible to get something running relatively quickly with deep learning in GATE. Here are some current highlights, followed by a sketch of the kind of parameters involved:
  • Two of the most-used frameworks for Deep Learning can be used: PyTorch and Keras, both Python-based;
  • Support for both Linux and MacOS (Windows is not yet supported);
  • A range of template architectures, which may produce acceptable results out of the box (though in many cases it will be necessary for the user to adapt the architecture, the parameters of the architecture, or other aspects of the DNN solution);
  • The possibility to work with an initial GATE-created model both inside and outside of GATE.
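As a flavour of those parameters, here is a minimal PyTorch sketch of a sentence classifier exposing the kind of knobs mentioned above (architecture shape, dropout, learning rate, momentum); this is a generic stand-alone example under our own assumptions, not the Learning Framework's API.

    import torch
    import torch.nn as nn

    VOCAB, EMB, HID, CLASSES = 10000, 100, 64, 2     # the architecture "shape"
    DROPOUT, LR, MOMENTUM = 0.5, 0.01, 0.9           # tuning parameters

    class SentenceClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(VOCAB, EMB)      # could load pre-trained embeddings
            self.lstm = nn.LSTM(EMB, HID, batch_first=True)
            self.drop = nn.Dropout(DROPOUT)
            self.out = nn.Linear(HID, CLASSES)

        def forward(self, token_ids):
            x = self.emb(token_ids)
            _, (h, _) = self.lstm(x)                 # final hidden state
            return self.out(self.drop(h[-1]))

    model = SentenceClassifier()
    opt = torch.optim.SGD(model.parameters(), lr=LR, momentum=MOMENTUM)
    loss_fn = nn.CrossEntropyLoss()

    batch = torch.randint(0, VOCAB, (8, 20))         # 8 dummy sentences, 20 tokens
    labels = torch.randint(0, CLASSES, (8,))
    loss = loss_fn(model(batch), labels)
    loss.backward()
    opt.step()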
We encourage anyone who is interested to give it a try and to talk to us about it. There will always be more to add (current challenges include drop-out, gradient clipping, L1/L2 weight regularization, attention, modified weight initialization, char-augmented LSTMs and LSTM-CRF architectures, to name a few) but much is achievable already. This is one of relatively few efforts globally to sift the essence out of this highly active research field and transform it into something relatively high level and generalizable across a range of NLP tasks, making state of the art technologies accessible to non-specialists. There's some documentation available here.

At the same time, we've been applying deep learning in our research in several ways. In a forthcoming EMNLP paper, team member Xingyi Song and co-authors use the fixed-size ordinally-forgetting encoding (FOFE) approach to combine LSTM and CNN neural net architectures in a more computationally efficient way than previously, in order to make better use of context in sentence classification tasks. Together with researchers at KCL and South London and Maudsley NHS Trust, he's also demonstrated the value of this technology in the context of detection of suicidal ideation in medical records.

Furthermore, we have successfully used LSTMs for veracity verification of rumours spread on social media such as Twitter. Our approach makes use of only the tweet content, which it passes through LSTM units that learn to distinguish between true, false and unverifiable rumours. However, the unique part of our approach is that, prior to passing the tweet to the LSTM layer, it first looks within the tweet for some recurring information that is typically used by others to spread rumours, and adjusts the input accordingly: words carrying useful information are kept as they are, and others are downgraded in terms of contribution. This is achieved through an attention layer. We evaluated our approach on the RumourEval 2017 test data and achieved over 60% accuracy, which is currently the state-of-the-art performance for this task.
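A minimal PyTorch sketch of a model with this shape, where a learned per-token attention weighting adjusts the input before the LSTM layer; the sizes and the exact attention formulation are illustrative assumptions, not the published architecture.

    import torch
    import torch.nn as nn

    class RumourLSTM(nn.Module):
        def __init__(self, vocab=10000, emb=100, hid=64, classes=3):
            super().__init__()
            self.emb = nn.Embedding(vocab, emb)
            self.att = nn.Linear(emb, 1)           # scores each token's usefulness
            self.lstm = nn.LSTM(emb, hid, batch_first=True)
            self.out = nn.Linear(hid, classes)     # true / false / unverifiable

        def forward(self, token_ids):
            x = self.emb(token_ids)                             # (batch, len, emb)
            w = torch.softmax(self.att(x).squeeze(-1), dim=1)   # (batch, len)
            x = x * w.unsqueeze(-1)     # keep useful words, downgrade the rest
            _, (h, _) = self.lstm(x)
            return self.out(h[-1])

    model = RumourLSTM()
    tweets = torch.randint(0, 10000, (4, 30))      # 4 dummy tweets, 30 tokens
    print(model(tweets).shape)                     # torch.Size([4, 3])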

*Graphics Processing Units; technology driven by the demands of computer gamers that has been used to speed up deep learning approaches by as much as 250 times compared with CPUs.

Title artwork from https://www.deviantart.com/redwolf518stock

Wednesday 15 August 2018

What matters most to people around the world? Using the GATE social media toolkit to investigate wellbeing.

As part of the EU SoBigData project, the GATE team hosts a number of short research visits, between 2 weeks and 2 months, for all kinds of data scientists (PhD students, researchers, academics, professionals) to come and work with us and use our tools and/or datasets on a project involving text mining and social media analysis. One such visitor was Economics PhD student Giuliano Resce from the University of Roma Tre in Italy. During his month-long visit, he worked with Diana Maynard on a project collecting and analysing millions of public tweets in 7 different languages, in order to understand the different societal priorities of people in different countries of the OECD. The work explored the different opinions on Twitter of people around the world about societal issues such as the environment, housing and life satisfaction.


OECD Better Life Index


Giuliano first used the GATE Twitter Collector to collect a set of tweets, and then processed them with the GATE social media analysis toolkit, using GATE Mimir to investigate the results. Topics were determined using the initial set of OECD topics in 7 languages, which we then expanded into a set of keywords for each topic and language, first using existing lists from the GATE political tweets analyser and then using Word2Vec to find further related keywords.
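As an illustration of the keyword-expansion step with gensim's Word2Vec (the toy corpus and seed word here are placeholders; the real work used per-language models trained over much larger text collections):

    from gensim.models import Word2Vec

    # Toy corpus; in practice a large per-language tweet corpus would be used.
    sentences = [["housing", "rent", "affordable", "crisis"],
                 ["environment", "pollution", "air", "climate"]] * 200
    model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, seed=1)

    # Expand a seed topic keyword with its nearest neighbours in embedding space.
    for word, score in model.wv.most_similar("environment", topn=3):
        print(word, round(score, 2))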

Better Life Index topic frequency at country level in Twitter (percentage)

The ensuing analysis of the tweets then enabled Giuliano to redesign Composite Indices for the OECD’s Better Life Index, a measure of well-being which gives a detailed overview of the social, economic and environmental performances of different countries. In turn, this redesign helps to better reflect the actual needs of the people. The idea is that the aggregate of millions of tweets may provide a representation of the different priorities among the eleven topics of the Better Life Index. By combining topic performances and related Twitter trends, they produced new evidence about the relationship between people’s priorities and policy makers’ activity in the BLI framework.


Rank in Composite BLI using local Twitter trends as Weights and using Equal Weights

A paper about the work has been published in the journal Technological Forecasting and Social Change.