Natural Language Processing (NLP) practitioners often have to deal with analyzing large corpora of unstructured documents and this is often a tedious process. Python tools like NLTK do not scale to large production data sets and cannot be plugged into a distributed scalable framework like Apache Spark or Apache Flink.
The Apache OpenNLP library is a popular machine-learning-based toolkit for processing unstructured text. It combines a permissive license, an easy-to-use API, and a set of components that are highly customizable and trainable to achieve very high accuracy on a particular dataset. Built-in evaluation makes it possible to measure and tune OpenNLP's performance on the documents that need to be processed.
From sentence detection and tokenization to parsing and named entity recognition, Apache OpenNLP has the tools to address every task in a natural language processing workflow. It applies machine learning algorithms such as Perceptron and Maxent, combined with tools such as word2vec, to achieve state-of-the-art results. In this talk, we'll see a demo of large-scale Named Entity extraction and Text classification using the various Apache OpenNLP components, wrapped into an Apache Flink stream-processing pipeline and as an Apache NiFi processor.
NLP practitioners will come away from this talk with a better understanding of how the various Apache OpenNLP components can help in processing large volumes of unstructured data using a highly scalable and distributed framework like Apache Spark, Apache Flink, or Apache NiFi.
Size: 1.96 MB
Language: en
Added: Jun 26, 2017
Slides: 36 pages
Slide Content
Large Scale Processing of Text
Suneel Marthi
DataWorks Summit 2017,
San Jose, California
@suneelmarthi
$WhoAmI
●Principal Software Engineer in the Office of Technology, Red Hat
●Member of Apache Software Foundation
●Committer and PMC member on Apache Mahout, Apache OpenNLP, Apache Streams
What is a Natural Language?
Any language that has evolved naturally in humans through use and repetition, without conscious planning or premeditation.
(From Wikipedia)
What is NOT a Natural Language?
Characteristics of Natural Language
Unstructured
Ambiguous
Complex
Hidden semantics
Ironic
Informal
Unpredictable
Rich
Most up-to-date
Noisy
Hard to search
and it holds most of human knowledge
As information overload grows ever worse, computers may become our only hope for handling a growing deluge of documents.
MIT Press - May 12, 2017
What is Natural Language Processing?
NLP is a field of computer science, artificial intelligence and computational linguistics concerned with the interactions between computers and human (natural) languages, and, in particular, concerned with programming computers to fruitfully process large natural language corpora. (From Wikipedia)
???
How?
By solving small problems each time
A pipeline where an ambiguity type is solved, incrementally.
Sentence Detector
Mr. Robert talk is today at room num. 7. Let's go?
❌ Breaking after every period wrongly treats "Mr." and "num." as sentence ends.
✅ Correct: two sentences, with a break only after "7."
Tokenizer
Mr. Robert talk is today at room num. 7. Let's go?
❌ Over-splitting tears abbreviations like "Mr." and "num." apart.
✅ Correct: abbreviations stay whole while "?" and "'s" become separate tokens.
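The sentence-detection ambiguity above can be reproduced with a naive rule. The sketch below (plain Java, no OpenNLP, all names hypothetical) splits after every period and shows why abbreviations like "Mr." defeat such a rule — exactly the ambiguity a trained sentence-detection model resolves from context:

```java
import java.util.Arrays;
import java.util.List;

public class NaiveSentenceSplit {

    // Naive rule: a sentence ends at every period followed by whitespace.
    static List<String> naiveSplit(String text) {
        return Arrays.asList(text.split("(?<=\\.)\\s+"));
    }

    public static void main(String[] args) {
        String text = "Mr. Robert talk is today at room num. 7. Let's go?";
        List<String> pieces = naiveSplit(text);
        // The rule wrongly breaks after "Mr." and "num.", producing
        // 4 fragments where only 2 sentences exist.
        System.out.println(pieces.size()); // 4
        pieces.forEach(System.out::println);
    }
}
```

A statistical sentence detector instead learns from surrounding context whether a given period ends a sentence, rather than applying a fixed rule.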
By solving small problems each time
Each step of a pipeline solves one ambiguity problem.
Name Finder
<Person>Washington</Person> was the first president of the USA.
<Place>Washington</Place> is a state in the Pacific Northwest region
of the USA.
POS Tagger
Laura Keene brushed by him with the glass of water .
| | | | | | | | | | |
NNP NNP VBD IN PRP IN DT NN IN NN .
By solving small problems each time
A pipeline can be long and resolve many ambiguities
Lemmatizer
He is better than many others
| | | | | |
He be good than many other
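The lemmatizer step maps each (token, POS tag) pair to a lemma. As a rough illustration — this is not OpenNLP's API, and the tiny table is made up — a dictionary lookup in plain Java captures the idea; a real dictionary lemmatizer is backed by a far larger word/tag/lemma list in the same spirit:

```java
import java.util.HashMap;
import java.util.Map;

public class TinyLemmatizer {

    // Toy (token|POS) -> lemma table, purely for illustration.
    static final Map<String, String> DICT = new HashMap<>();
    static {
        DICT.put("is|VBZ", "be");
        DICT.put("better|JJR", "good");
        DICT.put("others|NNS", "other");
    }

    // Unknown (token, tag) pairs fall back to the surface form.
    static String lemmatize(String token, String posTag) {
        return DICT.getOrDefault(token + "|" + posTag, token);
    }

    public static void main(String[] args) {
        String[] tokens = {"He", "is", "better", "than", "many", "others"};
        String[] tags   = {"PRP", "VBZ", "JJR", "IN", "JJ", "NNS"};
        StringBuilder lemmas = new StringBuilder();
        for (int i = 0; i < tokens.length; i++) {
            lemmas.append(lemmatize(tokens[i], tags[i])).append(' ');
        }
        System.out.println(lemmas.toString().trim()); // He be good than many other
    }
}
```

Note why the POS tag matters: "better|JJR" (comparative adjective) lemmatizes to "good", while a verb reading of "better" would not.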
Apache OpenNLP
Mature project (> 10 years)
Actively developed
Machine learning
Java
Easy to train
Highly customizable
Fast
Language Detector (soon)
Sentence detector
Tokenizer
Part of Speech Tagger
Lemmatizer
Chunker
Parser
....
Training Models for English
Corpus - OntoNotes (https://catalog.ldc.upenn.edu/ldc2013t19)
bin/opennlp TokenNameFinderTrainer.ontonotes -lang eng -ontoNotesDir
~/opennlp-data-dir/ontonotes4/data/files/data/english/ -model en-ner-ontonotes.bin
bin/opennlp POSTaggerTrainer.ontonotes -lang eng -ontoNotesDir
~/opennlp-data-dir/ontonotes4/data/files/data/english/ -model en-pos-maxent.bin
Training Models for Portuguese
Corpus - Amazonia (http://www.linguateca.pt/floresta/corpus.html)
bin/opennlp ChunkerTrainerME.ad -lang por -data amazonia.ad -model por-chunk.bin -encoding
ISO-8859-1
bin/opennlp TokenNameFinderTrainer.ad -lang por -data amazonia.ad -model por-ner.bin -encoding
ISO-8859-1
Name Finder API - Detect Names
NameFinderME nameFinder = new NameFinderME(new TokenNameFinderModel(
    OpenNLPMain.class.getResource("/opennlp-models/por-ner.bin")));

for (String[][] document : documents) {
    for (String[] sentence : document) {
        Span[] nameSpans = nameFinder.find(sentence);
        // do something with the names
    }
    nameFinder.clearAdaptiveData();
}
Name Finder API - Train a model
ObjectStream<String> lineStream = new PlainTextByLineStream(
    new FileInputStream("en-ner-person.train"), StandardCharsets.UTF_8);

TokenNameFinderModel model;
try (ObjectStream<NameSample> sampleStream =
        new NameSampleDataStream(lineStream)) {
    model = NameFinderME.train("en", "person", sampleStream,
        TrainingParameters.defaultParams(),
        new TokenNameFinderFactory());
}
model.serialize(modelFile);
Name Finder API - Evaluate a model
TokenNameFinderEvaluator evaluator = new TokenNameFinderEvaluator(new
NameFinderME(model));
evaluator.evaluate(sampleStream);
FMeasure result = evaluator.getFMeasure();
System.out.println(result.toString());
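The FMeasure printed above is the harmonic mean of precision and recall over the detected name spans. A minimal sketch of the arithmetic (the span counts are made up for illustration):

```java
public class FMeasureSketch {

    // F1 = harmonic mean of precision (tp / (tp + fp))
    // and recall (tp / (tp + fn)).
    static double f1(double tp, double fp, double fn) {
        double precision = tp / (tp + fp);
        double recall = tp / (tp + fn);
        return 2 * precision * recall / (precision + recall);
    }

    public static void main(String[] args) {
        // Assumed counts: 90 correct spans, 10 spurious, 20 missed.
        System.out.printf("F1 = %.3f%n", f1(90, 10, 20)); // F1 = 0.857
    }
}
```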
Name Finder API - Cross Evaluate a model
FileInputStream sampleDataIn = new FileInputStream("en-ner-person.train");
ObjectStream<NameSample> sampleStream = new NameSampleDataStream(
    new PlainTextByLineStream(sampleDataIn.getChannel(),
        StandardCharsets.UTF_8));
TokenNameFinderCrossValidator evaluator = new
    TokenNameFinderCrossValidator("en", 100, 5);
evaluator.evaluate(sampleStream, 10);
FMeasure result = evaluator.getFMeasure();
System.out.println(result.toString());
[Pipeline diagram] Language Detector → Sentence Detector → Tokenizer → POS Tagger → Lemmatizer → Name Finder → Chunker, run per detected language (Language 1 … Language N), feeding an Index.
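The per-language pipeline pictured here is just a chain of stages, each consuming the previous stage's output. A toy sketch in plain Java (stand-in functions, not OpenNLP components) shows the composition pattern:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class PipelineSketch {

    // Two stand-in stages: a whitespace "tokenizer" and a lowercasing
    // "normalizer". Real stages would be trained NLP components.
    static List<String> run(String text) {
        Function<String, List<String>> tokenizer =
                t -> Arrays.asList(t.split("\\s+"));
        Function<List<String>, List<String>> normalizer =
                toks -> toks.stream().map(String::toLowerCase)
                            .collect(Collectors.toList());
        // Stages compose left to right, as in the diagram.
        return tokenizer.andThen(normalizer).apply(text);
    }

    public static void main(String[] args) {
        System.out.println(run("Washington was the FIRST president"));
        // [washington, was, the, first, president]
    }
}
```

The same composition shape is what the Flink pipeline in the next section expresses with chained map functions on a stream.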
Apache Flink
Mature project - 320+ contributors, > 11K commits
Very Active project on Github
Java/Scala
Streaming first
Fault-Tolerant
Scalable - to 1000s of nodes and more
High Throughput, Low Latency
Apache Flink - POS Tagger and NER
final StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
private static class LanguageSelector implements OutputSelector<Tuple2<String, String>> {
public Iterable<String> select(Tuple2<String, String> s) {
List<String> list = new ArrayList<>();
list.add(languageDetectorME.predictLanguage(s.f1).getLang());
return list;
}
}
private static class PorTokenizerMapFunction implements MapFunction<Tuple2<String, String>,
Tuple2<String, String[]>> {
public Tuple2<String, String[]> map(Tuple2<String, String> s) {
return new Tuple2<>(s.f0, porTokenizer.tokenize(s.f1));
}
}
private static class PorPOSTaggerMapFunction implements MapFunction<Tuple2<String, String[]>,
POSSample> {
public POSSample map(Tuple2<String, String[]> s) {
String[] tags = porPosTagger.tag(s.f1);
return new POSSample(s.f0, s.f1, tags);
}
}
private static class PorNameFinderMapFunction implements MapFunction<Tuple2<String, String[]>,
NameSample> {
public NameSample map(Tuple2<String, String[]> s) {
Span[] names = porNameFinder.find(s.f1);
return new NameSample(s.f0, s.f1, names, null, true);
}
}
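The LanguageSelector above routes each tuple to a language-specific branch of the stream. Stripped of Flink, the routing idea is a keyed dispatch. The sketch below uses a toy one-word detector — an assumption for illustration only; real routing calls a trained language-detection model:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LanguageRouter {

    // Toy detector keyed on one telltale word per language; a real
    // pipeline would call a trained language-detection model here.
    static String detect(String text) {
        return text.contains("presidente") ? "por" : "eng";
    }

    // Group documents into one bucket per detected language, the same
    // dispatch the Flink selector performs on the stream.
    static Map<String, List<String>> route(List<String> docs) {
        Map<String, List<String>> buckets = new HashMap<>();
        for (String doc : docs) {
            buckets.computeIfAbsent(detect(doc), k -> new ArrayList<>()).add(doc);
        }
        return buckets;
    }

    public static void main(String[] args) {
        Map<String, List<String>> buckets = route(List.of(
                "Washington was the first president of the USA.",
                "Washington foi o primeiro presidente dos EUA."));
        System.out.println(buckets.get("eng").size() + " eng, "
                + buckets.get("por").size() + " por"); // 1 eng, 1 por
    }
}
```

Each bucket then feeds the language-specific tokenizer, POS tagger, and name finder shown in the map functions above.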
What’s Coming ??
●DL4J: Mature Project: 114 contributors, ~8k commits
●Modular: Tensor library, reinforcement learning, ETL,..
●Focused on integrating with the JVM ecosystem while supporting state-of-the-art hardware like GPUs on large clusters
●Implements most neural nets you’d need for language
●Named Entity Recognition using DL4J with LSTMs
●Language Detection using DL4J with LSTMs
●Possible: Translation using Bidirectional LSTMs with embeddings
●Computation graph architecture for more advanced use cases