Part Of Speech Tagging

· November 20, 2009

Until now, all the posts here have looked at text in a purely statistical way. What the words actually were mattered less than how common they were, and whether they occurred in a query or a category. There are plenty of applications, however, where a deeper parsing of the text could be hugely beneficial, and the first step in such parsing is often part of speech tagging.

The tags in question are the grammatical parts of speech that the words fall into: the traditional noun, verb, adjective and so on that hopefully most people will dimly remember. Being able to tag a document appropriately is hugely helpful in trying to extract what a document is discussing, and in determining other aspects of the text that are self-evident to a human reader but tricky to determine statistically, particularly with a small number of examples.

The parts of speech are somewhat difficult to work out completely automatically, and even humans can get stuck on words that have many possible interpretations. Almost every system around utilises a corpus: a set of documents that have had their words hand tagged (or hand verified) for parts of speech. This can then be used to extract statistics and build taggers. Because there are many more parts of speech than may come to mind, various codes are used to tag the files; a full list for the common Brown corpus is available on Wikipedia. Some examples are NN for noun, NNS for plural noun, VB for verb, and VBD for verb past tense, and a tagged string might look like this:

The/DT quick/JJ brown/JJ fox/NN
jumped/VBD over/IN the/DT lazy/JJ
dog/NN

For our implementation, we’ll look at a tagger that is relatively simple to write, invented by Eric Brill in the early nineties. The tagger was trained by analysing a corpus and noting the frequencies of the different tags for each word. As words were tagged, each was assigned the most frequent tag for that word if it appeared in the corpus, or tagged as a noun if not. Then a series of transformations was applied, each changing the tag if various conditions were met. The results were compared to the known correct tags, and the rules that added the most accuracy were retained.
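To make that last step concrete, here is a minimal sketch (not Brill’s actual training code) of how a single candidate rule could be scored: apply it to the baseline tagging and count how many tags it corrects, minus how many previously correct tags it breaks. The $baseline, $correct and $rule inputs are assumptions purely for illustration.

<?php
// Sketch of scoring one candidate transformation rule against a hand-tagged
// corpus. $baseline and $correct are parallel arrays of tags, and $rule is
// a callable that returns a (possibly changed) tag for position $i.
function scoreRule($baseline, $correct, $rule) {
        $score = 0;
        foreach($baseline as $i => $tag) {
                $newTag = call_user_func($rule, $baseline, $i);
                if($newTag == $tag) {
                        continue;
                }
                if($newTag == $correct[$i]) {
                        $score++; // the rule fixed a wrong tag
                } else if($tag == $correct[$i]) {
                        $score--; // the rule broke a correct tag
                }
        }
        return $score;
}
?>

Training then greedily keeps the best-scoring rule, re-tags the corpus, and repeats until no rule improves the accuracy enough to be worth keeping.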

Luckily for us, we can just use the most successful rules, and don’t have to reimplement the whole training process. The code here draws from the (many!) implementations of the Brill tagger by Mark Watson in various languages. The rules are pretty straightforward, such as making a word a past participle if it ends with ‘ed’, or an adverb if it ends with ‘ly’.

<?php 
class PosTagger {
        private $dict; 
        
        public function __construct($lexicon) {
                $fh = fopen($lexicon, 'r');
                while($line = fgets($fh)) {
                        // trim the trailing newline so the last tag stays clean
                        $tags = explode(' ', trim($line));
                        $this->dict[strtolower(array_shift($tags))] = $tags;
                }
                fclose($fh);
        }
        
        public function tag($text) {
                preg_match_all("/[\w\d\.]+/", $text, $matches);
                $nouns = array('NN', 'NNS');
                
                $return = array();
                $i = 0;
                foreach($matches[0] as $token) {
                        // default to a common noun
                        $return[$i] = array('token' => $token, 'tag' => 'NN');  
                        
                        // remove trailing full stops
                        if(substr($token, -1) == '.') {
                                $token = preg_replace('/\.+$/', '', $token);
                        }
                        
                        // get from dict if set
                        if(isset($this->dict[strtolower($token)])) {
                                $return[$i]['tag'] = $this->dict[strtolower($token)][0];
                        }       
                        
                        // Convert verbs following a determiner (e.g. 'the') to nouns
                        if($i > 0) {
                                if($return[$i - 1]['tag'] == 'DT' && 
                                        in_array($return[$i]['tag'], 
                                                        array('VBD', 'VBP', 'VB'))) {
                                        $return[$i]['tag'] = 'NN';
                                }
                        }
                        
                        // Convert noun to number if . appears
                        if($return[$i]['tag'][0] == 'N' && strpos($token, '.') !== false) {
                                $return[$i]['tag'] = 'CD';
                        }
                        
                        // Convert noun to past participle if it ends with 'ed'
                        if($return[$i]['tag'][0] == 'N' && substr($token, -2) == 'ed') {
                                $return[$i]['tag'] = 'VBN';
                        }
                        
                        // Anything that ends 'ly' is an adverb
                        if(substr($token, -2) == 'ly') {
                                $return[$i]['tag'] = 'RB';
                        }
                        
                        // Common noun to adjective if it ends with al
                        if(in_array($return[$i]['tag'], $nouns) 
                                                && substr($token, -2) == 'al') {
                                $return[$i]['tag'] = 'JJ';
                        }
                        
                        // Noun to verb if the word before is 'would'
                        if($i > 0) {
                                if($return[$i]['tag'] == 'NN' 
                                        && strtolower($return[$i-1]['token']) == 'would') {
                                        $return[$i]['tag'] = 'VB';
                                }
                        }
                        
                        // Convert noun to plural if it ends with an s
                        if($return[$i]['tag'] == 'NN' && substr($token, -1) == 's') {
                                $return[$i]['tag'] = 'NNS';
                        }
                        
                        // Convert common noun to gerund if it ends with 'ing'
                        if(in_array($return[$i]['tag'], $nouns) 
                                        && substr($token, -3) == 'ing') {
                                $return[$i]['tag'] = 'VBG';
                        }
                        
                        // If we get noun noun, and the second can be a verb, convert to verb
                        if($i > 0) {
                                if(in_array($return[$i]['tag'], $nouns) 
                                                && in_array($return[$i-1]['tag'], $nouns) 
                                                && isset($this->dict[strtolower($token)])) {
                                        if(in_array('VBN', $this->dict[strtolower($token)])) {
                                                $return[$i]['tag'] = 'VBN';
                                        } else if(in_array('VBZ', 
                                                        $this->dict[strtolower($token)])) {
                                                $return[$i]['tag'] = 'VBZ';
                                        }
                                }
                        }
                        
                        $i++;
                }
                
                return $return;
        }
}
?>

The lexicon for the class is available, or could be extracted with some work from the Brown corpus itself. There are bigger corpora available, which could give better results, but at the cost of more processing, and more overhead.
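Going by the constructor above, each line of the lexicon is expected to be a word followed by its possible tags, most likely tag first, something along these lines (illustrative entries rather than the actual file contents):

the DT
quick JJ
fox NN
said VBD VBN

If you did want to build such a file yourself from a word/TAG tagged corpus, a rough sketch might look like the following; the corpus file name and exact format here are assumptions for illustration.

<?php
// Rough sketch: count tag frequencies per word from a corpus of word/TAG
// tokens, then write each word with its tags ordered most frequent first.
// 'tagged_corpus.txt' is an assumed input file, not part of the original post.
$counts = array();
foreach(file('tagged_corpus.txt') as $line) {
        foreach(preg_split('/\s+/', trim($line), -1, PREG_SPLIT_NO_EMPTY) as $pair) {
                $pos = strrpos($pair, '/');
                if($pos === false) {
                        continue;
                }
                $word = strtolower(substr($pair, 0, $pos));
                $tag = substr($pair, $pos + 1);
                if(!isset($counts[$word][$tag])) {
                        $counts[$word][$tag] = 0;
                }
                $counts[$word][$tag]++;
        }
}

$fh = fopen('lexicon.txt', 'w');
foreach($counts as $word => $tags) {
        arsort($tags); // most frequent tag first
        fwrite($fh, $word . ' ' . implode(' ', array_keys($tags)) . "\n");
}
fclose($fh);
?>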

<?php
// little helper function to print the results
function printTag($tags) {
        foreach($tags as $t) {
                echo $t['token'] . "/" . $t['tag'] .  " ";
        }
        echo "\n";
}

$tagger = new PosTagger('lexicon.txt');
$tags = $tagger->tag('The quick brown fox jumped over the lazy dog');
printTag($tags);
?>

With the quick brown fox example we got perfect tagging (see the example up above), but for a tougher test we can try this with the grammatical powerhouse that is Twitter. While we might not get perfect results, hopefully we’ll get something in the ballpark, and to keep it interesting we can take a look at the nouns that are tagged to see how well they fit the message. Thanks to Sam, Helgi and Johanna for their tweets.

<?php
// @samsoir
$tags = $tagger->tag("Coffee... yes I've said it already today, but it really does keep ones mind fresh and aler [zzzzzzzzZZZZZZZ]");
printTag($tags);

// @h
$tags = $tagger->tag("How can I make twitter not think that @h&m is not a mention to / about me! Gah. I have had enough of these Jimmy Choo and wtf ever things.");
printTag($tags);

// @johannacherry
$tags = $tagger->tag("i think my brain has checked out for the day..i've been playing with my hair and thinking about toothpaste for about 10 minutes now...");
printTag($tags);
?>

Output:

Coffee.../NN
yes/UH I/NN ve/NN said/VBD it/PRP
already/RB
today/NN but/CC it/PRP
really/RB does/VBZ
keep/VB ones/NNS
mind/NN fresh/JJ and/CC aler/NN zzzzzzzzZZZZZZZ/NN 

Noun-wise, this has picked up Coffee, Today, Ones, Mind and zzzzzzz, which does sum up the message pretty nicely. Notice that the typo of ‘alert’ is mistagged, as is "I’ve", the latter suffering from the simplicity of the tokeniser.
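The tokeniser is just the single preg_match_all() call at the top of tag(). One small, speculative tweak would be to allow apostrophes inside a token, which keeps straight-quoted contractions together (the curly apostrophe in the tweet above would still need extra handling):

<?php
// Sketch only: a more permissive token pattern that allows apostrophes,
// so simple contractions survive as one token. This assumes straight
// apostrophes; the curly ones some clients produce would still split.
$text = "I've said it already today";
preg_match_all("/[\w\.']+/", $text, $matches);
echo implode(' ', $matches[0]); // I've said it already today
?>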

How/WRB
can/MD I/NN make/VB twitter/NN not/RB
think/VBP that/IN h/NN
m/NN is/VBZ not/RB
a/DT mention/NN to/TO about/IN me/PRP Gah./NN 
I/NN have/VBP had/VBD enough/RB of/IN these/DT
Jimmy/NNP Choo/NN and/CC 
wtf/NN ever/RB things./NNS

On the nouns we have: I, twitter, h, m, mention, Gah, I, Jimmy, Choo, wtf, things. An extension to the tokeniser could help here, as could an addition to the lexicon to get wtf marked as UH (an interjection or exclamation).
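As a sketch of that lexicon addition, one quick (and rather blunt) fix is to append an entry so that wtf defaults to UH. The file name matches the earlier example, and a small addWord() method on PosTagger (hypothetical, not in the class above) would be a tidier alternative.

<?php
// Sketch only: append a lexicon entry so 'wtf' defaults to UH. Later lines
// overwrite earlier ones in the constructor, so the appended entry wins.
file_put_contents('lexicon.txt', "wtf UH\n", FILE_APPEND);

$tagger = new PosTagger('lexicon.txt');
printTag($tagger->tag('wtf ever things'));
// should now give wtf/UH rather than wtf/NN
?>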

i/NN think/VBP my/PRP$ brain/NN
has/VBZ checked/VBN out/IN for/IN the/DT day..
i/CD ve/NN been/VBN playing/VBG with/IN my/PRP$ hair/NN
and/CC thinking/VBG about/IN toothpaste/NN
for/IN about/IN 10/NN minutes/NNS
now.../RB 

Again we can see some tokenisation-driven errors, but brain, hair, 10 and minutes pop out, which isn’t too bad.

There are taggers that work quite differently, for example Hidden Markov Model taggers that estimate tag probabilities from the corpus, but given the small amount of code needed for these results, I think the Brill tagger is a pretty nice option! There’s much that could be done to tidy this one up, but particularly for longer texts it gives enough data to do some useful entity extraction and further processing.
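As a taste of that further processing, here is a trivial sketch (nothing like a real entity extractor) that simply pulls the noun-tagged tokens out of the tagger’s output:

<?php
// Trivial sketch: collect the tokens tagged as nouns, as a crude first
// step towards the entity extraction mentioned above.
function extractNouns($tags) {
        $nouns = array();
        foreach($tags as $t) {
                if(in_array($t['tag'], array('NN', 'NNS', 'NNP', 'NNPS'))) {
                        $nouns[] = $t['token'];
                }
        }
        return $nouns;
}

$tagger = new PosTagger('lexicon.txt');
echo implode(', ', extractNouns($tagger->tag('The quick brown fox jumped over the lazy dog')));
// fox, dog
?>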