ngram_test - Test n-gram language model

Table of Contents
Synopsis
OPTIONS
Hints

Synopsis

ngram_test [input file0] [input file1] ... -g ifile [-w ifile] [-S ifile] [-raw_stats] [-brief] [-f] [-input_format string] [-prev_tag string] [-prev_prev_tag string] [-last_tag string] [-default_tags]

ngram_test is for testing ngram models generated by ngram_build.

How do we test an ngram model?

ngram_test will compute the entropy (or perplexity, see below) of some test data, given an ngram model. The entropy gives a measure of how likely the ngram model is to have generated the test data. Entropy is defined (for a sliding-window type ngram) as:

\[H = -\frac{1}{Q} \sum_{i=1}^{Q} \log_2 P(w_i | w_{i-1}, w_{i-2}, \ldots, w_{i-N+1}) \]

where \(Q\) is the number of words of test data and \(N\) is the order of the ngram model. Perplexity is a more intuitive measure, defined as:

\[B = 2^H \]

The perplexity of an ngram model with vocabulary size \(V\) will be between 1 and \(V\). Low perplexity indicates a more predictable language, and in speech recognition, models with low perplexity on test data (i.e. data NOT used to estimate the model in the first place) typically give better recognition accuracy than models with higher perplexity (this is not guaranteed, however). ngram_test works with non-sliding-window type models when the input format is ngram_per_line.
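As a small worked example (the numbers are purely illustrative, not output from ngram_test): suppose the test data contains \(Q = 4\) words and the model assigns each of the four ngrams a probability of 0.25. Then

\[H = -\frac{1}{4} \sum_{i=1}^{4} \log_2(0.25) = -\frac{1}{4}(4 \times -2) = 2, \qquad B = 2^2 = 4,\]

which is exactly the perplexity of a uniform model over a four-word vocabulary; a model that has captured any structure in the language should score below \(V\).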

Input data format

The data input format options are the same as for ngram_build, as is the treatment of sentence start/end using special tags.

Note: To get meaningful entropy/perplexity figures, use the same data input format and the same treatment of sentence start/end in both ngram_build and ngram_test.
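For illustration only (these lines are invented data, and a trigram model is assumed for the ngram_per_line case), sentence_per_line input contains one sentence per line:

    the cat sat on the mat
    the dog barked

while ngram_per_line input contains one complete ngram per line:

    the cat sat
    cat sat on
    sat on the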

See also

  • ngram_build

OPTIONS

-g ifile

grammar file (required)

-w ifile

filename containing word list (required for some grammar formats)

-S ifile

script file

-raw_stats

print unnormalised entropy and sample count

-brief

print results in brief format

-f

print stats for each file

-input_format string

format of input data (default sentence_per_line); may also be sentence_per_file or ngram_per_line.

Pseudo-words:

-prev_tag string

tag before sentence start

-prev_prev_tag string

tag used for all words before 'prev_tag'

-last_tag string

tag after sentence end

-default_tags

use default tags of !ENTER, !EXIT and !EXIT respectively
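For example, a typical invocation (the file names here are placeholders, not files shipped with the tools) might be:

    ngram_test test_data.txt -g trained.ngram -w wordlist -input_format sentence_per_line -default_tags

Here trained.ngram is assumed to be a model previously written by ngram_build from data using the same input format and the same sentence start/end tags.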

Hints

I got a perplexity of Infinity - what went wrong?

A perplexity of Infinity means that at least one of the ngrams in your test data had a probability of zero. Possible reasons for this include: