fibber.metrics.fluency.gpt2_perplexity_metric module

This metric computes the perplexity ratio ppl(paraphrase) / ppl(original text).

The perplexity is estimated using a GPT-2 model. Because perplexity reflects how natural a sentence looks to the language model, this metric can reveal the fluency of a paraphrase relative to the original text.
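The ratio itself is straightforward once per-token log-probabilities are available. The sketch below is not the module's implementation; the function names `perplexity` and `perplexity_ratio` are hypothetical, and the log-probabilities would in practice come from a GPT-2 forward pass.

```python
import math

def perplexity(token_logprobs):
    # Perplexity is exp of the average negative log-likelihood per token.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def perplexity_ratio(paraphrase_logprobs, original_logprobs):
    # The quantity this metric reports: ppl(paraphrase) / ppl(original).
    return perplexity(paraphrase_logprobs) / perplexity(original_logprobs)

# A ratio near 1 means the paraphrase is about as fluent as the original;
# a large ratio means the paraphrase is much less fluent.
ratio = perplexity_ratio([-1.2, -0.8, -1.0], [-1.2, -0.8, -1.0])
```

A uniform distribution over two tokens gives per-token log-probability of -log 2, so its perplexity is exactly 2; identical inputs give a ratio of 1.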

class fibber.metrics.fluency.gpt2_perplexity_metric.GPT2PerplexityMetric(gpt2_pretrained_model='gpt2-medium', gpt2_gpu_id=-1, **kwargs)[source]

Bases: fibber.metrics.metric_base.MetricBase

This metric computes the perplexity of the paraphrased text divided by the perplexity of the original text. Perplexity is measured using a GPT-2 model.

Initialize the GPT-2 model specified by gpt2_pretrained_model, placing it on the GPU given by gpt2_gpu_id (-1 for CPU).

fibber.metrics.fluency.gpt2_perplexity_metric.make_batch(toks_list)[source]

Convert a list of token sequences into a single batch tensor.
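Batching variable-length token sequences requires padding them to a common length. The sketch below illustrates the idea with plain Python lists; it is not the module's implementation, the name `make_batch_sketch` is hypothetical, and the actual function returns a tensor rather than nested lists.

```python
def make_batch_sketch(toks_list, pad_id=0):
    """Pad token sequences to the same length and build an attention mask."""
    max_len = max(len(toks) for toks in toks_list)
    # Right-pad each sequence with pad_id up to the batch's maximum length.
    batch = [toks + [pad_id] * (max_len - len(toks)) for toks in toks_list]
    # Mask marks real tokens with 1 and padding with 0.
    mask = [[1] * len(toks) + [0] * (max_len - len(toks)) for toks in toks_list]
    return batch, mask

batch, mask = make_batch_sketch([[11, 12, 13], [21]])
```

The mask matters when computing perplexity: padded positions must be excluded from the average log-likelihood, or shorter sentences would be penalized.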

fibber.metrics.fluency.gpt2_perplexity_metric.make_input_output_pair(tokenizer, x)[source]

Tokenize the text, then construct the input and target sequences for GPT-2.
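For a causal language model like GPT-2, the target at each position is the next token, so input and output are the same sequence shifted by one. The sketch below shows only that shift; the name `shift_for_lm` is hypothetical and the real function also performs tokenization.

```python
def shift_for_lm(toks):
    """Build a (input, target) pair for next-token prediction."""
    # Input drops the last token; target drops the first,
    # so target[i] is the token the model should predict after input[i].
    return toks[:-1], toks[1:]

inp, tgt = shift_for_lm([101, 5, 6, 7])
```

The per-position log-probabilities of `tgt` under the model, averaged and exponentiated, give the perplexity used by this metric.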