fibber.metrics.fluency.bert_perplexity_metric module
This metric computes the perplexity ratio ppl(paraphrase) / ppl(original text). The perplexity is estimated using a BERT model. This metric can serve as a proxy for the fluency of a sentence.
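The ratio above can be illustrated with a small, self-contained sketch: perplexity is the exponential of the negative mean token log-likelihood, and the metric divides the paraphrase's perplexity by the original's. The helper names below are illustrative only and are not part of the fibber API.

```python
import math


def perplexity(token_log_probs):
    """Perplexity = exp of the negative mean per-token log-likelihood."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))


def perplexity_ratio(paraphrase_log_probs, original_log_probs):
    """ppl(paraphrase) / ppl(original); values below 1 suggest the
    paraphrase is at least as fluent as the original under the LM."""
    return perplexity(paraphrase_log_probs) / perplexity(original_log_probs)
```

For example, a paraphrase whose tokens each have probability 1/4 against an original whose tokens each have probability 1/2 yields a ratio of 2, i.e. the paraphrase is judged less fluent.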
class fibber.metrics.fluency.bert_perplexity_metric.BertPerplexityMetric(dataset_name, trainset, bert_ppl_gpu_id=-1, bert_ppl_filter=-1, **kwargs)[source]

    Bases: fibber.metrics.metric_base.MetricBase

    This metric computes the perplexity of the paraphrased text divided by the perplexity of the original text. The perplexity is measured using a BERT model.

    Initialize the BERT perplexity model.
    property lm_model

    property tokenizer
    property