Computing four evaluation metrics for text generation: BLEU, METEOR, ROUGE, and CIDEr

GitHub download link: https://github.com/Maluuba/nlg-eval

Place the downloaded files in your project directory, then use the following code to compute the metrics.

The code is written as follows:

from nlgeval import NLGEval

nlgeval = NLGEval()

# The model generated three sentences; each one has two reference sentences.
hyp = ['this is the model generated sentence1 which seems good enough',
       'this is sentence2 which has been generated by your model',
       'this is sentence3 which has been generated by your model']
ref1 = ['this is one reference sentence for sentence1',
        'this is a reference sentence for sentence2 which was generated by your model',
        'this is a reference sentence for sentence3 which was generated by your model']
ref2 = ['this is one more reference sentence for sentence1',
        'this is the second reference sentence for sentence2',
        'this is a reference sentence for sentence3 which was generated by your model']
refs = [ref1, ref2]
ans = nlgeval.compute_metrics(hyp_list=hyp, ref_list=refs)

# Alternatively, compute the metrics directly from text files
# (one sentence per line):
# from nlgeval import compute_metrics
# res = compute_metrics(hypothesis='nlg-eval-master/examples/hyp.txt',
#                       references=['nlg-eval-master/examples/ref1.txt',
#                                   'nlg-eval-master/examples/ref2.txt'])

print(ans)

The output is as follows:

{'Bleu_2': 0.5079613089004589, 'Bleu_3': 0.35035098185199764, 'Bleu_1': 0.6333333333122222, 'Bleu_4': 0.25297649984340986, 'ROUGE_L': 0.5746244363308142, 'CIDEr': 1.496565428735557, 'METEOR': 0.3311277692098822}
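To make the Bleu_1 score above less of a black box, here is a minimal sketch of how sentence-level BLEU-1 is defined: clipped unigram precision multiplied by a brevity penalty. This follows the standard BLEU formulation, not nlg-eval's exact implementation (which works at the corpus level and smooths differently), so the numbers will not match the library's output exactly.

```python
from collections import Counter
import math

def bleu1(hypothesis, references):
    """Sentence-level BLEU-1: clipped unigram precision times a
    brevity penalty. A simplified sketch of the standard definition."""
    hyp = hypothesis.split()
    refs = [r.split() for r in references]
    # Clip each hypothesis unigram count by its maximum count in any reference.
    max_ref = Counter()
    for ref in refs:
        for tok, n in Counter(ref).items():
            max_ref[tok] = max(max_ref[tok], n)
    clipped = sum(min(n, max_ref[tok]) for tok, n in Counter(hyp).items())
    precision = clipped / len(hyp)
    # Brevity penalty: penalize hypotheses shorter than the closest reference.
    closest = min((abs(len(r) - len(hyp)), len(r)) for r in refs)[1]
    bp = 1.0 if len(hyp) >= closest else math.exp(1 - closest / len(hyp))
    return bp * precision

# 'sat' has no match in the reference, so 5 of 6 unigrams are clipped in.
print(bleu1('the cat sat on the mat', ['the cat is on the mat']))  # 0.8333...
```

Bleu_2 through Bleu_4 extend the same idea to bigrams through 4-grams, combining the precisions with a geometric mean.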


Reposted from www.cnblogs.com/AntonioSu/p/12041325.html