Chinese to English model

Helsinki-NLP/opus-mt-zh-en · Hugging Face: https://huggingface.co/Helsinki-NLP/opus-mt-zh-en?text=%E6%88%91%E5%8F%AB%E6%B2%83%E5%B0%94%E5%A4%AB%E5%86%88%EF%BC%8C%E6%88%91%E4%BD%8F%E5%9C%A8%E6%9F%8F%E6%9E%97%E3%80%82

The article "An attempt to use the HuggingFace translation model" (Shanyin Youth's Blog, CSDN) shows how to use the translation models in HuggingFace. HuggingFace is a well-known group in the NLP field: it has done a lot of work on pre-trained models and has open-sourced many of them, including models already fine-tuned for a specific NLP task that can be used directly. This article uses one of the ready-to-use translation models provided by HuggingFace. The available translation models are listed at https://huggingface.co/models?pipeline_tag=translation; most of them are published by Helsinki-NLP. https://blog.csdn.net/jclian91/article/details/114647084
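The `?text=` parameter in the model-page URL above is simply the URL-encoded source sentence (here, "我叫沃尔夫冈，我住在柏林。"). As a small illustration with Python's standard library, the link can be rebuilt like this:

```python
from urllib.parse import quote

# The example sentence encoded in the Hugging Face inference URL above.
text = "我叫沃尔夫冈，我住在柏林。"

# quote() percent-encodes the UTF-8 bytes of each non-ASCII character,
# e.g. 我 -> %E6%88%91, reproducing the ?text=... query string.
url = "https://huggingface.co/Helsinki-NLP/opus-mt-zh-en?text=" + quote(text)
print(url)
```

This is only a convenience for trying the hosted inference widget in a browser; the script below runs the model locally instead.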

# -*- coding: utf-8 -*-
import sys
sys.path.append("/home/sniss/local_disk/stable_diffusion_api/")

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("/home/sniss/local_disk/stable_diffusion_api/models/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("/home/sniss/local_disk/stable_diffusion_api/models/opus-mt-zh-en")

def translation_zh_en(text):
    # Tokenize the source text. The Marian encoder accepts at most 512
    # positions, so truncate anything longer. (The older
    # prepare_seq2seq_batch API is deprecated and has been removed from
    # recent versions of transformers; calling the tokenizer directly is
    # the supported replacement.)
    batch = tokenizer([text], return_tensors="pt",
                      max_length=512, truncation=True)

    # Perform the translation and decode the output
    translation = model.generate(**batch)
    result = tokenizer.batch_decode(translation, skip_special_tokens=True)
    return result

if __name__ == "__main__":
    text = "从时间上看,中国空间站的建造比国际空间站晚20多年。"
    result = translation_zh_en(text)
    print(result)
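Because anything beyond the 512-token limit is truncated, very long documents would lose content if passed to `translation_zh_en` in one piece. A hypothetical pre-processing helper (not part of the original script) could split the input into sentence-sized chunks first, breaking at Chinese or Western sentence-ending punctuation:

```python
import re

def split_text(text, max_chars=200):
    """Split text into chunks of at most max_chars characters,
    breaking only after sentence-ending punctuation."""
    # Lookbehind keeps the punctuation attached to its sentence.
    sentences = re.split(r"(?<=[。！？.!?])", text)
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current += sentence
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be translated separately and the results joined. The `max_chars=200` default is a rough character budget chosen so that a tokenized chunk stays well under the 512-token limit; the right value depends on the text.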

Origin blog.csdn.net/u012193416/article/details/130315292