Applying a Paddle model for inference

1. First, save the model

fluid.io.save_inference_model('../bot15.model', ["vec"], [pred1,pred2,pred3], exe)

The first parameter is the directory the model is saved to, and the second is the list of feed variable names, i.e. the inputs that inference data will be fed into;

the third parameter is the list of target variables to fetch (the prediction outputs);

the fourth parameter is the executor that runs the current program.
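
As a minimal illustration of these four arguments, a sketch like the following would work. Note this is not the original network (the post's model is a stacked bidirectional LSTM); a sequence-pooling layer plus three small fc heads stand in for it, and only the save_inference_model call itself matters here. The names vec, pred1, pred2 and pred3 follow the call above.

# Minimal sketch of the save step; the network is a stand-in, not the original model.
import paddle.fluid as fluid

vec = fluid.layers.data(name='vec', shape=[1], dtype='float32', lod_level=1)
pooled = fluid.layers.sequence_pool(input=vec, pool_type='average')
pred1 = fluid.layers.fc(input=pooled, size=1)
pred2 = fluid.layers.fc(input=pooled, size=1)
pred3 = fluid.layers.fc(input=pooled, size=1)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())   # initialize parameters
# ... training would happen here ...

# directory, feed variable names, prediction (target) variables, executor
fluid.io.save_inference_model('../bot15.model', ["vec"], [pred1, pred2, pred3], exe)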

2. Load the model for inference

fluid.io.load_inference_model('../bot15.model', exe)

The exe here can be a brand-new executor; it does not need to be the one used during training.
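
For example, loading in a separate script with a freshly created executor could look like this (a sketch; the three return values are the inference program, the names of the feed variables, and the fetch targets saved earlier):

import paddle.fluid as fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)   # a brand-new executor, unrelated to training

[inference_program, feed_target_names, fetch_targets] = \
    fluid.io.load_inference_model('../bot15.model', exe)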

The complete code is as follows:

from __future__ import print_function
import paddle
import paddle.fluid as fluid
import numpy as np
import sys
import math

EMB_DIM = 81      # dimension of the word vectors
HID_DIM = 512     # dimension of the hidden layer
STACKED_NUM = 3   # number of layers in the stacked bidirectional LSTM
BATCH_SIZE = 128  # batch size

# load the pre-computed sample vectors
result_vec = np.load('../result_vec2.npy', allow_pickle=True).tolist()

use_cuda = False
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()

exe = fluid.Executor(place)
inference_scope = fluid.core.Scope()

d = []
for i in result_vec:
    # wrap every element of the sample's vector in its own list,
    # so it can later be turned into a LoD tensor
    k = [[o] for o in i[1]]
    d.append(k)

# example input for pred(): one sample converted to a float32 array
lod = np.array(d[0], dtype='float32')

def pred(o):
    # load_inference_model returns the inference program, the names of the
    # feed variables and the fetch targets saved by save_inference_model
    [inferencer, feed_target_names,
     fetch_targets] = fluid.io.load_inference_model('../bot15.model', exe)

    # feed one sample as a LoD tensor (sequence length 81) and fetch the
    # three prediction outputs
    results = exe.run(inferencer,
                      feed={'vec': fluid.create_lod_tensor(o, [[81]], place)},
                      fetch_list=fetch_targets, return_numpy=False)
    print(np.array(results[0]).tolist()[0][0], ',',
          np.array(results[1]).tolist()[0][0], ',',
          np.array(results[2]).tolist()[0][0])
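
Calling the helper on the example input prepared above would then look like this (assuming, as the [[81]] sequence length suggests, that every sample vector contains 81 values):

pred(lod)                 # prediction for one example sample
for sample in d:          # or run over the whole prepared set
    pred(np.array(sample, dtype='float32'))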

Reposted from www.cnblogs.com/yangyang12138/p/12700522.html