AssertionError: We don't support load_inference_model in imperative mode

Error when loading the inference model file:

You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.
Traceback (most recent call last):
  File "test_deploy.py", line 4, in <module>
    program, feed_vars, fetch_vars = fluid.io.load_inference_model('/opt/ugatit_paddle/UGATIT-paddle/save_infer_model',exe)
  File "<decorator-gen-74>", line 2, in load_inference_model
  File "/opt/AI/AN3.5.2/lib/python3.6/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
    return wrapped_func(*args, **kwargs)
  File "/opt/AI/AN3.5.2/lib/python3.6/site-packages/paddle/fluid/framework.py", line 214, in __impl__
    ), "We don't support %s in imperative mode" % func.__name__
AssertionError: We don't support load_inference_model in imperative mode

The cause: Paddle 2.0 defaults to dynamic-graph (imperative) mode, and fluid.io.load_inference_model is only supported in static-graph mode.

Solution: switch back to static-graph mode before loading the model:

paddle.enable_static()
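
For context, here is a minimal sketch of test_deploy.py with the fix applied. The model path is the one from the traceback; the CPU executor is an assumption, since the original script's executor setup is not shown:

import paddle
import paddle.fluid as fluid

# Paddle 2.0 starts in dynamic-graph (imperative) mode by default;
# switch to static-graph mode before calling the fluid.io API.
paddle.enable_static()

# Assumption: run on CPU; swap in fluid.CUDAPlace(0) for GPU.
exe = fluid.Executor(fluid.CPUPlace())

program, feed_vars, fetch_vars = fluid.io.load_inference_model(
    '/opt/ugatit_paddle/UGATIT-paddle/save_infer_model', exe)

With paddle.enable_static() called first, the assertion no longer triggers and the inference program loads normally.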


Reposted from blog.csdn.net/zhou_438/article/details/109673752