Error when deploying a static-graph model for inference with C# paddle_inference #72369
Comments
Hello! C# is not a language that paddle_inference officially maintains, so I cannot pinpoint the cause from the error message alone. My guess is that this is not a language or framework problem, but rather a misconfiguration of the model itself or incorrect parameters. Could you provide more information about …
Hello, after building the C++ deployment with VS2019, calling the main executable in the out/Release directory for inference produces the following error:
E:\PaddlePAddle\out\Release>.\main --model_dir=E:\PaddlePAddle\rcnn\inference_model\output\config --image_file=E:/PaddlePAddle/images/0.jpg
[libprotobuf ERROR C:\cache_release\third_party\cuda116\3ad0da47b86006a7bf40b5685bc39f90\protobuf\src\extern_protobuf\src\google\protobuf\message_lite.cc:121] Can't parse message of type "paddle.framework.proto.ProgramDesc" because it is missing required fields: blocks[13].ops[71].attrs[13].type
[libprotobuf ERROR C:\cache_release\third_party\cuda116\3ad0da47b86006a7bf40b5685bc39f90\protobuf\src\extern_protobuf\src\google\protobuf\message_lite.cc:121] Can't parse message of type "paddle.framework.proto.ProgramDesc" because it is missing required fields: blocks[13].ops[71].attrs[13].type
--------------------------------------
C++ Traceback (most recent call last):
--------------------------------------
Not support stack backtrace yet.
----------------------
Error Message Summary:
----------------------
InvalidArgumentError: Failed to parse program_desc from binary string.
[Hint: Expected desc_.ParseFromString(binary_str) == true, but received desc_.ParseFromString(binary_str):0 != true:1.] (at ..\paddle\fluid\framework\program_desc.cc:103)
The model is the static-graph model from https://aistudio.baidu.com/datasetdetail/240620.
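For reference, one way to narrow this down is to load the model in isolation with the paddle_inference C++ API, without any of the detection demo's pre/post-processing. This is only a minimal sketch, assuming the combined-format files model.pdmodel / model.pdiparams sit inside the config directory (the actual file names in that dataset may differ):

#include <iostream>
#include "paddle_inference_api.h"

int main() {
    paddle_infer::Config config;
    // Assumed file names; replace with whatever actually sits in output/config.
    config.SetModel("E:/PaddlePAddle/rcnn/inference_model/output/config/model.pdmodel",
                    "E:/PaddlePAddle/rcnn/inference_model/output/config/model.pdiparams");
    // CreatePredictor parses the ProgramDesc; the InvalidArgumentError above
    // would surface here if the .pdmodel file is truncated or from a mismatched version.
    auto predictor = paddle_infer::CreatePredictor(config);
    std::cout << "Model loaded, number of inputs: "
              << predictor->GetInputNames().size() << std::endl;
    return 0;
}

If this minimal program reproduces the same parse error, the problem lies in the model files or a version mismatch rather than in the demo's calling code.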
Based on the error message, our inference colleague thinks the model failed to load. In fact, I was able to run inference successfully with the following steps:
cd PaddleDetection
# unzip output.zip into the current directory
# unzip wsi_coco.zip into the dataset directory
python deploy/python/infer.py --model_dir=./output/config --device=GPU --image_file=./dataset/wsi_coco/train/0.jpg
But I have not tried C++ or C#, and I have not run this on Windows, so I cannot rule out errors in the tutorial code, or the runtime environment or arguments, causing the model load to fail. (In principle, since Python can run it, the model itself is fine, and paddle inference is widely used, so the framework is unlikely to be at fault; that leaves the calling code as the most likely culprit.) I suggest you use Python instead for now, or provide more information about the model version, how it was exported, and the paddle inference version so we can investigate.
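Regarding the paddle inference version asked about above: the linked prediction library can report its own version at runtime. A minimal sketch, assuming the paddle_infer::GetVersion() helper declared in paddle_inference_api.h is available in the library you linked against:

#include <iostream>
#include "paddle_inference_api.h"

int main() {
    // Prints the version/commit of the linked prediction library; this should
    // match (or at least not be older than) the Paddle version used to export the model.
    std::cout << paddle_infer::GetVersion() << std::endl;
    return 0;
}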
Yes, this model runs inference correctly with Python on Windows; it is only the C++ build that fails to run.
I reproduced it again today on Linux; it seems the CPU version of the C++ demo can run, although there are some pitfalls during compilation. Roughly following these steps:
Build:
$ cd cpp
$ bash scripts/build.sh
Run:
$ build/main --model_dir=../output/config --image_file=../wsi_coco/val/1017.jpg
total images = 1, batch_size = 1, total steps = 1
class=0 confidence=0.6409 rect=[732 92 980 405]
class=1 confidence=0.7778 rect=[0 1614 205 1884]
../wsi_coco/train/1017.jpg The number of detected box: 2
Visualized output saved as output/1017.jpg
The GPU version's environment dependencies are too complicated, so I did not reproduce that. You could check again what might be wrong on your side; the tutorial code is not very polished to begin with, and this is probably not a paddle_inference problem.
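For reference, the CPU run above and a GPU run normally differ only in the predictor configuration. A minimal sketch, assuming the same model layout as in the run above (the file names inside model_dir are assumptions):

#include <string>
#include "paddle_inference_api.h"

int main() {
    std::string model_dir = "../output/config";
    paddle_infer::Config config;
    // Assumed file names inside model_dir; adjust to the exported model.
    config.SetModel(model_dir + "/model.pdmodel", model_dir + "/model.pdiparams");
    // CPU path (what was verified above):
    config.DisableGpu();
    // GPU path would instead be:
    // config.EnableUseGpu(200 /* initial GPU memory pool, MB */, 0 /* device id */);
    auto predictor = paddle_infer::CreatePredictor(config);
    return 0;
}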
Hello, may I ask about “
Change it like this: cpp/src/main.cc
Please ask your question
Error when deploying a static-graph model for inference with C# paddle_inference
The error occurs when invoking the main executable with:
.\main --model_dir=E:\PaddlePAddle\rcnn\inference_model\output\config --image_file=E:/PaddlePAddle/images/0.jpg