
pp-shituv2: after deploying with PaddleServing and starting the service, running python3.7 pipeline_http_client.py reports /home/aistudio/PaddleClas/deploy/paddleserving/recognition {'err_no': 8, 'err_msg': "(data_id=0 log_id=0) [det|0] Failed to postprocess: 'scale_factor.lod'", 'key': [], 'value': [], 'tensors': []} #1989

Open
sloyqi opened this issue Mar 18, 2024 · 3 comments

sloyqi commented Mar 18, 2024

When deployed with PaddleServing, starting the HTTP client reports an error:

/home/aistudio/PaddleClas/deploy/paddleserving/recognition {'err_no': 8, 'err_msg': "(data_id=0 log_id=0) [det|0] Failed to postprocess: 'scale_factor.lod'", 'key': [], 'value': [], 'tensors': []}
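For reference, the client does little more than POST a base64-encoded image to the pipeline's HTTP endpoint. Below is a minimal sketch of that request, assuming the service name "recognition" and the http_port 18080 from config.yml; the image path is a placeholder, and the shipped pipeline_http_client.py may differ in detail.

import base64
import json

import requests  # assumed to be installed in the client environment

url = "http://127.0.0.1:18080/recognition/prediction"
with open("test.jpg", "rb") as f:  # placeholder test image
    img_b64 = base64.b64encode(f.read()).decode("utf8")

# The pipeline HTTP service expects a JSON body with parallel key/value lists.
payload = {"key": ["image"], "value": [img_b64]}
resp = requests.post(url, data=json.dumps(payload))
print(resp.json())  # on failure this is the err_no/err_msg dict shown above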

The recognition inference model's serving_server_conf.prototxt is:
feed_var {
  name: "x"
  alias_name: "x"
  is_lod_tensor: false
  feed_type: 1
  shape: 3
  shape: 224
  shape: 224
}
fetch_var {
  name: "scale_factor"
  alias_name: "features"
  is_lod_tensor: false
  fetch_type: 1
  shape: 512
}

The general detection model's .prototxt is:
feed_var {
  name: "im_shape"
  alias_name: "im_shape"
  is_lod_tensor: false
  feed_type: 1
  shape: 2
}
feed_var {
  name: "image"
  alias_name: "image"
  is_lod_tensor: false
  feed_type: 1
  shape: 3
  shape: 416
  shape: 416
}
feed_var {
  name: "scale_factor"
  alias_name: "scale_factor"
  is_lod_tensor: false
  feed_type: 1
  shape: 2
}
fetch_var {
  name: "save_infer_model/scale_0.tmp_1"
  alias_name: "save_infer_model/scale_0.tmp_1"
  is_lod_tensor: true
  fetch_type: 1
  shape: -1
}
fetch_var {
  name: "save_infer_model/scale_1.tmp_1"
  alias_name: "save_infer_model/scale_1.tmp_1"
  is_lod_tensor: false
  fetch_type: 2
}

sloyqi commented Mar 18, 2024

The config.yml is:
#worker_num: maximum concurrency. When build_dag_each_worker=True, the framework creates worker_num processes, each building its own gRPC server and DAG
##When build_dag_each_worker=False, the framework sets the main thread's gRPC thread pool to max_workers=worker_num
worker_num: 1

#HTTP port. rpc_port and http_port must not both be empty. When rpc_port is available and http_port is empty, http_port is not generated automatically
http_port: 18080
rpc_port: 9993

dag:
    #op resource type: True for the thread model, False for the process model
    is_thread_op: False
op:
    imagenet:
        #concurrency: thread-level concurrency when is_thread_op=True, otherwise process-level
        concurrency: 1

        #when an op config has no server_endpoints, the local service config is read from local_service_conf
        local_service_conf:

            #uci model path
            model_config: ResNet50_vd_serving

            #compute device type: decided by devices (CPU/GPU) when unset; 0=cpu, 1=gpu, 2=tensorRT, 3=arm cpu, 4=kunlun xpu
            device_type: 1

            #compute device IDs: CPU inference when devices is "" or unset; GPU inference when devices is "0" or "0,1,2", naming the GPU cards to use
            devices: "0" # "0,1"

            #client type: brpc, grpc, or local_predictor; local_predictor does not start a Serving service and runs prediction in-process
            client_type: local_predictor

            #fetch result list; use the alias_name of fetch_var in client_config
            fetch_list: ["prediction"]
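One way to check that each op's fetch_list matches the alias_name fields in its serving_server_conf.prototxt is a small script along these lines. It is a sketch only: the file locations, the PyYAML dependency, and the assumption that model_config points at a directory containing serving_server_conf.prototxt are all specific to this illustration; adjust paths to your deployment.

import re

import yaml  # PyYAML, assumed available

def fetch_aliases(prototxt_path):
    """Collect alias_name values inside fetch_var blocks of a serving prototxt."""
    aliases, in_fetch = [], False
    with open(prototxt_path) as f:
        for line in f:
            stripped = line.strip()
            if stripped.startswith("fetch_var"):
                in_fetch = True
            elif in_fetch and stripped == "}":
                in_fetch = False
            elif in_fetch:
                m = re.search(r'alias_name:\s*"([^"]+)"', stripped)
                if m:
                    aliases.append(m.group(1))
    return aliases

with open("config.yml") as f:
    cfg = yaml.safe_load(f)

# Compare what each op asks for with what its client config actually exposes.
for op_name, op_cfg in cfg.get("op", {}).items():
    local_conf = op_cfg.get("local_service_conf", {})
    model_dir = local_conf.get("model_config", "")
    wanted = local_conf.get("fetch_list", [])
    found = fetch_aliases(model_dir + "/serving_server_conf.prototxt")
    print(op_name, "fetch_list:", wanted, "aliases in prototxt:", found)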


sloyqi commented Mar 19, 2024

The error reported in pipeline.log shows:
Traceback (most recent call last):
  File "/home/aistudio/.data/webide/pip/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1105, in _run_postprocess
    logid_dict.get(data_id))
  File "recognition_web_service.py", line 94, in postprocess
    boxes = self.img_postprocess(fetch_dict, visualize=False)
  File "/home/aistudio/.data/webide/pip/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 427, in __call__
    self.clsid2catid)
  File "/home/aistudio/.data/webide/pip/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 344, in _get_bbox_result
    lod = [fetch_map[fetch_name + '.lod']]
KeyError: 'scale_factor.lod'
ERROR 2024-03-18 19:23:33,028 [dag.py:410] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [det|0] Failed to postprocess: 'scale_factor.lod'
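The failing access is the lod lookup at image_reader.py line 344: the postprocess asks the fetch map for "<fetch_name>.lod". A minimal sketch that reproduces the same KeyError with dummy data:

# Dummy fetch map shaped like the one the det op apparently received: it holds
# a plain "scale_factor" tensor but no companion "scale_factor.lod" entry, so
# the lookup below fails exactly as in pipeline.log.
fetch_map = {"scale_factor": [[1.0, 1.0]]}
fetch_name = "scale_factor"
lod = [fetch_map[fetch_name + ".lod"]]  # -> KeyError: 'scale_factor.lod'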
