-
Hi all,
But when I call the wasm-lib from the host, I get an error.
So I am wondering how to make wasmedge_bindgen work in this case, and how the host function can get the inference result from the WasmEdge VM?
-
@KeeganJin If you'd like to use
-
From the error message, you didn't register the wasm module that holds the `pytorch_infer` function:

```rust
use wasmedge_sdk::Module;

// load the wasm module that holds the `pytorch_infer` func
let module = Module::from_file(None, "/path/to/your/some.wasm")?;

// register the module as an active module into the vm
let mut vm = vm.register_module(None, module)?;
vm.wasi_module_mut()
    .expect("Not found wasi module")
    .initialize(
        Some(vec![wasm_file, model_bin, image_file]),
        None,
        Some(vec![dir_mapping]),
    );

let vm = VmDock::new(vm);
vm.run_func("pytorch_infer", params!())?;
```

In addition, I drafted a new example for the
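For the host to get the inference result back, the guest-side function also needs to be exported through the `#[wasmedge_bindgen]` macro, which marshals its arguments and return value so the host's `VmDock::run_func` can read the result. A minimal, untested sketch of the guest (wasm) side — the function name `pytorch_infer` and the `String` return type are assumptions, not your original code:

```rust
// Guest side: compiled to wasm32-wasi, using the
// `wasmedge_bindgen` and `wasmedge_bindgen_macro` crates.
use wasmedge_bindgen::*;
use wasmedge_bindgen_macro::*;

// The macro wraps the function so the return value is passed
// through linear memory back to the host.
#[wasmedge_bindgen]
pub fn pytorch_infer() -> String {
    // ... run the model here and return the result ...
    "inference result".to_string()
}
```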
-
As your code shows, you register `module` as a named module, called "extern". However, you invoke `call_infer` from an active module that does not exist. Fix: replace `.register_module(Some("extern"), module)?;` with `.register_module(None, module)?;`, then try again.

```rust
#[cfg(all(target_os = "linux", target_arch = "x86_64"))]
fn infer() -> Result<(), Box<dyn std::error::Error>> {
    ...
    // load wasm module from file
    let module = Module::from_file(Some(&config), wasm_file)?;

    // build a Vm
    let mut vm = VmBuilder::new()
        .with_config(config)
        .with_plugin_wasi_nn()
        .build()?
        .register_module(Some("extern"), module)?; // <== registers a named module, called "extern"
    ...
    let vm = VmDock::new(vm);

    // call the call_infer in ml_pytorch_lib.wasm
    vm.run_func("call_infer", params!())?; // <== calls "call_infer" from an active module that doesn't exist
    // vm.run_func(Some("extern"), "_start", params!())?;
    Ok(())
}
```
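To see why the named registration breaks the plain lookup, here is a toy model in plain Rust (not the wasmedge-sdk API): anonymous registration fills the "active" slot that a bare `run_func` searches, while registration under a name does not:

```rust
use std::collections::HashMap;

// Toy model: modules are registered either as the single anonymous
// "active" module (key None) or under a name (key Some(name)).
type Registry = HashMap<Option<String>, Vec<String>>;

fn register_module(reg: &mut Registry, name: Option<&str>, funcs: &[&str]) {
    reg.insert(
        name.map(String::from),
        funcs.iter().map(|s| s.to_string()).collect(),
    );
}

// A lookup with no module name searches only the active (anonymous) module.
fn run_func(reg: &Registry, mod_name: Option<&str>, func: &str) -> Result<(), String> {
    match reg.get(&mod_name.map(String::from)) {
        Some(funcs) if funcs.iter().any(|f| f == func) => Ok(()),
        _ => Err(format!("function '{}' not found", func)),
    }
}

fn main() {
    let mut reg = Registry::new();
    // Registered under the name "extern", like register_module(Some("extern"), ...)
    register_module(&mut reg, Some("extern"), &["call_infer"]);
    // A bare lookup in the active module fails ...
    assert!(run_func(&reg, None, "call_infer").is_err());
    // ... but registering anonymously makes the bare lookup succeed.
    register_module(&mut reg, None, &["call_infer"]);
    assert!(run_func(&reg, None, "call_infer").is_ok());
    println!("ok");
}
```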