Phi-2 Inference has some issues. #380
Comments
Turns out this issue occurs with every other model that I use except for Gemma.
@areebbashir Hello, recently I have also been trying to run the phi-2 model on an Android device. However, I ran into an error while converting the model to a MediaPipe-compatible format: the file where model_ckpt_util is located cannot be found. My Python version is 3.9, and I tried mediapipe versions 0.10.11 and 0.10.13; neither can run the conversion script (which starts with `import mediapipe as mp`) properly. Can you point out the problem? Thank you very much!
I have not encountered this issue as of yet, but I did hit some other import errors when I pip installed mediapipe, and worked around them. Also, you are putting the wrong path in input_ckpt. It needs to be the absolute path to the folder containing the phi-2 model files. Try passing the absolute path.
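The suggestion above might look like the sketch below, following the shape of the MediaPipe 0.10.x `genai.converter` API. The paths are hypothetical placeholders, and `build_conversion_config` is a helper name invented for this example — substitute your own values and verify the parameters against the MediaPipe LLM conversion docs for your version.

```python
import os

# Hypothetical placeholder paths -- replace with your own. The key point
# from the comment above: input_ckpt must be an ABSOLUTE path to the
# directory containing the downloaded phi-2 checkpoint files.
INPUT_CKPT = "/data/models/phi-2/"
OUTPUT_DIR = "/data/models/phi-2-converted/"


def build_conversion_config():
    # Lazy import so the sketch can be read without mediapipe installed.
    from mediapipe.tasks.python.genai import converter

    return converter.ConversionConfig(
        input_ckpt=INPUT_CKPT,          # absolute path to checkpoint folder
        ckpt_format="safetensors",      # format of the downloaded weights
        model_type="PHI_2",
        backend="cpu",                  # or "gpu", depending on the target
        output_dir=OUTPUT_DIR,
        combine_file_only=False,
        vocab_model_file=INPUT_CKPT,
        output_tflite_file=os.path.join(OUTPUT_DIR, "phi2_cpu.bin"),
    )

# To actually run the conversion:
#   from mediapipe.tasks.python.genai import converter
#   converter.convert_checkpoint(build_conversion_config())
```

The conversion itself is left as a comment so the path handling can be checked independently of a mediapipe install.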
I have a feeling the 'stop' token is potentially different for the non-Gemma models and would need to be updated in the app, but I'll need to verify that. I'll add this to my TODO list for after I/O!
Hey @vittalitty, if you're still running into this issue after the feedback in the previous comment, can you put it into a new issue for tracking? Thanks!
Thanks. In the meantime, can you suggest what I could try from my end?
You'll want to look up the info on that model on Hugging Face (or wherever you got it) to find out what the end-of-text (EOD) token should be, then replace it in the app (assuming that's the issue).
When I'm running phi-2 on device, there is an issue while generating responses. When I feed it a question it starts generating the response (although quite slowly), but it just doesn't stop.
In ChatViewModel, inside the sendMessage API, inferenceModel.partialResults always sends done as false, so the model keeps asking itself follow-up questions and answering them on its own.
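One client-side workaround, assuming the stop-token theory from this thread is right: phi-2's tokenizer uses `<|endoftext|>` as its end-of-text marker, so the app can truncate the streamed output itself when that marker appears, even while the engine's done flag stays false. The `accumulate` helper below is hypothetical, written in Python to illustrate the idea (the sample app itself is Kotlin).

```python
# End-of-text marker used by phi-2's tokenizer.
EOS = "<|endoftext|>"


def accumulate(partial_chunks):
    """Concatenate streamed chunks, stopping at the EOS marker.

    Returns (text, done): the text up to (excluding) the marker, and
    whether the marker was actually seen in the stream.
    """
    text = ""
    for chunk in partial_chunks:
        text += chunk
        if EOS in text:
            # Drop the marker and anything the model "hallucinated" after it.
            return text.split(EOS, 1)[0], True
    return text, False
```

Example: `accumulate(["Paris is", " the capital.", "<|endoftext|>Question:"])` yields `("Paris is the capital.", True)`, while a stream with no marker yields `done=False`.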