
Local llm #17

Closed · wants to merge 2 commits into from

Conversation

itsPreto

@itsPreto itsPreto commented Dec 4, 2023

Adding local llm support

@CLAassistant

CLAassistant commented Dec 4, 2023

CLA assistant check
All committers have signed the CLA.

@SkalskiP
Collaborator

SkalskiP commented Dec 4, 2023

Hi @itsPreto 👋🏻 Thanks for your interest in Maestro. Going local is what we want to do!

  • Which local LMM did you connect?
  • Could you share example code showing the part where you use the LMM?

@itsPreto
Author

itsPreto commented Dec 4, 2023

I used what I'm familiar with, which is the llama.cpp server with ShareGPT4V-7B, but it should work with any other backend/model.

@SkalskiP you just need to define your custom payload with the parameters you want, then call the prompt_image_local function with that payload and the localhost URL.

You can see this in examples/image.py:

# Custom payload function for local server
def custom_payload_func(image_base64, prompt, system_prompt):
    return {
        "prompt": f"{system_prompt}. USER:[img-12]{prompt}\nASSISTANT:",
        "image_data": [{"data": image_base64, "id": 12}],
        "n_predict": 256,
        "top_p": 0.5,
        "temp": 0.2
    }

# Convert image to base64 and send request to local server
response = prompt_image_local(marked_image, "Find the crowbar", "http://localhost:8080/completion", custom_payload_func)
print(response)
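For reference, the base64 conversion mentioned in the comment above can be done with the standard library alone. This is only a sketch of that step, not code from this PR: the helper name image_to_base64 and the file-path input are assumptions, and the llama.cpp server is expected to receive the raw image bytes base64-encoded in the "data" field of each "image_data" entry.

```python
import base64

def image_to_base64(image_path: str) -> str:
    """Read an image file and return its contents as a base64 string.

    Hypothetical helper: reads the raw bytes and encodes them so they
    can be placed in the "data" field of an "image_data" entry.
    """
    with open(image_path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")
```

If marked_image is an in-memory object (e.g. a PIL image) rather than a file on disk, it would first need to be serialized to bytes (for instance via save() into a BytesIO buffer) before encoding.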

@itsPreto itsPreto closed this by deleting the head repository May 18, 2024
3 participants