There was previously an issue asking for a way to use embeddings, rather than tokens, as input when generating a response. llama.cpp now supports embeddings as input, as shown below.
typedef struct llama_batch {
    int32_t n_tokens;

    llama_token  *  token;
    float        *  embd;       // if this is set and token is NULL, embeddings are used as input
    llama_pos    *  pos;
    int32_t      *  n_seq_id;
    llama_seq_id ** seq_id;
    int8_t       *  logits;

    llama_pos    all_pos_0;  // used if pos == NULL
    llama_pos    all_pos_1;  // used if pos == NULL
    llama_seq_id all_seq_id; // used if seq_id == NULL
} llama_batch;
Since LLamaSharp already has a binding for this struct, what remains is to add an API that lets executors accept embeddings as input.
I've been planning to look into this for a while, since it's required for the BatchedExecutor to support llava. My plan has been to create a new batch class (LlamaBatchEmbeddings). This would probably be a lot simpler than the existing batch (LlamaBatch).
That's great! I didn't realize it was required for the BatchedExecutor to support llava, so I marked it as a good first issue. If you'd like, you could split the task into sub-tasks and label the easy, non-urgent ones as good first issues. :)