🚀 The feature
We need a way to experiment with different chunking and ingestion strategies. Given a set of "raw" documents to ingest into a vector database, there are several ways of transforming them into the documents we actually vectorize: ingest them as-is, split them into fixed-size chunks (e.g. 10 lines each), or apply other pre-processing to extract keywords and relevant phrases.
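One possible shape for this: treat each transformation as a pluggable strategy that maps a raw document to the list of documents that get vectorized. A minimal Python sketch, where every name is hypothetical (not an existing API in this project):

```python
from typing import Callable, Iterable, List

# Hypothetical interface: a chunking strategy turns one raw document
# into the list of documents that will actually be vectorized.
ChunkingStrategy = Callable[[str], List[str]]

def as_is(doc: str) -> List[str]:
    """Ingest the raw document unchanged."""
    return [doc]

def line_chunks(n: int) -> ChunkingStrategy:
    """Build a strategy that splits a document into chunks of at most n lines."""
    def chunk(doc: str) -> List[str]:
        lines = doc.splitlines()
        return ["\n".join(lines[i:i + n]) for i in range(0, len(lines), n)]
    return chunk

def ingest(raw_docs: Iterable[str], strategy: ChunkingStrategy) -> List[str]:
    """Apply one strategy to every raw document before vectorization."""
    return [chunk for doc in raw_docs for chunk in strategy(doc)]
```

Swapping `strategy` (e.g. `as_is` vs. `line_chunks(10)` vs. a keyword extractor with the same signature) would then let an evaluation harness compare retrieval quality across strategies without changing the ingestion pipeline itself.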
Motivation, pitch
This request came out of conversations with customers about their needs around vector DB evaluation at scale.
Alternatives
No response
Additional context
No response