The current implementation of the Qbeast Format on Spark does not support appending new data with indexed columns different from the ones specified at the beginning.
Since data is constantly evolving and business needs may change, it makes sense for the Qbeast index to evolve in the same direction. Moreover, the format is designed to support different index versions (called Revisions) and to query each version independently.
The to-do's of this issue are:
- Remove the constraint that checks whether the `columnsToIndex` are equal to the currently indexed columns.
- Adjust the `qbeast_hash` accordingly. Once the hash is removed entirely (as proposed in Overhead of qbeast_hash filtering when doing a Sample #68), this would no longer be needed.
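The desired behavior can be sketched with qbeast-spark's write API (a minimal sketch; `df1`, `df2`, and the table path are hypothetical, while the `qbeast` format and the `columnsToIndex` option follow the project's documented usage):

```scala
// First write: index the table by columns a and b (this creates Revision 1)
df1.write
  .format("qbeast")
  .option("columnsToIndex", "a,b")
  .save("/tmp/qbeast_table")

// Append with a different set of indexed columns.
// Today this is rejected by the columnsToIndex equality check;
// with this change it would instead create a new Revision of the index.
df2.write
  .mode("append")
  .format("qbeast")
  .option("columnsToIndex", "c,d")
  .save("/tmp/qbeast_table")
```

Since each Revision keeps its own indexed columns, queries could then be resolved against each Revision independently, as the format already allows.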