V4 Wishlist #110

Open
ashvardanian opened this issue Mar 3, 2024 · 3 comments

Comments

ashvardanian (Owner) commented Mar 3, 2024

Features

  • Better hashing algorithms
  • Automata-based fuzzy searching algorithms

Breaking naming and organizational changes

  • Rename edit_distance to levenshtein_distance to match the Hamming distance naming

Any other requests?

happysalada commented Mar 16, 2024

Reading the README, I didn't see any mention of processing compressed files.
Of course, the file can be decompressed first in some other way, but it would be nice to have a way to process a compressed file without having to load it fully into memory.
To be more specific, here is how you could process it line by line in Python:

import gzip

with gzip.open('input.gz', 'rt') as f:
    for line in f:
        ...  # process each decompressed line

But what if I am going to skip over most of those lines anyway? Having some form of efficient search through compressed files would be nice.
Thank you for making this project open source!
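For reference, the same streaming pattern is available in C through zlib's gzFile helpers, which also avoid loading the whole file into memory; a minimal sketch (the "needle" filter is just an illustration):

#include <stdio.h>
#include <string.h>
#include <zlib.h>

/* Stream-decompress input.gz line by line, printing only matching lines,
   without ever materializing the whole decompressed file in memory. */
int main(void) {
    gzFile file = gzopen("input.gz", "rb");
    if (!file) return 1;
    char line[4096];
    while (gzgets(file, line, (int)sizeof line))
        if (strstr(line, "needle")) fputs(line, stdout);
    gzclose(file);
    return 0;
}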

0xqd commented Apr 1, 2024

Hi there! I would love to know what the current hashing algorithms are. And for automata-based fuzzy searching: will it perform better than the current string search algorithms on paper and in design? Thanks!

ashvardanian (Owner, Author) commented

@happysalada, searching through compressed data is an attractive feature proposition. I've been thinking about it a lot over the years, but it's not trivial for most compression formats. I will keep it in mind.

@0xqd, we currently implement Rabin-style hashing and fingerprinting, documented here. The header file also provides some details:

/**
 *  @brief  Computes the Karp-Rabin rolling hashes of a string, supplying them to the provided `callback`.
 *          Can be used for similarity scores, search, ranking, etc.
 *
 *  Rabin-Karp-like rolling hashes can have a very high level of collisions, depending
 *  on the choice of bases and the prime number. That's why two hashes from the same
 *  family, with different bases, are often used together.
 *
 *  1. Kernighan and Ritchie's function uses 31, a prime close to the size of the English alphabet.
 *  2. To be friendlier to byte arrays and UTF-8, we use 257 for the second function.
 *
 *  Choosing the right ::window_length is task- and domain-dependent. For example, most English words are
 *  between 3 and 7 characters long, so a window of 4 bytes would be a good choice. For DNA sequences,
 *  the ::window_length might be a multiple of 3, as codons are 3 bytes (nucleotides) long.
 *  With such minimalistic alphabets of just four characters (AGCT), longer windows might be needed.
 *  For protein sequences the alphabet is 20 characters long, so the window can be shorter than for DNA.
 *
 *  @param text             String to hash.
 *  @param length           Number of bytes in the string.
 *  @param window_length    Length of the rolling window in bytes.
 *  @param window_step      Step of reported hashes. @b Must be a power of two. Should be smaller than `window_length`.
 *  @param callback         Function receiving the start & length of a substring, the hash, and the `callback_handle`.
 *  @param callback_handle  Optional user-provided pointer to be passed to the `callback`.
 *  @see    sz_hashes_fingerprint, sz_hashes_intersection
 */
SZ_DYNAMIC void sz_hashes(sz_cptr_t text, sz_size_t length, sz_size_t window_length, sz_size_t window_step, //
                          sz_hash_callback_t callback, void *callback_handle);

I am looking into alternative algorithms as well, but want the primary hash and the rolling hash to use the same schema.
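To make the callback contract concrete, here is a minimal usage sketch; the counting callback and the 4-byte window are purely illustrative, and `sz_hash_callback_t` is assumed to receive the substring start, length, hash, and handle, as the docstring above describes:

#include <stdio.h>
#include <string.h>
#include <stringzilla/stringzilla.h>

/* Illustrative callback: counts the reported hashes and prints each one. */
static void on_hash(sz_cptr_t start, sz_size_t length, sz_u64_t hash, void *handle) {
    (void)start, (void)length;
    *(sz_size_t *)handle += 1;
    printf("hash: 0x%016llx\n", (unsigned long long)hash);
}

int main(void) {
    char const *text = "the quick brown fox";
    sz_size_t count = 0;
    /* A 4-byte window suits short English words; a step of 1 (a power of two,
       smaller than the window) reports a hash at every byte offset. */
    sz_hashes(text, strlen(text), 4, 1, on_hash, &count);
    printf("%zu hashes reported\n", (size_t)count);
    return 0;
}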
