
LabelModel should support loading sparse matrices #1625

Open
talolard opened this issue Feb 7, 2021 · 3 comments
Labels: feature request, help wanted, no-stale


talolard commented Feb 7, 2021

Problem I want to solve

I've found it easy to generate millions of labels with label functions, but loading them into Snorkel is hard.
The problem is the conversion to augmented format and (for training) the calculation of the O matrix.

Describe the solution you'd like

In addition to letting the user load the full label matrix (n_docs, n_funcs), we can let the user load the indicator matrix (n_docs, n_funcs * n_labels) in sparse format.
e.g. the user would input a list of tuples (doc_id, func_id * num_labels + label_id) and populate a sparse matrix from them.
This makes the L.T @ L calculation cheap and saves a lot of time and memory when building the indicator matrix.
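For concreteness, a minimal sketch of this with scipy.sparse (augmented_from_tuples and sparse_objective are illustrative names, not existing Snorkel API):

import numpy as np
from scipy import sparse

def augmented_from_tuples(index_tuples, num_docs, num_funcs, num_labels):
    # index_tuples: iterable of (doc_id, func_id * num_labels + label_id).
    rows, cols = zip(*index_tuples)
    data = np.ones(len(rows), dtype=np.float32)
    # CSR indicator matrix of shape (n_docs, n_funcs * n_labels);
    # rows with no entries are documents where every LF abstained.
    return sparse.csr_matrix(
        (data, (rows, cols)), shape=(num_docs, num_funcs * num_labels)
    )

def sparse_objective(L_aug):
    # O = L_aug.T @ L_aug / n, computed entirely in sparse form.
    return (L_aug.T @ L_aug) / L_aug.shape[0]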

PyTorch supports sparse tensors, so we could even do training and inference without the memory overhead of the dense L matrix.
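A sketch of moving the sparse matrix from above into torch (again illustrative only, not Snorkel code):

import numpy as np
import torch

def to_torch_sparse(L_aug):
    # Convert the scipy CSR matrix from the sketch above into a COO sparse
    # tensor that torch can use in matmuls during training/inference.
    coo = L_aug.tocoo()
    indices = torch.as_tensor(np.vstack([coo.row, coo.col]), dtype=torch.long)
    values = torch.as_tensor(coo.data, dtype=torch.float32)
    return torch.sparse_coo_tensor(indices, values, size=coo.shape)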

Example:

I calculate and store the label function outputs in SQL, so it's easy to generate that list of tuples.

Caveat

This would make modelling dependencies between LFs harder, but since _create_tree is degenerate that doesn't seem to be an issue in practice.

Describe alternatives you've considered

The other alternative is some "big-data" solution, but that's a lot of friction for something I can do so simply.

Additional context

I'm implementing this anyway for my own fun; happy to contribute it back if there's interest.

@bhancock8
Member

Thanks for suggesting this, @talolard! Updating this operation to accept sparse matrix inputs is something we've had in our backlog, so a PR here is certainly welcome.


talolard commented Feb 9, 2021

Awesome.
API design question:
The way things work now, there are calls to set_constants in the fit and predict methods.
That doesn't work cleanly with a sparse format.

My personal taste is to add new methods like

def fit_from_sparse_tuples(self, indices_tuple):
    ...

def fit_from_pre_computed_objective_matrix(self, pre_computed_O):
    ...

And these would call fit, passing a parameter that would go through to set_constants.
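A rough sketch of what I have in mind (the sparse_augmented and precomputed_O keywords are hypothetical and would need to be threaded from fit into set_constants; augmented_from_tuples and sparse_objective are the helpers sketched in the issue description above):

from snorkel.labeling.model import LabelModel

class SparseLabelModel(LabelModel):
    def fit_from_sparse_tuples(self, indices_tuple, num_docs, num_funcs, **kwargs):
        # Build the sparse augmented matrix directly from
        # (doc_id, func_id * num_labels + label_id) tuples.
        L_aug = augmented_from_tuples(indices_tuple, num_docs, num_funcs, self.cardinality)
        # Hypothetical flag telling fit / set_constants that L is already
        # in sparse augmented format.
        return self.fit(L_aug, sparse_augmented=True, **kwargs)

    def fit_from_pre_computed_objective_matrix(self, pre_computed_O, **kwargs):
        # Hypothetical keyword: fit would skip building O and hand the
        # precomputed matrix to set_constants.
        return self.fit(None, precomputed_O=pre_computed_O, **kwargs)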

Does that sound OK?

@bhancock8
Member

Yeah, that sounds good to me—separate explicit methods rather than overloading the args for the default fit method.
