
Export all iterations to be able to create animations #33

dkobak opened this issue Sep 4, 2018 · 7 comments

Comments

@dkobak
Collaborator

dkobak commented Sep 4, 2018

When preparing a public talk, I often want to include an animation of how t-SNE develops during gradient descent. I thought I could call fast_tsne in a loop with max_iter=1 (loading the similarities from a file), but this yields a worse final result than running fast_tsne in one go; I assume this is because of the adaptive gradient descent. So it would be great to be able to run fast_tsne once and export all max_iter iterations. What do you think?
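Roughly what I had in mind (a minimal sketch; the wrapper parameter names `max_iter` and `initialization` are assumptions about the Python interface and may differ):

```python
import numpy as np
from fast_tsne import fast_tsne  # FIt-SNE Python wrapper

X = np.random.randn(5000, 50)               # toy data
Y = 1e-4 * np.random.randn(X.shape[0], 2)   # small random initialization
frames = []
for i in range(750):
    # advance the embedding by one gradient-descent step, restarting from the previous one
    Y = fast_tsne(X, max_iter=1, initialization=Y)
    frames.append(Y.copy())

# `frames` holds one snapshot per iteration, but the final embedding comes out worse than a
# single 750-iteration run, presumably because the adaptive gains and momentum reset each call.
```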

@linqiaozhi
Member

I think it's a great idea... maybe we can add it as an option?

Also, in addition to the adaptive gradient descent, another reason why calling it with max_iter=1 in a loop gives a different result is the early exaggeration.

@dkobak
Collaborator Author

dkobak commented Sep 8, 2018

I was taking care to specify the exaggeration correctly, so I think the only difference must have been due to the adaptive gradient descent...

Anyway, an optional output sounds right. I noticed that you recently added another optional output, so we should make sure the two work correctly together. Or maybe combine them: if some input flag is on, all optional outputs are returned, otherwise none? Or do you prefer a separate flag for each optional output?

@linqiaozhi
Member

I think you're referring to the R wrapper, correct? If so, then yes, it now optionally returns the KL divergence computed every 50 iterations. I would have preferred that the output always be a list (i.e. so that the cost could also be outputted), but since people have already started using the old interface, I was hesitant to change the default and break people's code.

Anyways, I think it would be most intuitive to have a separate flag. That is, a 'get_costs' flag as we currently have, and (for example) an 'intermediate_iterations' flag. If both are false, then the embedding matrix Y is returned (the default); if either is true, then a list is returned.

Do you think that is a good solution?

@dkobak
Collaborator Author

dkobak commented Sep 9, 2018

OK.

Currently the C++ code always saves the costs, but I'm not sure that's a good idea for the gradient descent history: e.g. for 1 mln points, the output would be 1 mln × 2 × 1000 values, which is pretty large...

I was originally thinking of animating smaller datasets :)

@pavlin-policar
Contributor

@dkobak if this is still relevant, I have released a Python-only version of FIt-SNE, which was built with interactivity in mind (since we are integrating it into Orange). I've included a callback system that lets you look at the embedding at each step of the optimization and animate it. I played around with this in the Orange widgets and the animations look really neat - but that hasn't been merged into master yet. If you need a quick fix, you can use it here.
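Something along these lines (a rough sketch; the parameter names `callbacks` and `callbacks_every_iters` and the callback signature are assumptions and may not match the released fastTSNE API exactly):

```python
import numpy as np
from fastTSNE import TSNE

snapshots = []

def record(iteration, error, embedding):
    # called by the optimizer during gradient descent; keep a copy of the current embedding
    snapshots.append(np.array(embedding))

X = np.random.randn(5000, 50)
embedding = TSNE(callbacks=record, callbacks_every_iters=1).fit(X)
# `snapshots` now holds one embedding per callback, ready to be animated
```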

I definitely think this would be a great addition to have here, but I am not that familiar with C++ and I don't know how a callback system would work.

@dkobak
Collaborator Author

dkobak commented Sep 15, 2018

@pavlin-policar I don't think there is a way to set up a callback system. My current thinking is that we should pass a boolean flag save_intermediate_iterations into the C++ code. If it's set to True, then the code should write each iteration into a special file, e.g. intermediate_iterations.txt (without storing all of them in RAM). Then one can read this file from Python/R/Matlab and make an animation.

For 25k points, the final output is 25k × 2 × 8 / 1e+6 = 0.4 MB, so 1000 iterations would be 400 MB, which is very manageable. For 1 mln points, the final output is 16 MB, so 1000 iterations would be 16 GB. That's a large file, but it can still be processed if needed.
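Reading such a dump back and animating it could then look roughly like this (a sketch only; the file name and its layout, N × 2 coordinates per iteration written consecutively, are assumptions about a format that doesn't exist yet):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

n = 25000  # number of points in the embedding
# assumed layout: all iterations concatenated, n rows of (x, y) per iteration
coords = np.loadtxt('intermediate_iterations.txt').reshape(-1, n, 2)

fig, ax = plt.subplots()
scatter = ax.scatter(coords[0, :, 0], coords[0, :, 1], s=1)
ax.set_xlim(coords[..., 0].min(), coords[..., 0].max())
ax.set_ylim(coords[..., 1].min(), coords[..., 1].max())

def update(frame):
    scatter.set_offsets(coords[frame])  # move points to their positions at this iteration
    ax.set_title(f'Iteration {frame}')
    return scatter,

anim = FuncAnimation(fig, update, frames=coords.shape[0], interval=30)
anim.save('tsne_animation.mp4')  # requires ffmpeg; alternatively use plt.show()
```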

@linqiaozhi Thoughts?

I could try to implement it some time in the next few weeks.

@pavlin-policar Wow, thanks a lot for the link to your fastTSNE package. Great work! I might leave some comments over there.

@linqiaozhi
Member

@pavlin-policar A callback system for visualizing FIt-SNE in real time is super cool... I wish that were possible with our C++ code, but I can't see it working since the wrappers just call a binary right now.

@dkobak I think your approach is very reasonable. I would only suggest that we output floats instead of doubles, so each element would typically be 4 bytes instead of 8, which would halve the file size (we don't need that much precision for visualization anyway). There are other, more sophisticated things that could be done (e.g. only output a random subset of the points, or only specific iterations), but I don't think that's necessary at this point... this is a function that people will use only in very specific situations (e.g. diagnostics), so using some disk space and spending some time on I/O is probably okay, at least for a first implementation. Thanks for being willing to implement it!
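For a rough sense of the difference (illustrative arithmetic only; the binary file name below is hypothetical):

```python
import numpy as np

n_points, n_iter = 1_000_000, 1000
print(n_points * 2 * 8 * n_iter / 1e9)  # doubles: 16.0 GB for the full history
print(n_points * 2 * 4 * n_iter / 1e9)  # floats:   8.0 GB, i.e. half the size

# a float32 binary dump could later be read back with, e.g.
# coords = np.fromfile('intermediate_iterations.dat', dtype=np.float32).reshape(-1, n_points, 2)
```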
