Deep Learning support in Etaler #91

Open
marty1885 opened this issue Nov 6, 2019 · 3 comments
Labels
enhancement (New feature or request) · feature request (New features) · help wanted (Extra attention is needed) · need discussion (need some discussion)

Comments

marty1885 commented Nov 6, 2019

Related to htm-community/htm.core#680

Well, here we go

Why

The SDRClassifier in NuPIC/HTM.core is in fact a simple 2-layer MLP, but Etaler implements it as a CLAClassifier/KNN. The CLAClassifier is deprecated in HTM.core and is inferior to the MLP, both performance- and accuracy-wise. It would be beneficial to have an actual SDRClassifier implementation. Also, as community member @Thanh-Binh mentioned, a better neural network architecture can help HTM generate better predictions.
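For reference, a minimal sketch of the kind of "simple 2-layer MLP over an SDR" described above. All names here (`MlpSdrClassifier`, `infer`) are illustrative only, not the NuPIC or Etaler API; it just shows that a binary, sparse input lets the first layer reduce to summing selected weight rows.

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Hypothetical sketch: a 2-layer MLP classifier over a binary SDR input.
struct MlpSdrClassifier {
    size_t in, hidden, out;
    std::vector<float> w1, b1, w2, b2; // row-major weight matrices

    MlpSdrClassifier(size_t in, size_t hidden, size_t out)
        : in(in), hidden(hidden), out(out),
          w1(in * hidden), b1(hidden), w2(hidden * out), b2(out) {
        std::mt19937 rng(42);
        std::normal_distribution<float> dist(0.f, 0.1f);
        for (auto& w : w1) w = dist(rng);
        for (auto& w : w2) w = dist(rng);
    }

    // Forward pass: because the SDR is binary and sparse, the first layer is
    // just a sum over the rows of w1 selected by the active bits.
    std::vector<float> infer(const std::vector<size_t>& active_bits) const {
        std::vector<float> h(hidden, 0.f);
        for (size_t bit : active_bits)
            for (size_t j = 0; j < hidden; j++)
                h[j] += w1[bit * hidden + j];
        for (size_t j = 0; j < hidden; j++)
            h[j] = std::max(0.f, h[j] + b1[j]); // ReLU

        std::vector<float> y = b2; // start from the output bias
        for (size_t j = 0; j < hidden; j++)
            for (size_t k = 0; k < out; k++)
                y[k] += h[j] * w2[j * out + k];

        // softmax over class scores
        float mx = *std::max_element(y.begin(), y.end()), sum = 0.f;
        for (auto& v : y) { v = std::exp(v - mx); sum += v; }
        for (auto& v : y) v /= sum;
        return y;
    }
};
```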

Why not

This is a very slippery slope. Etaler's core design is already very similar to a DL framework: we have tensors, operators, etc., and autograd and lazy evaluation could be implemented fairly easily by extending the current system. As far as I can tell, if this were implemented, Etaler would be the only framework supporting DL via OpenCL with a proper tensor system. We might gain traction, but from the DL community, and thus most development would end up focused on DL instead of HTM.

How

Writing matrix operations from scratch is doable, but we'd never even beat NumPy on performance, and it would take forever. We might want to use libraries like NNPACK and clDNN to perform the calculations. Besides that, we need to modify the current Tensor system to support autograd and (hopefully) operator fusion. We also need to rethink how the et namespace is used: how should we separate HTM and NN algorithms, and how can we use them together?
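To make the autograd point concrete, here is a tiny reverse-mode autograd sketch on scalars. None of these names are Etaler API; it only shows the mechanism a Tensor type could adopt (each op records a backward closure, and backward() replays them in reverse topological order).

```cpp
#include <functional>
#include <memory>
#include <unordered_set>
#include <vector>

// Hypothetical scalar autograd node; a real Tensor version would carry
// tensor-valued data/grad instead of floats.
struct Value {
    float data = 0.f;
    float grad = 0.f;
    std::function<void()> backward_fn;           // pushes grad to parents
    std::vector<std::shared_ptr<Value>> parents; // keeps the graph alive
};
using Val = std::shared_ptr<Value>;

Val make(float x) { auto v = std::make_shared<Value>(); v->data = x; return v; }

Val add(Val a, Val b) {
    auto out = make(a->data + b->data);
    out->parents = {a, b};
    out->backward_fn = [a, b, o = out.get()] {
        a->grad += o->grad;
        b->grad += o->grad;
    };
    return out;
}

Val mul(Val a, Val b) {
    auto out = make(a->data * b->data);
    out->parents = {a, b};
    out->backward_fn = [a, b, o = out.get()] {
        a->grad += b->data * o->grad;
        b->grad += a->data * o->grad;
    };
    return out;
}

// Topologically order the graph so each node's grad is complete before it
// pushes gradient to its parents.
void backward(Val root) {
    std::vector<Value*> order;
    std::unordered_set<Value*> seen;
    std::function<void(Value*)> topo = [&](Value* v) {
        if (!seen.insert(v).second) return;
        for (auto& p : v->parents) topo(p.get());
        order.push_back(v);
    };
    topo(root.get());
    root->grad = 1.f;
    for (auto it = order.rbegin(); it != order.rend(); ++it)
        if ((*it)->backward_fn) (*it)->backward_fn();
}

// Usage: z = x*y + x, so dz/dx = y + 1 and dz/dy = x.
//   auto x = make(2.f), y = make(3.f);
//   auto z = add(mul(x, y), x);
//   backward(z); // x->grad == 4, y->grad == 2
```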

marty1885 added the enhancement, help wanted, need discussion, and feature request labels on Nov 6, 2019
marty1885 (Member Author) commented:

This should be possible if we could get #59 working, which would give us the ability to do backprop. But I'm still working on a general method (to do backprop, feedback alignment, and HTM together).
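For readers unfamiliar with feedback alignment: the idea is to propagate the output error to earlier layers through a fixed random matrix B instead of the transpose of the forward weights W. A hypothetical sketch (not Etaler API, illustrative names only):

```cpp
#include <random>
#include <vector>

// Hypothetical layer showing the feedback-alignment error path.
struct FeedbackAlignmentLayer {
    size_t in, out;
    std::vector<float> W; // forward weights (out x in), trained as usual
    std::vector<float> B; // fixed random feedback weights (in x out), never trained

    FeedbackAlignmentLayer(size_t in, size_t out)
        : in(in), out(out), W(out * in, 0.f), B(in * out) {
        std::mt19937 rng(0);
        std::normal_distribution<float> d(0.f, 0.1f);
        for (auto& b : B) b = d(rng); // B stays fixed for the layer's lifetime
    }

    // Push the error at this layer's output back to its input using B,
    // where plain backprop would use W^T instead.
    std::vector<float> feedback(const std::vector<float>& out_err) const {
        std::vector<float> in_err(in, 0.f);
        for (size_t i = 0; i < in; i++)
            for (size_t j = 0; j < out; j++)
                in_err[i] += B[i * out + j] * out_err[j];
        return in_err;
    }
};
```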

HMaker commented Jun 11, 2021

@marty1885 Hey, great work! Will it be possible to do supervised learning?

marty1885 (Member Author) commented:

I don't know about supervised learning, but there's a proof-of-concept RL setup that sort of works. It needs more time for improvements, though.
https://discourse.numenta.org/t/working-reinforcement-learning-in-htm-through-unsupervised-behavioral-learning/8486
