
SpatialPooler and TemporalMemory API change. #59

Open
marty1885 opened this issue Aug 5, 2019 · 8 comments
Labels
enhancement New feature or request need discussion need some discussion Spatial Pooling Regarding Spatial Pooling Temporal Memory Regarding Temporal Memory

Comments

@marty1885
Member

(This issue is also a note for myself.) As I mentioned in #45, I'm preparing an API update.

What's the problem

The current API loosely follows the one in NuPIC, where, if you want to switch between global and local inhibition, you call sp.setGlobalInhibition(bool) and set the parameters via setGlobalDensity/setLocalXXX. This is not elegant and stores a lot of state variables inside the layer (you need to store variables even when you don't use them).

This sort of API is also very inflexible. For example, there's no global inhibition on a TM, no way to extract the confidence of TM predictions, etc. There is also the problem that layers are objects, so there is no room for a functional style.

Proposed solution

My proposed solution is to have a PyTorch-like API, where the actual operations are implemented in the et::F namespace and the layer objects live in the et::htm namespace.

For example

SpatialPooler sp(in_shape, out_shape);
sp.setGlobalDensity(0.1); //Enable global inhibition and set density to 0.1
Tensor y = sp.compute(x);

becomes

htm::SpatialPooler sp(in_shape, out_shape);
Tensor y = F::globalInhibition(sp.compute(x), 0);

This has the advantage that we can switch between local and global inhibition easily.

In a NuPIC-like API,

bool use_local = true;

if(use_local) {
    sp.setGlobalInhibition(false);
    sp.setLocalAreaDensity(0.10);
}
else {
    sp.setGlobalInhibition(true);
    sp.setGlobalDensity(0.1);
}

auto out = sp.compute(x);

becomes

bool use_local = true;

Tensor y = [&](){
    if(use_local) return F::localInhibition(sp.compute(x), 0.1);
    else return F::globalInhibition(sp.compute(x), 0.1);
}();
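
This also addresses the TM limitations mentioned above: every intermediate result is just a tensor, so ops like inhibition can be applied to a TM's predictions too. A quick sketch (the TM interface shown here is illustrative, not final):

htm::TemporalMemory tm(out_shape, cells_per_column);
auto [pred, active] = tm.compute(y, last_active);

// `pred` is an ordinary tensor, so nothing stops us from inhibiting it
// or reading per-cell activity (the "confidence") out of it directly.
Tensor top_pred = F::globalInhibition(pred, 0.1);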
@marty1885 marty1885 added enhancement New feature or request need discussion need some discussion labels Aug 5, 2019
@marty1885 marty1885 added Spatial Pooling Regarding Spatial Pooling Temporal Memory Regarding Temporal Memory labels Aug 13, 2019
@marty1885 marty1885 pinned this issue Dec 14, 2019
@marty1885
Member Author

The next release will not be compatible with v0.1.4. I'm starting to work on this.

@marty1885
Member Author

Design doc

// The main namespace that stores everything
namespace et {
namespace F {}       // functional APIs, i.e. they are stateless
namespace htm {}     // HTM layers; they may be stateful
namespace encoder {} // encoders
namespace decoder {} // decoders, if possible
}
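
To make the layout concrete, user code would roughly look like this (the encoder name and exact signatures are illustrative, not final):

using namespace et;

Tensor x = encoder::gridCell1d(3.14f);              // stateless encoder
htm::SpatialPooler sp(x.shape(), {1024});           // stateful layer
Tensor y = F::globalInhibition(sp.compute(x), 0.1); // stateless op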

This also begs the question: should we keep the encoders stateless like they are now, or is NuPIC's stateful encoder design better?

@gaolaowai

Random 2 cents: keep encoders stateless; I like how they're more "functional" and predictable in their behavior.

What would be any benefit in adding state? I'm not sure I can think of any.

@marty1885
Member Author

@gaolaowai Thanks for taking an interest.

My main motivation for stateful encoders is that they enable RDSE and SimHashEncoder. It's nice to have what HTM.core is offering. However, I don't see myself using them in any way, shape or form; Grid Cells are a better solution than RDSE.
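
For context, the reason an RDSE can't be a pure function: it lazily assigns a random set of active bits to each bucket and has to remember that assignment, so the same value always encodes to the same SDR. A minimal sketch of the idea (not htm.core's actual implementation):

#include <cstddef>
#include <random>
#include <unordered_map>
#include <vector>

// Minimal sketch: the bucket -> bits mapping is generated lazily and
// must persist across calls. That mapping is the encoder's state.
struct MiniRDSE {
    size_t sdr_size = 1024, active_bits = 16;
    double resolution = 0.5;
    std::mt19937 rng{42};
    std::unordered_map<long, std::vector<size_t>> buckets; // the state

    std::vector<size_t> encode(double value) {
        long bucket = static_cast<long>(value / resolution);
        auto it = buckets.find(bucket);
        if (it != buckets.end())
            return it->second; // must reuse the same bits for this bucket
        std::vector<size_t> bits(active_bits);
        std::uniform_int_distribution<size_t> dist(0, sdr_size - 1);
        for (auto& b : bits)
            b = dist(rng); // (bit collisions ignored for brevity)
        return buckets[bucket] = bits;
    }
};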

I prefer stateless encoders too. Let's keep encoders stateless and add stateful ones when we really need them.

@marty1885
Member Author

An update on the subject: I'm making progress, but I'm more focused on delivering my project report so I can graduate. (The report is based on the current release; I don't have much time on hand to think about the API.)

@mewmew
Contributor

mewmew commented Mar 20, 2020

An update on the subject: I'm making progress, but I'm more focused on delivering my project report so I can graduate. (The report is based on the current release; I don't have much time on hand to think about the API.)

Hi @marty1885!

Thanks for sharing Etaler with the HTM community! I think I'll learn a lot just by diving into your code base, trying to understand how it works, and gaining further insight into the rationale behind the design decisions. I especially appreciate that you've written test cases which you've verified by hand calculations.

Just to let you know, once you've handed in your report I'd be most curious to read it!

Wish you all the best and happy coding!

Cheers,
Robin

Edit: P.S. there's a minor typo in #59 (comment) (I think):

-Tensor y = F::globalInhibition(sp.compute(x), 0);
+Tensor y = F::globalInhibition(sp.compute(x), 0.1);

@marty1885
Member Author

marty1885 commented Mar 21, 2020

I probably should look for typos in variable names and values... I kind of expected them to exist, but I should have done something about it.

Just to let you know, once you've handed in your report I'd be most curious to read it!

Thanks for your appreciation! 😸 My report is available on the forum here. There might be more typos in the report, though. English isn't my first language and (since it's just a BS graduation project) I've written it in a half-joking tone :(

I've also uploaded it to GitHub in case Google Drive fails in the future.
Hierarchical Temporal Memory Agent in standard Reinforcement Learning Environment.pdf


I have ongoing, more formal research regarding Etaler and HTM that I hope I can share in the near future.

@marty1885
Member Author

Design doc for myself:

Generalized autograd

I have been thinking about how exactly I'm going to build a sane functional API for Etaler. A big problem in Etaler is that the "layers" may use a sequence of tensor ops (ex: overlap, then global inhibition) but learn using result tensors from later steps (the overlap step needs the result of global inhibition in order to learn). However, I don't think NuPIC's Network API is a good approach either; the Network API is very verbose and requires a lot of know-how to use. I've got to make a new system.
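
To make the problem concrete, a decomposed SP step might look like this (the op names are placeholders for whatever the functional API ends up exposing):

Tensor overlap = F::cellActivity(x, sp.connections(), sp.permanences());
Tensor active  = F::globalInhibition(overlap, 0.1);
// The learning rule for the overlap step needs `active`, a tensor
// produced one op later. Information has to flow backwards.
F::learnCorrelation(x, active, sp.connections(), sp.permanences());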

This is mostly inspired by the autograd systems in various DL libraries. Throwing the calculus portion out of the window, autograd is a system for sending information from later nodes in a DAG back to earlier nodes, which is almost what I need. But it also presents a few problems:

  • autograd produces a DAG.
    • Not impossible to parallelize, but it needs special care.
  • I don't need the entire chain as a single object, but all of its parts broken into sub-chains.
  • Handling data across backends.

It would be easier to just build an HTM-native, autograd-like system to handle this. But I don't want Etaler to be too specialized, and the new system should still be able to function as autograd if it wants to. The new generalized autograd should (a rough sketch follows the list):

  • Work like autograd.
  • Scan the DAG before the backward pass to segment it.
  • Provide a way for a node to send output to later nodes while specifying that those later nodes have nothing to do with it.
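
Something along these lines (a design sketch only; names and structure are provisional):

#include <functional>
#include <memory>
#include <vector>

// Feedback is whatever a later op sends back to an earlier one, e.g. the
// inhibited activations that the overlap step needs in order to learn.
struct Feedback { /* active cells, learning signals, ... */ };

// Generalized autograd node: backward() pushes arbitrary feedback
// (not gradients) from later nodes in the DAG to earlier ones.
struct Node {
    std::vector<std::shared_ptr<Node>> parents;
    std::function<void(const Feedback&)> on_feedback; // learning hook
    bool detached = false; // "later nodes have nothing to do with me"
};

void backward(const std::shared_ptr<Node>& node, const Feedback& fb) {
    if (node->on_feedback)
        node->on_feedback(fb);
    for (auto& p : node->parents)
        if (!p->detached)
            backward(p, fb); // the chain is segmented at detached nodes
}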

I'm going to keep working on this. Then the code (from the app dev's view) should be a lot cleaner.
