microcontroller port feasibility #331

Open
street-grease-coder opened this issue Oct 24, 2020 · 3 comments

Comments

@street-grease-coder

Is it feasible to port this to an ESP32 or a similar microcontroller environment (e.g. MicroPython)?

@stubbb

stubbb commented Oct 26, 2020

I would never attempt to use this on anything with less than 2 GB of RAM.

@street-grease-coder changed the title from "microcontroller port fessability" to "microcontroller port feasibility" on Oct 26, 2020
@street-grease-coder
Author

Would it be feasible, in your opinion, to port it to e.g. an int8 format (if I read correctly, TensorFlow Lite has built-in functionality to easily convert weight matrices), so that it requires only a fraction of the RAM, with minimal performance loss?
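For context, this is a minimal sketch of TensorFlow Lite's post-training full-integer (int8) quantization path; the saved-model path, input shape, and representative dataset below are placeholders, and whether this project's model would convert cleanly is a separate question:

```python
import tensorflow as tf

# Load the trained model; the path here is a placeholder.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Full integer quantization needs a representative dataset to calibrate
# activation ranges; replace the random tensors with real preprocessed
# inputs (the 1x224x224x3 shape is an assumption).
def representative_dataset():
    for _ in range(100):
        yield [tf.random.uniform([1, 224, 224, 3], dtype=tf.float32)]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

Note that int8 quantization gives roughly a 4× size reduction relative to float32 weights, which on its own does not close the gap between an 8 GB workload and the few hundred KB to a few MB of RAM available on a typical microcontroller.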

@stubbb

stubbb commented Oct 26, 2020

Well, I need 8 GB of RAM to get 10 fps at the accuracy I need, while drawing 30 watts on the most power-efficient AI board on the market. Even if it were technologically possible, you would end up with a very long processing time and very low accuracy.

The processing would likely be 100× longer at 2 watts, and if you drop from 8 GB of RAM to, say, 256 MB accessed sequentially, you are looking at another 100× increase in processing time. That's roughly one frame every 1,000 seconds, if it is even technologically possible.
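As a rough sanity check on those numbers (the two 100× factors are the assumptions stated above, not measurements):

```python
# Back-of-the-envelope compounding of the slowdowns described above.
baseline_fps = 10        # ~10 fps on the 8 GB, ~30 W board
power_penalty = 100      # assumed ~100x slowdown at ~2 W
memory_penalty = 100     # assumed ~100x slowdown at ~256 MB

seconds_per_frame = (1 / baseline_fps) * power_penalty * memory_penalty
print(seconds_per_frame)  # 1000.0, i.e. roughly one frame every ~17 minutes
```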
