
Way to evaluate a "score" for models #1

Open
Sentdex opened this issue Nov 11, 2019 · 1 comment


Sentdex commented Nov 11, 2019

A model's overall success can't be judged by average accuracy alone, because some wrong classifications are worse than others.

Example:
Target: Right
Predicted: Left

is worse than:
Target: Right
Predicted: None
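
One way to express that (just a sketch; the class names and penalty values below are made up, not part of this repo) would be a cost matrix that penalizes opposite-direction misses more than misses to "None":

```python
import numpy as np

CLASSES = ["left", "none", "right"]

# Hypothetical penalty matrix: rows = target, columns = prediction.
# Predicting the opposite direction costs more than predicting "none".
COST = np.array([
    [0.0, 0.5, 1.0],   # target "left":  "none" is a mild miss, "right" is a bad miss
    [0.5, 0.0, 0.5],   # target "none"
    [1.0, 0.5, 0.0],   # target "right": "left" is the worst miss
])

def weighted_score(targets, predictions):
    """Return 1 - mean penalty, so a perfect model scores 1.0."""
    idx = {c: i for i, c in enumerate(CLASSES)}
    penalties = [COST[idx[t], idx[p]] for t, p in zip(targets, predictions)]
    return 1.0 - float(np.mean(penalties))

# One opposite-direction miss hurts more than one "none" miss:
print(weighted_score(["right", "right"], ["left", "right"]))  # 0.5
print(weighted_score(["right", "right"], ["none", "right"]))  # 0.75
```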

rcox771 commented Nov 17, 2019

@Sentdex we might be able to get away with a Dense(1, activation='tanh') output and apply it, scaled by some speed/acceleration factors, directly to the x-position of your object on screen.

-1 would be "go left more", +1 would be "go right more", and outputs closer to 0 would do nothing or return to center. This way, Left <-> Right misses would be more distant than Right <-> None or Left <-> None misses.

You could expand on this to do xy translations with a Dense(2, activation='tanh'), xyz translations with a Dense(3, activation='tanh'), and so on.
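
A rough sketch of the idea (assuming TF/Keras; the feature size, speed scalar, and update rule are made up for illustration):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),                  # placeholder feature vector size
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="tanh"),  # -1 = left, +1 = right, ~0 = stay put
])

SPEED = 5.0  # hypothetical pixels-per-step scalar

def update_x(x_pos, features):
    """Shift the object's x-position by the tanh output scaled by SPEED."""
    direction = float(model(features[None, :])[0, 0])  # in [-1, 1]
    return x_pos + SPEED * direction

x = update_x(400.0, np.random.rand(64).astype("float32"))
```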

What do you think?
