
train_sequential dataset #61

Open
JJLimmm opened this issue Sep 27, 2022 · 9 comments
JJLimmm commented Sep 27, 2022

Hi @smellslikeml ,

I have read through the README.md provided, but I would like to clarify some things that are not mentioned in it.

  1. For the dataset, inside each subdirectory (which is the label of the action we want to classify), do we put in the sequence of images that constitutes the action (e.g. squatting), or only images of people in the squat position?
  2. Related to the first question: if we need to put in a sequence of images, can we include more than one sequence of squatting?
  3. Do we only have to change the conf.py file when using train_sequential? What is the list of things we need to modify?

Thank you!
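For reference, the folder-per-label layout asked about in question 1 might look like the sketch below. This is a hypothetical illustration only; the directory and file names (`data/`, `seq01_frame000.jpg`) are assumptions, not names confirmed by the repo.

```python
from pathlib import Path
import tempfile

# Hypothetical dataset layout: one subdirectory per action label,
# each holding an ordered sequence of frames (all names are assumed).
root = Path(tempfile.mkdtemp()) / "data"
for label in ["lunge", "squat"]:
    for i in range(3):
        frame = root / label / f"seq01_frame{i:03d}.jpg"
        frame.parent.mkdir(parents=True, exist_ok=True)
        frame.touch()  # placeholder for a real image file

# The class labels are simply the subdirectory names.
labels = sorted(p.name for p in root.iterdir())
print(labels)  # ['lunge', 'squat']
```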

smellslikeml (Owner) commented Sep 27, 2022 via email

JJLimmm (Author) commented Sep 28, 2022

Hi @smellslikeml ,

Thanks for sharing more details on the workflow for this repo.
I have another question, and it pertains to the classifier. Is the dataset preparation the same as for training the LSTM (i.e. a sequence of images rather than just single images of the action)? Or do I only need to include images of the action alone, with the label for the action as the folder name?

For preprocessing the dataset to output the CSV file, preprocess.py seems to prepare data only for the LogisticRegression classifier and not for the LSTM. How did you prepare the data for training the LSTM model?

Thank you!
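One common way to prepare sequence data for an LSTM is to slide a fixed-length window over the per-frame pose vectors, producing samples of shape (timesteps, features). The sketch below shows that general technique in plain Python; it is an assumption about the approach, not the repo's confirmed preprocessing code.

```python
def make_windows(frames, timesteps, stride=1):
    """Slide a fixed-length window over a list of per-frame feature
    vectors, yielding samples of shape (timesteps, features)."""
    return [frames[i:i + timesteps]
            for i in range(0, len(frames) - timesteps + 1, stride)]

# Toy example: 6 frames, each reduced to a 2-value pose vector.
frames = [[i, i * 0.5] for i in range(6)]
windows = make_windows(frames, timesteps=3)
print(len(windows), len(windows[0]), len(windows[0][0]))  # 4 3 2
```

Each window is then one LSTM training sample, with the label taken from the folder the sequence came from.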

JJLimmm (Author) commented Sep 28, 2022

@smellslikeml Oh, and also: for the classifier.sav model, what type of classifier are you using?

And if I want to classify more than 2 classes (e.g. 5 classes: squats, lunge, walking, standing, sitting), what do I need to change to train a new classifier?

Thanks!

cclauss (Contributor) commented Sep 28, 2022

I am not a maintainer of this repo, so please remove the @mention of my name.

smellslikeml (Owner) commented

The .sav format was for saving models from the scikit-learn framework.
These kinds of activities (squat, lunge, etc) are good for ActionAI since they are well-characterized by body pose and relatively slowly varying.

You only need to add samples to the training workflow, or add buttons to the PS3 controller configuration in the mapping defined by activity_dict in `experimental/config.py`.
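Assuming activity_dict maps controller buttons (or key codes) to class labels, extending it to five classes might look like the sketch below. The button codes here are placeholders; the real keys live in `experimental/config.py` and may differ.

```python
# Hypothetical mapping of controller buttons to activity labels;
# the actual key values in experimental/config.py may differ.
activity_dict = {
    0: "squats",
    1: "lunge",
    2: "walking",
    3: "standing",
    4: "sitting",
}

# The classifier's number of outputs follows the number of labels.
num_classes = len(set(activity_dict.values()))
print(num_classes)  # 5
```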

JJLimmm (Author) commented Sep 29, 2022

> The .sav format was for saving models from the scikit-learn framework. These kinds of activities (squat, lunge, etc.) are good for ActionAI since they are well-characterized by body pose and relatively slowly varying.
>
> You only need to add samples to the training workflow or add buttons to the PS3 controller configuration in the mapping defined by activity_dict in `experimental/config.py`.

@smellslikeml
So the .sav and .h5 formats are actually just from different frameworks (scikit-learn and tf.keras respectively)?
If training the classifier with scikit-learn, do we then have to put in a sequence of images, or just single images capturing the action?

mayorquinmachines (Collaborator) commented Sep 29, 2022 via email

JJLimmm (Author) commented Sep 29, 2022

> Yes, that's right - .sav is from scikit-learn, .h5 from tf.keras. If training a classifier from scikit-learn, you could use a sequence of pose estimations.


Hi @mayorquinmachines,

Thanks for clarifying! But if I were to classify 5 classes (squats, lunges, walking, sitting, standing), wouldn't a sequence of images confuse the classifier, say if I were to use the KNN classifier from scikit-learn?
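One way a fixed-input classifier such as KNN can still consume a sequence (a sketch of the general idea, not the repo's confirmed approach) is to flatten each window of pose vectors into a single fixed-length feature row, so that one sample encodes motion across frames rather than a single pose:

```python
def flatten_sequence(frames):
    """Concatenate a window of per-frame pose vectors into one
    fixed-length feature row, usable by e.g. KNN or LogisticRegression."""
    return [value for frame in frames for value in frame]

# Toy window: 3 consecutive frames, each with 4 pose values.
window = [[0.1, 0.2, 0.3, 0.4],
          [0.2, 0.3, 0.4, 0.5],
          [0.3, 0.4, 0.5, 0.6]]
row = flatten_sequence(window)
print(len(row))  # 12
```

Because every class's samples use the same window length, the rows stay comparable and a distance-based classifier like KNN is not confused by the sequential input.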

smellslikeml (Owner) commented Oct 11, 2022 via email
