I trained a sign-language classification model and then rewrote the whole model in Python. Now I want to know how image preprocessing is applied to the input images, and what arguments were passed to the classification model's compile and training functions. But Keras reports that the compilation info isn't available. Is there any way to recover all of this?
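For context, here is a minimal sketch (not Teachable Machine's actual export code, and the model below is a made-up toy) of why Keras can report that compilation info is unavailable: if a model is saved without `compile()` having been called, the optimizer, loss, and metrics are simply never written to the file, so no loading option can recover them.

```python
import os
import tempfile

import tensorflow as tf

# Build and save a tiny toy model WITHOUT calling compile().
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.build((None, 4))
path = os.path.join(tempfile.mkdtemp(), "model.h5")
model.save(path)  # Keras warns that no training configuration was found

# Reload and inspect what survives: the architecture does, compile args don't.
reloaded = tf.keras.models.load_model(path, compile=False)
config = reloaded.get_config()   # layer architecture is recoverable
optimizer = reloaded.optimizer   # None: compile info was never saved
```

So if the exported model was saved this way, `model.get_config()` can still tell you the architecture, but the original `compile()`/`fit()` arguments are not in the file at all.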
When you say "image preprocessing", are you asking about data augmentation (like random rotations or flips)? I came here to see if anyone else had a similar question, because I'm also curious given some things I'm trying with Teachable Machine. The underlying MobileNet doesn't appear to preprocess data (e.g. in a Sequential preprocessing layer), and the source code here always passes flipped=False in some of the functions. As far as I can tell, Teachable Machine is NOT augmenting the dataset, but I'm really not sure.
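One thing worth separating from augmentation is input normalization. MobileNet-family models conventionally expect pixels scaled to [-1, 1] (this is what `tf.keras.applications.mobilenet.preprocess_input` does); whether Teachable Machine applies exactly this transform client-side is an assumption on my part, not something confirmed in this thread. A sketch of that convention:

```python
import numpy as np

def mobilenet_style_preprocess(image: np.ndarray) -> np.ndarray:
    """Map uint8 pixel values in [0, 255] to float32 in [-1, 1].

    This mirrors the MobileNet input convention; treating it as what
    Teachable Machine does is an assumption, not a confirmed fact.
    """
    return image.astype(np.float32) / 127.5 - 1.0

img = np.array([[0, 127, 255]], dtype=np.uint8)
out = mobilenet_style_preprocess(img)  # values land in [-1, 1]
```

If the exported model does no preprocessing in-graph, a transform like this would have to happen in the calling code before inference.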