
An operation has None for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval. #363

Open
pucha48 opened this issue Oct 30, 2020 · 2 comments
pucha48 commented Oct 30, 2020

If you open a GitHub issue, here is the policy:

Your issue must be about one of the following:

  1. a bug,
  2. a feature request,
  3. a documentation issue, or
  4. a question that is specific to this SSD implementation.

You will only get help if you adhere to the following guidelines:

  • Before you open an issue, search the open and closed issues first. Your problem/question might already have been solved/answered before.
  • If you're getting unexpected behavior from code I wrote, open an issue and I'll try to help. If you're getting unexpected behavior from code you wrote, you'll have to fix it yourself. E.g. if you made a ton of changes to the code or the tutorials and now it doesn't work anymore, that's your own problem. I don't want to spend my time debugging your code.
  • Make sure you're using the latest master. If you're 30 commits behind and have a problem, the only answer you'll likely get is to pull the latest master and try again.
  • Read the documentation. All of it. If the answer to your problem/question can be found in the documentation, you might not get an answer, because, seriously, you could really have figured this out yourself.
  • If you're asking a question, it must be specific to this SSD implementation. General deep learning or object detection questions will likely get closed without an answer. E.g. a question like "How do I get the mAP of an SSD for my own dataset?" has nothing to do with this particular SSD implementation, because computing the mAP works the same way for any object detection model. You should ask such a question in an appropriate forum or on the Data Science section of StackOverflow instead.
  • If you get an error:
    • Provide the full stack trace of the error you're getting, not just the error message itself.
    • Make sure any code you post is properly formatted as such.
    • Provide any useful information about your environment, e.g.:
      • Operating System
      • Which commit of this repository you're on
      • Keras version
      • TensorFlow version
    • Provide a minimal reproducible example, i.e. post code and explain clearly how you ended up with this error.
    • Provide any useful information about your specific use case and parameters:
      • What model are you trying to use/train?
      • Describe the dataset you're using.
      • List the values of any parameters you changed that might be relevant.
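To illustrate the environment details requested above, here is a small hypothetical snippet (not part of this repository) that collects the Python, Keras, and TensorFlow versions in one go so they can be pasted into a bug report; it degrades gracefully if a package is not installed:

```python
import sys
from importlib import import_module

# Print environment details to include in a bug report.
print("Python:", sys.version.split()[0])
for pkg in ("tensorflow", "keras"):
    try:
        print(pkg + ":", import_module(pkg).__version__)
    except ImportError:
        print(pkg + ": not installed")
```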
pucha48 commented Oct 30, 2020

While training on a single class, I am getting this error.
I used the following:
n_classes = 2 (1 background + 1 object class, sheep; COCO class ID 19)

Shape of the 'conv4_3_norm_mbox_conf' weights:

kernel: (3, 3, 512, 8)
bias: (8,)
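The reported kernel shape is consistent with the class count: the confidence layer predicts (n_boxes × n_classes) values per cell. A minimal sketch, assuming the standard SSD300 convention of 4 prior boxes at conv4_3:

```python
# conv4_3_norm_mbox_conf predicts (n_boxes * n_classes) values per cell.
# Assuming 4 prior boxes at conv4_3 (the standard SSD300 configuration)
# and n_classes = 2 (1 background + 1 object class):
n_boxes = 4
n_classes = 2
filters = n_boxes * n_classes
print(filters)  # 8, matching the last dimension of the (3, 3, 512, 8) kernel
```

So the weight shapes themselves look correct for n_classes = 2; the gradient error is unrelated to the class count.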

Error log:

ValueError                                Traceback (most recent call last)
&lt;ipython-input&gt; in &lt;module&gt;()
     10     validation_data=val_generator,
     11     validation_steps=ceil(val_dataset_size/batch_size),
---> 12     initial_epoch=initial_epoch)

/media/antpc/main_drive/anaconda3/envs/mafat/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
89 warnings.warn('Update your ' + object_name + ' call to the ' +
90 'Keras 2 API: ' + signature, stacklevel=2)
---> 91 return func(*args, **kwargs)
92 wrapper._original_function = func
93 return wrapper

/media/antpc/main_drive/anaconda3/envs/mafat/lib/python3.6/site-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
1416 use_multiprocessing=use_multiprocessing,
1417 shuffle=shuffle,
-> 1418 initial_epoch=initial_epoch)
1419
1420 @interfaces.legacy_generator_methods_support

/media/antpc/main_drive/anaconda3/envs/mafat/lib/python3.6/site-packages/keras/engine/training_generator.py in fit_generator(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
38
39 do_validation = bool(validation_data)
---> 40 model._make_train_function()
41 if do_validation:
42 model._make_test_function()

/media/antpc/main_drive/anaconda3/envs/mafat/lib/python3.6/site-packages/keras/engine/training.py in _make_train_function(self)
507 training_updates = self.optimizer.get_updates(
508 params=self._collected_trainable_weights,
--> 509 loss=self.total_loss)
510 updates = (self.updates +
511 training_updates +

/media/antpc/main_drive/anaconda3/envs/mafat/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
89 warnings.warn('Update your ' + object_name + ' call to the ' +
90 'Keras 2 API: ' + signature, stacklevel=2)
---> 91 return func(*args, **kwargs)
92 wrapper._original_function = func
93 return wrapper

/media/antpc/main_drive/anaconda3/envs/mafat/lib/python3.6/site-packages/keras/optimizers.py in get_updates(self, loss, params)
473 @interfaces.legacy_get_updates_support
474 def get_updates(self, loss, params):
--> 475 grads = self.get_gradients(loss, params)
476 self.updates = [K.update_add(self.iterations, 1)]
477

/media/antpc/main_drive/anaconda3/envs/mafat/lib/python3.6/site-packages/keras/optimizers.py in get_gradients(self, loss, params)
89 grads = K.gradients(loss, params)
90 if None in grads:
---> 91 raise ValueError('An operation has None for gradient. '
92 'Please make sure that all of your ops have a '
93 'gradient defined (i.e. are differentiable). '

ValueError: An operation has None for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
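This error means some op between the inputs and the loss has no defined gradient. As the message notes, K.argmax is a typical culprit: it is piecewise constant, so perturbing an input almost never changes the output, and autodiff frameworks report its gradient as None. A small NumPy sketch (an illustration of the mathematical reason, not this repository's code) shows this via finite differences:

```python
import numpy as np

# argmax is piecewise constant: nudging any input almost never changes
# which index is largest, so its derivative is zero where defined and
# undefined at ties -- which is why autodiff returns None for it.
def argmax_float(x):
    return float(np.argmax(x))

x = np.array([1.0, 2.0, 3.0])
eps = 1e-6
grads = [(argmax_float(x + eps * np.eye(3)[i]) - argmax_float(x)) / eps
         for i in range(3)]
print(grads)  # [0.0, 0.0, 0.0]
```

The fix is to keep such ops out of the loss function (use them only for metrics or post-processing), or to replace them with differentiable surrogates such as softmax.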

@pucha48 changed the title from "while using this repo. https://github.com/pierluigiferrari/ssd_keras/blob/master/ssd300_training.ipynb" to "An operation has None for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval." on Oct 30, 2020
stale bot commented Dec 19, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

The stale bot added the stale label on Dec 19, 2020.