[Enhancement] Callback for PegasosQSVC #599

Open
tjdurant opened this issue Apr 7, 2023 · 4 comments
@tjdurant

tjdurant commented Apr 7, 2023

What should we add?

Hello, I'm new to the Qiskit community. I was wondering if it would be possible to add a callback function that allows users to monitor the objective function during training of PegasosQSVC - similar to what is available with VQC.

Happy to try and work on that if given some direction.

Thanks,
T

@tjdurant tjdurant added the type: feature request 💡 New feature or request label Apr 7, 2023
@adekusar-drl
Collaborator

Hello @tjdurant, sorry for the delay and thanks for your interest in Qiskit. Yes, it is possible, but there's not much that can be exposed in such a callback. What is available:

  • the iteration number
  • the weighted sum over support vectors
  • a dict of alphas

Do you know what you would like to see in the callback?

In general, if we were to add a callback, we would need to:

  • design the callback interface, e.g. a function like def pegasos_callback(iter_num: int, weighted_sum: float, alphas: Dict[int, int])
  • extend the constructor with a new parameter called callback that is a callable as suggested above
  • call the callback in fit
  • add unit tests
  • add documentation
  • maybe extend the Pegasos tutorial, but this can be done separately
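The interface sketched above could look like the following. This is a hypothetical sketch of the proposed design, not part of the released qiskit-machine-learning API; the names `PegasosCallback`, `log_progress`, and `fit_step` are illustrative only:

```python
from typing import Callable, Dict, Optional

# Hypothetical callback signature matching the proposal above:
# (iteration number, weighted sum over support vectors, dict of alphas).
PegasosCallback = Callable[[int, float, Dict[int, int]], None]


def log_progress(iter_num: int, weighted_sum: float, alphas: Dict[int, int]) -> None:
    """Example user callback: report progress once per Pegasos step."""
    print(f"step {iter_num}: weighted_sum={weighted_sum:.4f}, "
          f"support vectors={len(alphas)}")


def fit_step(iter_num: int,
             weighted_sum: float,
             alphas: Dict[int, int],
             callback: Optional[PegasosCallback]) -> None:
    """Sketch of how fit() could invoke the callback each iteration.

    The callback parameter would default to None in the constructor,
    so existing code is unaffected."""
    if callback is not None:
        callback(iter_num, weighted_sum, alphas)
```

A user would then pass `callback=log_progress` (or any callable with that signature) to the constructor and receive one call per training step.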

@tjdurant
Author

@adekusar-drl , no worries!

I think that the main thing I would want to see is the objective function value. Similar to train_loss and val_loss in traditional ML libraries.

Sounds like that might be a reach at this point, though. I'd be happy to close this and wait until we're further down the road, but I defer to you and your thoughts on it.
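(For reference, the objective value such a callback could report is the standard Pegasos SVM objective, i.e. the regularized hinge loss. A minimal sketch of evaluating it from decision-function values; `pegasos_objective` is a hypothetical helper, not a PegasosQSVC method, and the regularization term is assumed precomputed:)

```python
import numpy as np


def pegasos_objective(decision_values: np.ndarray,
                      labels: np.ndarray,
                      reg_term: float) -> float:
    """Regularized hinge loss: reg_term + mean(max(0, 1 - y * f(x))).

    decision_values: f(x_i) for each training point
    labels:          y_i in {-1, +1}
    reg_term:        the (lambda/2) * ||w||^2 term, precomputed
    """
    hinge = np.maximum(0.0, 1.0 - labels * decision_values)
    return reg_term + float(hinge.mean())
```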

@adekusar-drl
Collaborator

As far as I can see from the code, the objective function is not evaluated directly. But I'm not very familiar with the algorithm, so if you feel confident, you may extend the implementation.

@gentinettagian
Contributor

@tjdurant The main advantage of PegasosQSVC is that in every iteration only one data point is "classified", since the algorithm is based on stochastic gradient descent. In contrast to classical ML methods, evaluating the objective function on the whole training/validation set is quite expensive here, so calculating the train/validation loss after every iteration would slow down training drastically. Of course, this is still something that would be good to have for testing and for creating plots. However, if we implement a callback that provides the current loss values, these calculations should be optional and only performed if they are actually needed by the callback.
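(One way to keep that expensive evaluation optional, as suggested above, is to hand the callback a zero-argument function that computes the loss only when invoked. A hypothetical sketch; `run_iteration` and its parameters are illustrative, not the actual implementation:)

```python
from typing import Callable, Optional

# Callback receives the step number and a *lazy* loss evaluator.
LazyLossCallback = Callable[[int, Callable[[], float]], None]


def run_iteration(step: int,
                  compute_loss: Callable[[], float],
                  callback: Optional[LazyLossCallback]) -> None:
    """Invoke the user callback with a deferred loss computation.

    The full train/validation loss is evaluated only if the callback
    actually calls compute_loss(), so users who just want the step
    number pay no extra cost per iteration."""
    if callback is not None:
        callback(step, compute_loss)
```

A plotting callback would call `compute_loss()` every N steps; a simple progress logger would never call it and incur no slowdown.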
