Develop end-to-end testing suite for all full system testing in Federated Learning #37

Open
9 tasks
cereallarceny opened this issue Feb 3, 2020 · 0 comments
Labels
Priority: 2 - High 😰 (Should be fixed as quickly as possible, ideally within the current or following sprint)
Severity: 3 - Medium 😒 (Does not cause a failure, impair usability, or interfere with the system)
Status: Available 👋 (Available for assignment, who wants it?)
Type: Epic 🤙 (Describes a large amount of functionality that will likely be broken down into smaller issues)

Comments

cereallarceny (Member) commented Feb 3, 2020

This issue doesn't need to live in syft-proto, but it must exist as a GitHub issue somewhere. This repository is probably the best central location for it for now.

This issue describes the testing suite we need to develop, one that spans multiple repositories, languages, and environments. We want to ensure that our worker libraries (syft.js, KotlinSyft, SwiftSyft, and PySyft's FL Worker) are always working properly with PyGrid. Likewise, we'll need to test this against various versions of syft-proto and Threepio to ensure that our assisting libraries pair nicely with the other projects. We need to do roughly the following:

  • Create a central Dockerfile to provision and organize all of our necessary resources (a rough harness sketch follows this list)
  • Create a model, training plan, and averaging plan and upload them to a PyGrid instance
  • Allow syft.js to participate in a round of training
  • Write tests to ensure that PyGrid is updating the model from the syft.js round
  • Allow the PySyft FL Worker to participate in a round of training
  • Write tests to ensure that PyGrid is updating the model from the PySyft FL Worker round
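
A minimal sketch of how the Docker-based harness could be wired into pytest, assuming a hypothetical `docker-compose.e2e.yml` that provisions the PyGrid node (and any helper containers, e.g. a headless Node process driving syft.js), plus an assumed `/status` health-check endpoint on port 5000; none of these names are confirmed PyGrid interfaces:

```python
import subprocess
import time

import pytest
import requests

GRID_URL = "http://localhost:5000"       # assumed port for the local PyGrid node
COMPOSE_FILE = "docker-compose.e2e.yml"  # hypothetical compose file for this suite


@pytest.fixture(scope="session")
def pygrid_node():
    # Provision PyGrid (and any helper containers) once for the whole test session.
    subprocess.run(["docker", "compose", "-f", COMPOSE_FILE, "up", "-d"], check=True)
    try:
        # Poll the assumed /status endpoint until the node answers, or give up after ~60s.
        for _ in range(60):
            try:
                if requests.get(f"{GRID_URL}/status", timeout=1).ok:
                    break
            except requests.ConnectionError:
                pass
            time.sleep(1)
        else:
            pytest.fail("PyGrid node never became reachable")
        yield GRID_URL
    finally:
        # Tear everything down so CI runs stay reproducible.
        subprocess.run(["docker", "compose", "-f", COMPOSE_FILE, "down", "-v"], check=True)
```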

We should ideally cover at least the following cases:

  • Is the round able to start?
  • Is the round able to take reports?
  • How do we manage a variety of different server_config values? (A parametrized sketch follows this list.)
  • ... others we should cover? Let's open a discussion in the comments below.
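
One way the round-level checks could be parametrized over different server_config values is sketched below. The config keys (max_workers, pool_selection, num_cycles) and the host_model / request_cycle / report_diff helpers are placeholders invented for illustration, not PyGrid or PySyft APIs; the real suite would implement them against whichever worker (syft.js or the PySyft FL Worker) is under test:

```python
import pytest

# Example server_config variants to exercise; the keys are illustrative only.
SERVER_CONFIGS = [
    {"max_workers": 5, "pool_selection": "random", "num_cycles": 1},
    {"max_workers": 1, "pool_selection": "iterate", "num_cycles": 3},
]


def host_model(grid_url, server_config):
    """Placeholder: upload the model, training plan, and averaging plan."""
    raise NotImplementedError("replace with the real hosting call")


def request_cycle(grid_url, process_id):
    """Placeholder: have a worker ask to join the current training cycle."""
    raise NotImplementedError("replace with the real cycle-request call")


def report_diff(grid_url, cycle, diff):
    """Placeholder: report a model diff back to PyGrid for averaging."""
    raise NotImplementedError("replace with the real reporting call")


@pytest.mark.parametrize("server_config", SERVER_CONFIGS)
def test_round_starts_and_accepts_reports(pygrid_node, server_config):
    process_id = host_model(pygrid_node, server_config)

    # "Is the round able to start?" -- a worker should be accepted into a cycle.
    cycle = request_cycle(pygrid_node, process_id)
    assert cycle["status"] == "accepted"

    # "Is the round able to take reports?" -- the worker's diff should be accepted.
    report = report_diff(pygrid_node, cycle, diff=b"")
    assert report["status"] == "success"
```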

Update 29/05/2020 - We do not currently have any plans to include KotlinSyft or SwiftSyft in our end-to-end testing because of their dependence on a live device or an emulator. Until we have the ability to implement these libraries in a "headless" context, they will not be practical to test end-to-end. We're open to challenging this thought and would love to implement them, but don't see a clear path at the moment.

@cereallarceny cereallarceny created this issue from a note in Model-centric Federated Learning (To do) Feb 3, 2020
@cereallarceny cereallarceny added the Type: Epic 🤙 Describes a large amount of functionality that will likely be broken down into smaller issues label Feb 3, 2020
@cereallarceny cereallarceny added the Status: Blocked ✖️ Cannot work on this because of some other incomplete work label Feb 5, 2020
@cereallarceny cereallarceny moved this from To do to Backlog in Model-centric Federated Learning Feb 5, 2020
@iamtrask iamtrask added the gsoc label Feb 27, 2020
@cereallarceny cereallarceny removed the Status: Blocked ✖️ Cannot work on this because of some other incomplete work label Feb 28, 2020
@cereallarceny cereallarceny changed the title Develop end-to-end testing suite for all full system testing in Federated Learning GSoC Project: Develop end-to-end testing suite for all full system testing in Federated Learning Feb 28, 2020
@cereallarceny cereallarceny moved this from Backlog to To do in Model-centric Federated Learning Mar 2, 2020
@cereallarceny cereallarceny moved this from To do to Backlog in Model-centric Federated Learning May 26, 2020
@cereallarceny cereallarceny changed the title GSoC Project: Develop end-to-end testing suite for all full system testing in Federated Learning Develop end-to-end testing suite for all full system testing in Federated Learning May 26, 2020
@cereallarceny cereallarceny moved this from Backlog to To do in Model-centric Federated Learning May 28, 2020
@cereallarceny cereallarceny moved this from To do to Backlog in Model-centric Federated Learning May 28, 2020
@cereallarceny cereallarceny added Priority: 2 - High 😰 Should be fixed as quickly as possible, ideally within the current or following sprint Severity: 3 - Medium 😒 Does not cause a failure, impair usability, or interfere with the system Status: Available 👋 Available for assignment, who wants it? labels May 29, 2020