Prevent duplicate audits #139

Open
jeffpaul opened this issue Jun 26, 2018 · 6 comments

jeffpaul (Member) commented Jun 26, 2018

Description

There appears to be an issue with the Firestore integration locks: we're seeing duplicate audits of the same plugin checksum (in some cases more than two audits run). We'll want to review the locking to ensure it catches all edge cases and eliminates this waste of resources. Some examples:
Reply Comment to Email:
https://wptide.org/api/tide/v1/audit/51026
https://wptide.org/api/tide/v1/audit/51027
Feedburner Right Now Stats:
https://wptide.org/api/tide/v1/audit/51020
https://wptide.org/api/tide/v1/audit/51021
https://wptide.org/api/tide/v1/audit/51022

Steps to Reproduce

  1. Query for plugin results via checksum
  2. See multiple results
  3. 💩

Expected behavior:
Only a single audit result for each plugin checksum.

Actual behavior:
Multiple cases of two or more audit results for a single plugin checksum.

Reproduces how often:
I haven't run a full population analysis of the audit results, but it's frequent enough to be a concern.

Additional info:
It seems Firestore isn't locking the record fast enough; we may have to switch to Pub/Sub.
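If we stay on Firestore, one option worth exploring is claiming a lock document inside a transaction. Firestore transactions are optimistically concurrent, so two workers that race to claim the same checksum will conflict, and the loser's retry will see the winner's claim. A minimal sketch in Go; the audit-locks collection, project ID, and worker ID are hypothetical illustrations, not Tide's actual schema:

```go
package main

import (
	"context"
	"errors"
	"log"

	"cloud.google.com/go/firestore"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// errAlreadyClaimed signals that another worker holds the lock.
var errAlreadyClaimed = errors.New("audit already claimed for this checksum")

// claimAudit returns nil if this worker won the claim for checksum,
// or errAlreadyClaimed if another worker got there first.
func claimAudit(ctx context.Context, client *firestore.Client, checksum, workerID string) error {
	ref := client.Collection("audit-locks").Doc(checksum) // hypothetical collection
	return client.RunTransaction(ctx, func(ctx context.Context, tx *firestore.Transaction) error {
		_, err := tx.Get(ref)
		if err == nil {
			// The lock document exists: someone else claimed it.
			return errAlreadyClaimed
		}
		if status.Code(err) != codes.NotFound {
			return err // a real error, not just "document missing"
		}
		// No claim yet: write ours. If another worker raced us here,
		// the transaction conflicts, retries, and the retry hits the
		// "document exists" branch above.
		return tx.Set(ref, map[string]interface{}{
			"owner":     workerID,
			"claimedAt": firestore.ServerTimestamp,
		})
	})
}

func main() {
	ctx := context.Background()
	client, err := firestore.NewClient(ctx, "my-project") // hypothetical project ID
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	if err := claimAudit(ctx, client, "example-checksum", "worker-1"); err != nil {
		log.Printf("skipping audit: %v", err)
		return
	}
	log.Println("claim won; running audit")
}
```

A claim like this would also need an expiry or cleanup path so a crashed worker doesn't strand a checksum forever.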

jeffpaul (Member, Author)

Per today's bug scrub, we're keeping this in the 1.0.0 release.

@jeffpaul jeffpaul added this to the 1.0.0-beta2 milestone Nov 26, 2018
jeffpaul (Member, Author)

Note that this issue was referenced in Slack.

jeffpaul (Member, Author)

Per discussion in Slack, we're punting this to Future Release as this is technically challenging and may prove to be more effort than we can commit to for 1.0.0.

@jeffpaul jeffpaul removed this from the 1.0.0-beta2 milestone Nov 28, 2018
valendesigns (Contributor)

This one is super frustrating and hard to test locally. It's really obvious in a cloud environment, where the Kubernetes pods hit the message queue so fast that 1-3 pods appear to pick up the same message at nearly the same time.
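For what it's worth, even with a single Pub/Sub subscription, delivery is at-least-once: if a pod runs past the ack deadline during a long audit, the message is redelivered to another pod, which looks exactly like several pods picking it up at once. A minimal consumer sketch in Go, with a hypothetical project ID and subscription name:

```go
package main

import (
	"context"
	"log"
	"time"

	"cloud.google.com/go/pubsub"
)

func main() {
	ctx := context.Background()
	client, err := pubsub.NewClient(ctx, "my-project") // hypothetical project ID
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sub := client.Subscription("audit-tasks") // hypothetical subscription
	// Keep extending the ack deadline while a long audit runs, so the
	// message isn't redelivered to another pod mid-audit.
	sub.ReceiveSettings.MaxExtension = 30 * time.Minute
	sub.ReceiveSettings.MaxOutstandingMessages = 1 // one audit per pod at a time

	err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
		checksum := string(m.Data)
		// Delivery is still at-least-once, so the audit must stay
		// idempotent (e.g. the transactional claim sketched earlier).
		log.Printf("auditing checksum %s", checksum)
		m.Ack() // ack only after the work (or the claim check) succeeds
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

Extending the ack deadline reduces redelivery but can't eliminate it, so the audit itself still needs an idempotency check such as the transactional claim sketched earlier in the thread.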

@valendesigns valendesigns added this to the 1.0.0 milestone Mar 1, 2019
jeffpaul (Member, Author)

@rheinardkorf any chance you'd be able to work on a PR for this?

jeffpaul (Member, Author)

Punting this to Future Release per today's Tidechat discussion.

@jeffpaul jeffpaul modified the milestones: 1.0.0, Future Release Oct 29, 2019