Diego syncing unnecessarily twice #3506

Open

Benjamintf1 opened this issue Nov 3, 2023 · 1 comment

Comments

Benjamintf1 (Member) commented Nov 3, 2023

Issue

When running against MySQL, timestamps do not have sub-second resolution. As a result, every time a process is updated it is synchronized with Diego twice: the first time when the update call is made, and a second time in the periodic synchronization cycle. This happens because the timestamp on the Diego LRP (set from the in-memory value) and the timestamp read back from CAPI's database almost always differ. Based on CAPI unit tests this does not appear to happen with Postgres, but I haven't verified it. It is also not caught by the Diego sync tests, because those tests set the annotation using the value already stored in the database. Given that we are highly unlikely to update a process multiple times within one second, I suggest reducing the precision of this comparison to whole seconds.
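
Roughly what the failing comparison looks like, using the values from the example output below (a standalone Ruby sketch; variable names are illustrative, not CAPI's):

    updated_at_in_memory  = 1673287766.4441805   # sub-second value CAPI sends to Diego as the annotation
    updated_at_from_mysql = Time.at(1673287766)  # MySQL DATETIME drops the fractional seconds

    annotation = updated_at_in_memory.to_s                 # "1673287766.4441805"
    puts updated_at_from_mysql.to_f.to_s == annotation     # => false, so the periodic sync re-sends the LRP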

Context

Steps to Reproduce

Run CF.
Push an application.
See that CAPI updates Diego twice.

Expected result

CAPI should sync the process with Diego once.

Current result

It updates twice.

process.updated_at: 1673287766.0
diego_lrp.annotation: 1673287766.4441805
$ while true; do cfdot desired-lrps | jq -r '.annotation + "\t" + (.routes["internal-router"] | last | .hostname)'; sleep 0.25; done

$ cf map-route dora apps.internal --hostname dora6
1673288114.5011706      dora6.apps.internal
1673288114.0    dora6.apps.internal

Possible Fixes

  • Update CAPI so the timestamp precision in its database matches Diego's.
  • Update CAPI to only send the value after it has gone through its own database constraints.
  • Always record the timestamp at whole-second precision as soon as it comes in on the request.
  • Fix our code here
    elsif process.updated_at.to_f.to_s != diego_lrp.annotation
    to use a better comparison.
  • Fix our code there to allow a 1-second "error bar" (a sketch follows this list).
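
A minimal sketch of what the last option could look like, assuming the comparison is pulled into a small helper (the name and placement are hypothetical, not the actual CAPI code):

    # Treat the annotation as stale only when the timestamps differ by at least
    # one second, so MySQL's truncated updated_at no longer triggers a re-sync.
    def annotation_stale?(process_updated_at, diego_annotation)
      (process_updated_at.to_f - diego_annotation.to_f).abs >= 1.0
    end

    annotation_stale?(Time.at(1673287766), '1673287766.4441805')  # => false (same update)
    annotation_stale?(Time.at(1673287800), '1673287766.4441805')  # => true  (real change)
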
philippthun (Member) commented:

As there was already another recent change to support microsecond timestamps (see #3484), and all the MySQL versions we support handle them, I would go for option 1.

As the processes table is much larger than the asg_timestamps table (which only has a single record), I'm unsure about the migration.
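
For reference, a minimal sketch of what the option-1 migration might look like, assuming a Sequel migration along the lines of the one presumably used for #3484 (the column choice and MySQL-only guard are assumptions, not a worked-out change):

    Sequel.migration do
      up do
        if database_type == :mysql
          alter_table(:processes) do
            # datetime(6) keeps microseconds, matching what CAPI sends to Diego
            set_column_type :updated_at, 'datetime(6)'
          end
        end
      end

      down do
        if database_type == :mysql
          alter_table(:processes) do
            set_column_type :updated_at, 'datetime'
          end
        end
      end
    end

On MySQL, changing a column's temporal precision typically rewrites the table, which is where the concern about the size of the processes table comes in.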
