graphql IDL not syncing up to all causal cluster members #174

Open · navarants opened this issue Apr 3, 2019 · 6 comments

@navarants

I'm having an issue getting the GraphQL IDL to sync up to all causal cluster members in a timely manner.
I load the IDL using call graphql.idl(). Immediately after loading it I can go to any causal cluster member and see the updated IDL representation using call graphql.schema().

Even though the new IDL is represented correctly on all members, I cannot actually run GraphQL queries against the new fields on those members.
Sometimes the sync eventually happens after several hours and queries start working; in other cases I have to restart the cluster members for the update to take effect.

plugin version: 3.4.0.1
neo4j version: 3.4.11
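
For reference, the repro boils down to something like the following (the IDL body here is only an illustration; the Account type and contactSms field come from the errors below, the rest is assumed):

// on the leader: load the IDL
CALL graphql.idl('type Account { id: ID! contactSms: String }')

// on every core and read replica: the schema change shows up almost immediately
CALL graphql.schema()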

Error example from a GraphQL request:

"message": "Validation error of type FieldUndefined: Field 'contactSms' in type 'Account' is undefined",
"locations": [
{
"line": 4,
"column": 5
}

Error example in the Neo4j browser:

Neo.ClientError.Procedure.ProcedureCallFailed: Failed to invoke procedure graphql.query: Caused by: java.lang.RuntimeException: Error executing GraphQL Query:
ValidationError{validationErrorType=FieldUndefined, message=Validation error of type FieldUndefined: Field 'contactSms' in type 'Account' is undefined, locations=[SourceLocation{line=4, column=7}], description='Field 'contactSms' in type 'Account' is undefined'}
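
For context, both errors above come from a request along these lines (the query body is an assumption reconstructed from the error locations, using the graphql.query(query, variables) form):

CALL graphql.query('
query {
  Account {
    contactSms
  }
}', {})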

@terryf82
Contributor

Experiencing similar behavior on Enterprise 3.5.3 with plugin version 3.5.0.2.

@jexp
Contributor

jexp commented Apr 17, 2019

Sorry that you had these issues. I added changes for both 3.4 and 3.5 and will do a release soon.

If you want to test this pre-release, here are the jars:
It should work automatically; I added a grace period of 10s, hope that's ok.
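
To verify, assuming an IDL like the one sketched above: load it on the leader, wait out the grace period, then query a member that previously failed.

// on the leader
CALL graphql.idl('type Account { id: ID! contactSms: String }')

// ~10s later, on a core follower or read replica
CALL graphql.query('{ Account { contactSms } }', {})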

@terryf82
Contributor

Schema changes now appear to be propagating across to the other core and replica servers quickly, thanks for your help.

@jexp jexp closed this as completed Jun 19, 2019
@terryf82
Contributor

Hi Michael, I tried contacting you a few weeks ago via the Slack channel about this. After our initial testing, we are now in a situation where schema changes made on the leader of a cluster don't propagate out to the replicas.

We have been trying to understand what might be causing this. One idea is that it could be a connectivity issue between the leader and the replicas, which would be strange given that data changes do propagate properly. Out of my own curiosity, can you explain what protocol / port is used for the schema propagation? Is it via the Raft protocol?

@jexp
Contributor

jexp commented Jun 20, 2019

If the graph data itself propagates, then the schema should also propagate.
We store it in graph metadata that isn't accessible via Cypher, along with a timestamp of when it was last updated.

Could it perhaps be that the clocks on the different instances are out of sync?
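
A quick way to check, assuming you can open a session against each member's bolt address directly rather than through the routing driver: run the following on every core and replica and compare the results.

// skew between members larger than the grace period could leave a stale schema in place
RETURN datetime() AS serverTime, timestamp() AS epochMillis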

@jexp jexp reopened this Jun 20, 2019
@terryf82
Contributor

The clocks appear to be in sync. We are running the cluster inside Kubernetes, and the time looks consistent across the nodes and the individual pods.
