Hi there,
I am seeing row-count differences when reading from BigQuery and writing to SQL Server. I tried locally: if I set
.master("local"), the dataframe read from BQ has 160k rows, but after the dataframe write I only get 80k rows in the SQL Server table.
If I run the same job with .master("local[*]"), the read and write counts match.
But when I run the code on the cluster with --master yarn --deploy-mode cluster, I am still getting differences.
Do you have any idea what is happening? It looks like some partitions are not being written.
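For reference, the cluster submission looks roughly like this (the script name and connector jar versions are placeholders, not my exact setup; note the flag is spelled --deploy-mode, with a hyphen):

```shell
# Submit the job to YARN in cluster mode.
# Jars for the BigQuery connector and the SQL Server JDBC driver are
# assumed here; versions are illustrative only.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --jars spark-bigquery-with-dependencies_2.12-0.32.2.jar,mssql-jdbc-12.4.2.jre8.jar \
  my_job.py
```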
Best regards.