This repository has been archived by the owner on Nov 11, 2022. It is now read-only.

BigQueryIO.write fails when destination has partition decorator #620

Open
darshanmehta10 opened this issue Jan 23, 2018 · 1 comment


darshanmehta10 commented Jan 23, 2018

The following code writes to BigQuery:

tableRows.apply(BigQueryIO.writeTableRows()
    .to(destination)
    .withCreateDisposition(CREATE_IF_NEEDED)
    .withWriteDisposition(WRITE_APPEND)
    .withSchema(tableSchema));

Here's the destination's implementation:

public TableDestination apply(ValueInSingleWindow<TableRow> input) {
    String partition = timestampExtractor.apply(input.getValue())
        .toString(DateTimeFormat.forPattern("yyyyMMdd").withZoneUTC());
    TableReference tableReference = new TableReference();
    tableReference.setDatasetId(dataset);
    tableReference.setProjectId(projectId);
    tableReference.setTableId(String.format("%s_%s", table, partition));
    log.debug("Will write to BigQuery table: {}", tableReference);
    return new TableDestination(tableReference, null);
}

When the dataflow tries to write to this table, I see the following message:

"errors" : [ {
 "domain" : "global",
 "message" : "Cannot read partition information from a table that is not partitioned: <project_id>:<dataset>.<table>$19730522",
 "reason" : "invalid"
 } ]

So it looks like the table is not being created as a partitioned table in the first place?
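For reference, BigQuery's partition decorator uses a `$` separator (`table$19730522`, as in the error above) and only works if the target table is already date-partitioned; a name built with an underscore (`table_19730522`) is just an ordinary table name, which is why `CREATE_IF_NEEDED` doesn't produce a partitioned table here. A minimal sketch of the two forms (plain Java with `java.time`; the table name `events` is illustrative):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class PartitionDecorator {
    // "events$19730522": the '$' decorator addresses one day partition of a
    // date-partitioned table. The table must already exist with partitioning
    // (or be created with a TimePartitioning spec) for writes to succeed.
    static String decoratedTableId(String table, LocalDate day) {
        return table + "$" + day.format(DateTimeFormatter.BASIC_ISO_DATE);
    }

    // "events_19730522": an underscore suffix is just a distinct table name
    // (date-sharded style), not a partition of a single table.
    static String shardedTableId(String table, LocalDate day) {
        return table + "_" + day.format(DateTimeFormatter.BASIC_ISO_DATE);
    }

    public static void main(String[] args) {
        LocalDate day = LocalDate.of(1973, 5, 22);
        System.out.println(decoratedTableId("events", day)); // events$19730522
        System.out.println(shardedTableId("events", day));   // events_19730522
    }
}
```

Note that later Beam versions added a `TableDestination` constructor that accepts a time-partitioning spec, which reportedly lets `CREATE_IF_NEEDED` create the table as partitioned before the decorator is used; I have not verified whether any equivalent exists in 2.2.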

Apache Beam version: 2.2.0


santhh commented Nov 23, 2018

Hello, I am also seeing the same issue with 2.7. Is there any workaround?
