[CHANGE ME] Re-generated to pick up changes from googleapis. (#258)
* fix: add resource definition for Table/ReadStream/WriteStream message
fix: add proper resource_reference for messages
chore: enable gapic v2 and proto annotation for bigquery/storage/v1alpha2 API.
committer: @xiaozhenliu-gg5

PiperOrigin-RevId: 310224144

Source-Author: Google APIs <noreply@google.com>
Source-Date: Wed May 6 14:09:13 2020 -0700
Source-Repo: googleapis/googleapis
Source-Sha: 30cfca094376e4904e32e71c838a81169fd4a2e2
Source-Link: googleapis/googleapis@30cfca0

* chore: enable gapic v2 and proto annotation for bigquery/storage/v1 API.

committer: @xiaozhenliu-gg5
PiperOrigin-RevId: 310225059

Source-Author: Google APIs <noreply@google.com>
Source-Date: Wed May 6 14:13:20 2020 -0700
Source-Repo: googleapis/googleapis
Source-Sha: c08dcec05ce1c181bcdbce59cabba36e0e541ff6
Source-Link: googleapis/googleapis@c08dcec

* chore: enable gapic v2 and proto annotation for bigquery/storage/v1beta2 API.

committer: @xiaozhenliu-gg5
PiperOrigin-RevId: 310239576

Source-Author: Google APIs <noreply@google.com>
Source-Date: Wed May 6 15:28:38 2020 -0700
Source-Repo: googleapis/googleapis
Source-Sha: 2fc2caaacb15949c7f80426bfc7dafdd41dbc333
Source-Link: googleapis/googleapis@2fc2caa

* fix: add pom dependencies (#259)

* add pom dependencies

* fix

* clean up v1beta1

Co-authored-by: Xiaozhen Liu <xiaozhenliu@google.com>
yoshi-automation and xiaozhenliu-gg5 committed May 7, 2020
1 parent 252440a commit 5b99ddd
Showing 30 changed files with 3,000 additions and 316 deletions.
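For context, the resource definitions and resource_reference annotations mentioned in the commit message above are what drive the Java GAPIC generator to emit typed resource-name helpers such as ProjectName and ReadStreamName, which the regenerated samples in the diff below use in place of bare strings. A minimal sketch of how those helpers behave, assuming the generated classes in the v1 package; the project, location, session, and stream values are placeholders:

```java
import com.google.cloud.bigquery.storage.v1.ProjectName;
import com.google.cloud.bigquery.storage.v1.ReadStreamName;

public class ResourceNameSketch {
  public static void main(String[] args) {
    // Typed parent resource, replacing the bare "projects/{project_id}" string
    // that the older samples passed around.
    ProjectName parent = ProjectName.of("my-project"); // hypothetical project ID
    System.out.println(parent.toString()); // projects/my-project

    // Typed read-stream name generated from the ReadStream resource definition.
    ReadStreamName stream =
        ReadStreamName.of("my-project", "us", "my-session", "my-stream"); // placeholder components
    System.out.println(stream.toString());
    // projects/my-project/locations/us/sessions/my-session/streams/my-stream

    // The generated class can also parse a fully qualified name back into its components.
    ReadStreamName parsed = ReadStreamName.parse(stream.toString());
    System.out.println(parsed.getSession()); // my-session
  }
}
```

The diff below then adds createReadSession overloads that accept the typed ProjectName directly and call toString() when building the request.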
@@ -37,7 +37,7 @@
* <pre>
* <code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* String parent = "";
* ProjectName parent = ProjectName.of("[PROJECT]");
* ReadSession readSession = ReadSession.newBuilder().build();
* int maxStreamCount = 0;
* ReadSession response = baseBigQueryReadClient.createReadSession(parent, readSession, maxStreamCount);
@@ -173,7 +173,7 @@ public BigQueryReadStub getStub() {
*
* <pre><code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* String parent = "";
* ProjectName parent = ProjectName.of("[PROJECT]");
* ReadSession readSession = ReadSession.newBuilder().build();
* int maxStreamCount = 0;
* ReadSession response = baseBigQueryReadClient.createReadSession(parent, readSession, maxStreamCount);
@@ -191,6 +191,58 @@ public BigQueryReadStub getStub() {
* <p>Streams must be read starting from offset 0.
* @throws com.google.api.gax.rpc.ApiException if the remote call fails
*/
public final ReadSession createReadSession(
ProjectName parent, ReadSession readSession, int maxStreamCount) {
CreateReadSessionRequest request =
CreateReadSessionRequest.newBuilder()
.setParent(parent == null ? null : parent.toString())
.setReadSession(readSession)
.setMaxStreamCount(maxStreamCount)
.build();
return createReadSession(request);
}

// AUTO-GENERATED DOCUMENTATION AND METHOD
/**
* Creates a new read session. A read session divides the contents of a BigQuery table into one or
* more streams, which can then be used to read data from the table. The read session also
* specifies properties of the data to be read, such as a list of columns or a push-down filter
* describing the rows to be returned.
*
* <p>A particular row can be read by at most one stream. When the caller has reached the end of
* each stream in the session, then all the data in the table has been read.
*
* <p>Data is assigned to each stream such that roughly the same number of rows can be read from
* each stream. Because the server-side unit for assigning data is collections of rows, the API
* does not guarantee that each stream will return the same number of rows. Additionally, the
* limits are enforced based on the number of pre-filtered rows, so some filters can lead to
* lopsided assignments.
*
* <p>Read sessions automatically expire 24 hours after they are created and do not require manual
* clean-up by the caller.
*
* <p>Sample code:
*
* <pre><code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* ProjectName parent = ProjectName.of("[PROJECT]");
* ReadSession readSession = ReadSession.newBuilder().build();
* int maxStreamCount = 0;
* ReadSession response = baseBigQueryReadClient.createReadSession(parent.toString(), readSession, maxStreamCount);
* }
* </code></pre>
*
* @param parent Required. The request project that owns the session, in the form of
* `projects/{project_id}`.
* @param readSession Required. Session to be created.
* @param maxStreamCount Max initial number of streams. If unset or zero, the server will provide
*     a number of streams chosen to produce reasonable throughput. Must be non-negative. The number
*     of streams may be lower than the requested number, depending on the amount of parallelism that
*     is reasonable for the table. An error will be returned if the max count is greater than the
* current system max limit of 1,000.
* <p>Streams must be read starting from offset 0.
* @throws com.google.api.gax.rpc.ApiException if the remote call fails
*/
public final ReadSession createReadSession(
String parent, ReadSession readSession, int maxStreamCount) {
CreateReadSessionRequest request =
@@ -225,7 +277,12 @@ public final ReadSession createReadSession(
*
* <pre><code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* CreateReadSessionRequest request = CreateReadSessionRequest.newBuilder().build();
* ProjectName parent = ProjectName.of("[PROJECT]");
* ReadSession readSession = ReadSession.newBuilder().build();
* CreateReadSessionRequest request = CreateReadSessionRequest.newBuilder()
* .setParent(parent.toString())
* .setReadSession(readSession)
* .build();
* ReadSession response = baseBigQueryReadClient.createReadSession(request);
* }
* </code></pre>
@@ -260,7 +317,12 @@ public final ReadSession createReadSession(CreateReadSessionRequest request) {
*
* <pre><code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* CreateReadSessionRequest request = CreateReadSessionRequest.newBuilder().build();
* ProjectName parent = ProjectName.of("[PROJECT]");
* ReadSession readSession = ReadSession.newBuilder().build();
* CreateReadSessionRequest request = CreateReadSessionRequest.newBuilder()
* .setParent(parent.toString())
* .setReadSession(readSession)
* .build();
* ApiFuture&lt;ReadSession&gt; future = baseBigQueryReadClient.createReadSessionCallable().futureCall(request);
* // Do something
* ReadSession response = future.get();
@@ -284,7 +346,10 @@ public final UnaryCallable<CreateReadSessionRequest, ReadSession> createReadSess
*
* <pre><code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* ReadRowsRequest request = ReadRowsRequest.newBuilder().build();
* ReadStreamName readStream = ReadStreamName.of("[PROJECT]", "[LOCATION]", "[SESSION]", "[STREAM]");
* ReadRowsRequest request = ReadRowsRequest.newBuilder()
* .setReadStream(readStream.toString())
* .build();
*
* ServerStream&lt;ReadRowsResponse&gt; stream = baseBigQueryReadClient.readRowsCallable().call(request);
* for (ReadRowsResponse response : stream) {
@@ -314,7 +379,10 @@ public final ServerStreamingCallable<ReadRowsRequest, ReadRowsResponse> readRows
*
* <pre><code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* SplitReadStreamRequest request = SplitReadStreamRequest.newBuilder().build();
* ReadStreamName name = ReadStreamName.of("[PROJECT]", "[LOCATION]", "[SESSION]", "[STREAM]");
* SplitReadStreamRequest request = SplitReadStreamRequest.newBuilder()
* .setName(name.toString())
* .build();
* SplitReadStreamResponse response = baseBigQueryReadClient.splitReadStream(request);
* }
* </code></pre>
@@ -343,7 +411,10 @@ public final SplitReadStreamResponse splitReadStream(SplitReadStreamRequest requ
*
* <pre><code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* SplitReadStreamRequest request = SplitReadStreamRequest.newBuilder().build();
* ReadStreamName name = ReadStreamName.of("[PROJECT]", "[LOCATION]", "[SESSION]", "[STREAM]");
* SplitReadStreamRequest request = SplitReadStreamRequest.newBuilder()
* .setName(name.toString())
* .build();
* ApiFuture&lt;SplitReadStreamResponse&gt; future = baseBigQueryReadClient.splitReadStreamCallable().futureCall(request);
* // Do something
* SplitReadStreamResponse response = future.get();
@@ -30,7 +30,7 @@
* <pre>
* <code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* String parent = "";
* ProjectName parent = ProjectName.of("[PROJECT]");
* ReadSession readSession = ReadSession.newBuilder().build();
* int maxStreamCount = 0;
* ReadSession response = baseBigQueryReadClient.createReadSession(parent, readSession, maxStreamCount);
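Putting the regenerated samples above together, a rough end-to-end sketch of the flow the updated Javadoc describes: create a read session for a table, then read one of its streams from offset 0 through the server-streaming callable. This is illustrative only, not part of the diff; the project and table names are placeholders, and the ReadSession settings (table, data format) are assumptions.

```java
import com.google.api.gax.rpc.ServerStream;
import com.google.cloud.bigquery.storage.v1.BaseBigQueryReadClient;
import com.google.cloud.bigquery.storage.v1.CreateReadSessionRequest;
import com.google.cloud.bigquery.storage.v1.DataFormat;
import com.google.cloud.bigquery.storage.v1.ProjectName;
import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;
import com.google.cloud.bigquery.storage.v1.ReadSession;

public class ReadFlowSketch {
  public static void main(String[] args) throws Exception {
    try (BaseBigQueryReadClient client = BaseBigQueryReadClient.create()) {
      // Build the request with an explicit parent and session, as in the updated samples.
      CreateReadSessionRequest request =
          CreateReadSessionRequest.newBuilder()
              .setParent(ProjectName.of("my-project").toString()) // hypothetical project
              .setReadSession(
                  ReadSession.newBuilder()
                      // Hypothetical table to read; not shown in the diff above.
                      .setTable("projects/my-project/datasets/my_dataset/tables/my_table")
                      .setDataFormat(DataFormat.AVRO)
                      .build())
              .setMaxStreamCount(1)
              .build();
      ReadSession session = client.createReadSession(request);

      // Streams must be read starting from offset 0; read everything from the first stream.
      ReadRowsRequest readRows =
          ReadRowsRequest.newBuilder()
              .setReadStream(session.getStreams(0).getName())
              .build();
      ServerStream<ReadRowsResponse> stream = client.readRowsCallable().call(readRows);
      for (ReadRowsResponse response : stream) {
        System.out.println("Received " + response.getRowCount() + " rows");
      }
    }
  }
}
```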
