feat: updates to Write API v1beta2 public interface, migrate to Java microgenerator (#728)

* chore: migrate java-bigquerystorage to the Java microgenerator

Committer: @miraleung
PiperOrigin-RevId: 345311069

Source-Author: Google APIs <noreply@google.com>
Source-Date: Wed Dec 2 14:17:15 2020 -0800
Source-Repo: googleapis/googleapis
Source-Sha: e39e42f368d236203a774ee994fcb4d730c33a83
Source-Link: googleapis/googleapis@e39e42f

* feat!: Updates to BigQuery Write API V1Beta2 public interface. This includes breaking changes to the API; this is acceptable because the API has not been officially launched yet.

PiperOrigin-RevId: 345469340

Source-Author: Google APIs <noreply@google.com>
Source-Date: Thu Dec 3 09:33:11 2020 -0800
Source-Repo: googleapis/googleapis
Source-Sha: b53c4d98aab1eae3dac90b37019dede686782f13
Source-Link: googleapis/googleapis@b53c4d9

* fix: Update gapic-generator-java to 0.0.7

Committer: @miraleung
PiperOrigin-RevId: 345476969

Source-Author: Google APIs <noreply@google.com>
Source-Date: Thu Dec 3 10:07:32 2020 -0800
Source-Repo: googleapis/googleapis
Source-Sha: 7be2c821dd88109038c55c89f7dd48f092eeab9d
Source-Link: googleapis/googleapis@7be2c82

* chore: rollback migrating java-bigquerystorage to the Java microgenerator

Committer: @miraleung
PiperOrigin-RevId: 345522380

Source-Author: Google APIs <noreply@google.com>
Source-Date: Thu Dec 3 13:28:07 2020 -0800
Source-Repo: googleapis/googleapis
Source-Sha: f8f975c7d43904e90d6c5f1684fdb6804400e641
Source-Link: googleapis/googleapis@f8f975c

* chore: migrate java-bigquerystorage to the Java microgenerator

Committer: @miraleung
PiperOrigin-RevId: 346405446

Source-Author: Google APIs <noreply@google.com>
Source-Date: Tue Dec 8 14:03:11 2020 -0800
Source-Repo: googleapis/googleapis
Source-Sha: abc43060f136ce77124754a48f367102e646844a
Source-Link: googleapis/googleapis@abc4306

* chore: update gapic-generator-java to 0.0.11

Committer: @miraleung
PiperOrigin-RevId: 347036369

Source-Author: Google APIs <noreply@google.com>
Source-Date: Fri Dec 11 11:13:47 2020 -0800
Source-Repo: googleapis/googleapis
Source-Sha: 6d65640b1fcbdf26ea76cb720de0ac138cae9bed
Source-Link: googleapis/googleapis@6d65640

Co-authored-by: Stephanie Wang <stephaniewang526@users.noreply.github.com>
Co-authored-by: stephwang <stephwang@google.com>
3 people committed Dec 17, 2020
1 parent c32a86a commit 2fc5968
Showing 83 changed files with 6,730 additions and 3,802 deletions.
46 changes: 0 additions & 46 deletions google-cloud-bigquerystorage/clirr-ignored-differences.xml

This file was deleted.

google-cloud-bigquerystorage/src/main/java/com/google/cloud/bigquery/storage/v1/BaseBigQueryReadClient.java
@@ -5,14 +5,15 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package com.google.cloud.bigquery.storage.v1;

import com.google.api.core.BetaApi;
@@ -25,7 +26,7 @@
import java.util.concurrent.TimeUnit;
import javax.annotation.Generated;

// AUTO-GENERATED DOCUMENTATION AND SERVICE
// AUTO-GENERATED DOCUMENTATION AND CLASS.
/**
* Service Description: BigQuery Read API.
*
@@ -34,18 +35,7 @@
* <p>This class provides the ability to make remote calls to the backing service through method
* calls that map to API methods. Sample code to get started:
*
* <pre>
* <code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* ProjectName parent = ProjectName.of("[PROJECT]");
* ReadSession readSession = ReadSession.newBuilder().build();
* int maxStreamCount = 0;
* ReadSession response = baseBigQueryReadClient.createReadSession(parent, readSession, maxStreamCount);
* }
* </code>
* </pre>
*
* <p>Note: close() needs to be called on the baseBigQueryReadClient object to clean up resources
* <p>Note: close() needs to be called on the BaseBigQueryReadClient object to clean up resources
* such as threads. In the example above, try-with-resources is used, which automatically calls
* close().
*
@@ -74,30 +64,28 @@
*
* <p>To customize credentials:
*
* <pre>
* <code>
* <pre>{@code
* BaseBigQueryReadSettings baseBigQueryReadSettings =
* BaseBigQueryReadSettings.newBuilder()
* .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
* .build();
* BaseBigQueryReadClient baseBigQueryReadClient =
* BaseBigQueryReadClient.create(baseBigQueryReadSettings);
* </code>
* </pre>
* }</pre>
*
* To customize the endpoint:
* <p>To customize the endpoint:
*
* <pre>
* <code>
* <pre>{@code
* BaseBigQueryReadSettings baseBigQueryReadSettings =
* BaseBigQueryReadSettings.newBuilder().setEndpoint(myEndpoint).build();
* BaseBigQueryReadClient baseBigQueryReadClient =
* BaseBigQueryReadClient.create(baseBigQueryReadSettings);
* </code>
* </pre>
* }</pre>
*
* <p>Please refer to the GitHub repository's samples for more quickstart code snippets.
*/
@Generated("by gapic-generator")
@BetaApi
@Generated("by gapic-generator")
public class BaseBigQueryReadClient implements BackgroundResource {
private final BaseBigQueryReadSettings settings;
private final BigQueryReadStub stub;
@@ -118,7 +106,7 @@ public static final BaseBigQueryReadClient create(BaseBigQueryReadSettings setti

/**
* Constructs an instance of BaseBigQueryReadClient, using the given stub for making calls. This
* is for advanced usage - prefer to use BaseBigQueryReadSettings}.
* is for advanced usage - prefer using create(BaseBigQueryReadSettings).
*/
@BetaApi("A restructuring of stub classes is planned, so this may break in the future")
public static final BaseBigQueryReadClient create(BigQueryReadStub stub) {
@@ -150,7 +138,7 @@ public BigQueryReadStub getStub() {
return stub;
}

// AUTO-GENERATED DOCUMENTATION AND METHOD
// AUTO-GENERATED DOCUMENTATION AND METHOD.
/**
* Creates a new read session. A read session divides the contents of a BigQuery table into one or
* more streams, which can then be used to read data from the table. The read session also
@@ -169,17 +157,6 @@ public BigQueryReadStub getStub() {
* <p>Read sessions automatically expire 24 hours after they are created and do not require manual
* clean-up by the caller.
*
* <p>Sample code:
*
* <pre><code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* ProjectName parent = ProjectName.of("[PROJECT]");
* ReadSession readSession = ReadSession.newBuilder().build();
* int maxStreamCount = 0;
* ReadSession response = baseBigQueryReadClient.createReadSession(parent, readSession, maxStreamCount);
* }
* </code></pre>
*
* @param parent Required. The request project that owns the session, in the form of
* `projects/{project_id}`.
* @param readSession Required. Session to be created.
@@ -202,7 +179,7 @@ public final ReadSession createReadSession(
return createReadSession(request);
}

// AUTO-GENERATED DOCUMENTATION AND METHOD
// AUTO-GENERATED DOCUMENTATION AND METHOD.
/**
* Creates a new read session. A read session divides the contents of a BigQuery table into one or
* more streams, which can then be used to read data from the table. The read session also
@@ -221,17 +198,6 @@ public final ReadSession createReadSession(
* <p>Read sessions automatically expire 24 hours after they are created and do not require manual
* clean-up by the caller.
*
* <p>Sample code:
*
* <pre><code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* ProjectName parent = ProjectName.of("[PROJECT]");
* ReadSession readSession = ReadSession.newBuilder().build();
* int maxStreamCount = 0;
* ReadSession response = baseBigQueryReadClient.createReadSession(parent.toString(), readSession, maxStreamCount);
* }
* </code></pre>
*
* @param parent Required. The request project that owns the session, in the form of
* `projects/{project_id}`.
* @param readSession Required. Session to be created.
@@ -254,7 +220,7 @@ public final ReadSession createReadSession(
return createReadSession(request);
}

// AUTO-GENERATED DOCUMENTATION AND METHOD
// AUTO-GENERATED DOCUMENTATION AND METHOD.
/**
* Creates a new read session. A read session divides the contents of a BigQuery table into one or
* more streams, which can then be used to read data from the table. The read session also
@@ -273,28 +239,14 @@ public final ReadSession createReadSession(
* <p>Read sessions automatically expire 24 hours after they are created and do not require manual
* clean-up by the caller.
*
* <p>Sample code:
*
* <pre><code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* ProjectName parent = ProjectName.of("[PROJECT]");
* ReadSession readSession = ReadSession.newBuilder().build();
* CreateReadSessionRequest request = CreateReadSessionRequest.newBuilder()
* .setParent(parent.toString())
* .setReadSession(readSession)
* .build();
* ReadSession response = baseBigQueryReadClient.createReadSession(request);
* }
* </code></pre>
*
* @param request The request object containing all of the parameters for the API call.
* @throws com.google.api.gax.rpc.ApiException if the remote call fails
*/
public final ReadSession createReadSession(CreateReadSessionRequest request) {
return createReadSessionCallable().call(request);
}

// AUTO-GENERATED DOCUMENTATION AND METHOD
// AUTO-GENERATED DOCUMENTATION AND METHOD.
/**
* Creates a new read session. A read session divides the contents of a BigQuery table into one or
* more streams, which can then be used to read data from the table. The read session also
@@ -314,26 +266,12 @@ public final ReadSession createReadSession(CreateReadSessionRequest request) {
* clean-up by the caller.
*
* <p>Sample code:
*
* <pre><code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* ProjectName parent = ProjectName.of("[PROJECT]");
* ReadSession readSession = ReadSession.newBuilder().build();
* CreateReadSessionRequest request = CreateReadSessionRequest.newBuilder()
* .setParent(parent.toString())
* .setReadSession(readSession)
* .build();
* ApiFuture&lt;ReadSession&gt; future = baseBigQueryReadClient.createReadSessionCallable().futureCall(request);
* // Do something
* ReadSession response = future.get();
* }
* </code></pre>
*/
public final UnaryCallable<CreateReadSessionRequest, ReadSession> createReadSessionCallable() {
return stub.createReadSessionCallable();
}

// AUTO-GENERATED DOCUMENTATION AND METHOD
// AUTO-GENERATED DOCUMENTATION AND METHOD.
/**
* Reads rows from the stream in the format prescribed by the ReadSession. Each response contains
* one or more table rows, up to a maximum of 100 MiB per response; read requests which attempt to
@@ -343,26 +281,12 @@ public final UnaryCallable<CreateReadSessionRequest, ReadSession> createReadSess
* stream.
*
* <p>Sample code:
*
* <pre><code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* ReadStreamName readStream = ReadStreamName.of("[PROJECT]", "[LOCATION]", "[SESSION]", "[STREAM]");
* ReadRowsRequest request = ReadRowsRequest.newBuilder()
* .setReadStream(readStream.toString())
* .build();
*
* ServerStream&lt;ReadRowsResponse&gt; stream = baseBigQueryReadClient.readRowsCallable().call(request);
* for (ReadRowsResponse response : stream) {
* // Do something when receive a response
* }
* }
* </code></pre>
*/
public final ServerStreamingCallable<ReadRowsRequest, ReadRowsResponse> readRowsCallable() {
return stub.readRowsCallable();
}

// AUTO-GENERATED DOCUMENTATION AND METHOD
// AUTO-GENERATED DOCUMENTATION AND METHOD.
/**
* Splits a given `ReadStream` into two `ReadStream` objects. These `ReadStream` objects are
* referred to as the primary and the residual streams of the split. The original `ReadStream` can
@@ -375,26 +299,14 @@ public final ServerStreamingCallable<ReadRowsRequest, ReadRowsResponse> readRows
* original[0-j] = primary[0-j] and original[j-n] = residual[0-m] once the streams have been read
* to completion.
*
* <p>Sample code:
*
* <pre><code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* ReadStreamName name = ReadStreamName.of("[PROJECT]", "[LOCATION]", "[SESSION]", "[STREAM]");
* SplitReadStreamRequest request = SplitReadStreamRequest.newBuilder()
* .setName(name.toString())
* .build();
* SplitReadStreamResponse response = baseBigQueryReadClient.splitReadStream(request);
* }
* </code></pre>
*
* @param request The request object containing all of the parameters for the API call.
* @throws com.google.api.gax.rpc.ApiException if the remote call fails
*/
public final SplitReadStreamResponse splitReadStream(SplitReadStreamRequest request) {
return splitReadStreamCallable().call(request);
}

// AUTO-GENERATED DOCUMENTATION AND METHOD
// AUTO-GENERATED DOCUMENTATION AND METHOD.
/**
* Splits a given `ReadStream` into two `ReadStream` objects. These `ReadStream` objects are
* referred to as the primary and the residual streams of the split. The original `ReadStream` can
@@ -408,18 +320,6 @@ public final SplitReadStreamResponse splitReadStream(SplitReadStreamRequest requ
* to completion.
*
* <p>Sample code:
*
* <pre><code>
* try (BaseBigQueryReadClient baseBigQueryReadClient = BaseBigQueryReadClient.create()) {
* ReadStreamName name = ReadStreamName.of("[PROJECT]", "[LOCATION]", "[SESSION]", "[STREAM]");
* SplitReadStreamRequest request = SplitReadStreamRequest.newBuilder()
* .setName(name.toString())
* .build();
* ApiFuture&lt;SplitReadStreamResponse&gt; future = baseBigQueryReadClient.splitReadStreamCallable().futureCall(request);
* // Do something
* SplitReadStreamResponse response = future.get();
* }
* </code></pre>
*/
public final UnaryCallable<SplitReadStreamRequest, SplitReadStreamResponse>
splitReadStreamCallable() {
