sample: Update sample and readme. #77

Merged
merged 1 commit on Feb 12, 2021
8 changes: 4 additions & 4 deletions samples/README.md
@@ -27,7 +27,7 @@ PUBSUBLITE_SPARK_SQL_STREAMING_JAR_LOCATION= # downloaded pubsublite-spark-sql-s

To run the word count sample in Dataproc cluster, follow the steps:

-1. `cd samples/`
+1. `cd samples/snippets`
2. Set the current sample version.
```sh
SAMPLE_VERSION=$(mvn -q \
@@ -59,7 +59,7 @@ To run the word count sample in Dataproc cluster, follow the steps:
7. Create GCS bucket and upload both `pubsublite-spark-sql-streaming-$CONNECTOR_VERSION-with-dependencies.jar` and the sample jar onto GCS
```sh
gsutil mb $BUCKET
-gsutil cp snapshot/target/pubsublite-spark-snapshot-$SAMPLE_VERSION.jar $BUCKET
+gsutil cp target/pubsublite-spark-snippets-$SAMPLE_VERSION.jar $BUCKET
gsutil cp $PUBSUBLITE_SPARK_SQL_STREAMING_JAR_LOCATION $BUCKET
```
8. Set Dataproc region
@@ -70,14 +70,14 @@ To run the word count sample in Dataproc cluster, follow the steps:
9. Run the sample in Dataproc. You would see the word count result show up in the console output.
```sh
gcloud dataproc jobs submit spark --cluster=$CLUSTER_NAME \
-  --jars=$BUCKET/pubsublite-spark-snapshot-$SAMPLE_VERSION.jar,$BUCKET/pubsublite-spark-sql-streaming-$CONNECTOR_VERSION-with-dependencies.jar \
+  --jars=$BUCKET/pubsublite-spark-snippets-$SAMPLE_VERSION.jar,$BUCKET/pubsublite-spark-sql-streaming-$CONNECTOR_VERSION-with-dependencies.jar \
--class=pubsublite.spark.WordCount -- $SUBSCRIPTION_PATH
```

## Cleaning up
1. Delete Pub/Sub Lite topic and subscription.
```sh
-gcloud pubsub lite-subscriptions delete $SUBSCRIPTION_ID --zone=$REGION-$ZONE_ID=
+gcloud pubsub lite-subscriptions delete $SUBSCRIPTION_ID --zone=$REGION-$ZONE_ID
gcloud pubsub lite-topics delete $TOPIC_ID --zone=$REGION-$ZONE_ID
```
2. Delete GCS bucket.
16 changes: 12 additions & 4 deletions samples/snippets/src/main/java/pubsublite/spark/AdminUtils.java
@@ -85,8 +85,12 @@ public static void createTopicExample(
try (AdminClient adminClient = AdminClient.create(adminClientSettings)) {
Topic response = adminClient.createTopic(topic).get();
      System.out.println(response.getAllFields() + " created successfully.");
-    } catch (AlreadyExistsException e) {
-      System.out.println(topicPath + " already exists");
+    } catch (ExecutionException e) {
+      if (e.getCause() instanceof AlreadyExistsException) {
+        System.out.println(topicPath + " already exists");
+      } else {
+        throw e;
+      }
}
}
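The change above works because `AdminClient.createTopic(topic).get()` goes through a future: any failure is wrapped in `ExecutionException` by `Future.get()`, so catching `AlreadyExistsException` directly never matches, and the cause must be inspected instead. A minimal standalone sketch of the pattern, where the `AlreadyExistsException` class is a stand-in for the client library's and `createIdempotently` is a hypothetical helper:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class UnwrapExample {

  // Stand-in for the client library's AlreadyExistsException.
  static class AlreadyExistsException extends RuntimeException {
    AlreadyExistsException(String msg) {
      super(msg);
    }
  }

  // Hypothetical helper: creates a resource, treating "already exists" as success.
  static String createIdempotently(String resource, boolean exists)
      throws ExecutionException, InterruptedException {
    CompletableFuture<String> future =
        exists
            ? CompletableFuture.failedFuture(new AlreadyExistsException(resource))
            : CompletableFuture.completedFuture(resource + " created");
    try {
      return future.get();
    } catch (ExecutionException e) {
      // get() wraps the original exception, so an instanceof check on e itself
      // would never match; inspect the cause instead.
      if (e.getCause() instanceof AlreadyExistsException) {
        return resource + " already exists";
      }
      throw e;
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println(createIdempotently("topic-a", false)); // topic-a created
    System.out.println(createIdempotently("topic-a", true)); // topic-a already exists
  }
}
```

The same unwrapping is needed anywhere a gRPC-backed call is consumed via a blocking `get()`.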

@@ -130,8 +134,12 @@ public static void createSubscriptionExample(
try (AdminClient adminClient = AdminClient.create(adminClientSettings)) {
Subscription response = adminClient.createSubscription(subscription).get();
      System.out.println(response.getAllFields() + " created successfully.");
-    } catch (AlreadyExistsException e) {
-      System.out.println(subscriptionPath + " already exists");
+    } catch (ExecutionException e) {
+      if (e.getCause() instanceof AlreadyExistsException) {
+        System.out.println(subscriptionPath + " already exists");
+      } else {
+        throw e;
+      }
}
}

@@ -70,5 +70,7 @@ public static void main(String[] args) throws Exception {
createSubscriptionExample(cloudRegion, zoneId, projectNumber, topicId, subscriptionId);

     publisherExample(cloudRegion, zoneId, projectNumber, topicId, words);
+
+    System.exit(0);
}
}
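The added `System.exit(0)` is likely needed because client libraries commonly leave non-daemon background threads running, and the JVM waits for all non-daemon threads before it can terminate after `main` returns. A minimal sketch of that behavior, where the worker thread and the `keepsJvmAlive` helper are illustrative rather than taken from the sample:

```java
public class ExitExample {

  // A thread keeps the JVM alive after main returns iff it is non-daemon.
  static boolean keepsJvmAlive(Thread t) {
    return !t.isDaemon();
  }

  public static void main(String[] args) {
    Thread worker =
        new Thread(
            () -> {
              try {
                Thread.sleep(60_000); // simulates a client library's background thread
              } catch (InterruptedException ignored) {
              }
            });
    // Threads inherit daemon status from their creator; main is non-daemon,
    // so this worker would block JVM shutdown for up to 60 seconds.
    System.out.println("worker keeps JVM alive: " + keepsJvmAlive(worker));
    worker.start();
    // System.exit terminates all threads immediately, which is why the sample
    // calls it once publishing has completed.
    System.exit(0);
  }
}
```

The alternative to a hard exit is shutting the client down cleanly (e.g. closing it and awaiting termination), which releases its threads without cutting the whole process short.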