
Commit

update readme
anguillanneuf committed Jan 27, 2021
1 parent a071ddd commit 6ecde06
Showing 3 changed files with 28 additions and 44 deletions.
29 changes: 10 additions & 19 deletions .readme-partials.yaml
@@ -98,22 +98,13 @@ custom_content: |
spark.readStream.format("pubsublite").option("gcp.credentials.key", "<SERVICE_ACCOUNT_JSON_IN_BASE64>")
```
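For context, the `gcp.credentials.key` value shown above is a base64-encoded service account JSON key. A minimal Scala sketch of producing and passing one (the `pubsublite.subscription` option name and all paths are illustrative assumptions, not taken from this repository):

```Scala
import java.nio.file.{Files, Paths}
import java.util.Base64

import org.apache.spark.sql.SparkSession

object CredentialsKeyExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("pubsublite-credentials").getOrCreate()

    // Base64-encode a service account JSON key file so it can be passed
    // as a single option value (placeholder path).
    val keyJson = Files.readAllBytes(Paths.get("/path/to/service-account.json"))
    val keyBase64 = Base64.getEncoder.encodeToString(keyJson)

    val df = spark.readStream
      .format("pubsublite")
      .option("gcp.credentials.key", keyBase64)
      // Assumed option name and placeholder subscription path.
      .option("pubsublite.subscription",
        "projects/my-project/locations/us-central1-a/subscriptions/my-subscription")
      .load()

    df.printSchema()
  }
}
```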
about: |
[Google Cloud Pub/Sub Lite][product-docs] is designed to provide reliable,
many-to-many, asynchronous messaging between applications. Publisher
applications can send messages to a topic and other applications can
subscribe to that topic to receive the messages. By decoupling senders and
receivers, Google Cloud Pub/Sub allows developers to communicate between
independently written applications.
Compared to Google Pub/Sub, Pub/Sub Lite provides partitioned zonal data
storage with predefined capacity. Both products present a similar API, but
Pub/Sub Lite has more usage caveats.
See the [Pub/Sub Lite docs](https://cloud.google.com/pubsub/quickstart-console#before-you-begin) for more details on how to activate
Pub/Sub Lite for your project, as well as guidance on how to choose between
Cloud Pub/Sub and Pub/Sub Lite.
See the [Pub/Sub Lite client library docs][javadocs] to learn how to
use this Pub/Sub Lite Client Library.
The Pub/Sub Lite Spark connector ...
[Google Cloud Pub/Sub Lite][product-docs] is a zonal, real-time messaging
service that lets you send and receive messages between independent
applications. You can manually configure the throughput and storage capacity
for Pub/Sub Lite systems.
The Pub/Sub Lite Spark connector supports Pub/Sub Lite as an input source to
Apache Spark Structured Streaming in both the default micro-batch processing
mode and the _experimental_ continuous processing mode. The connector works in
all Apache Spark distributions, including [Google Cloud Dataproc](https://cloud.google.com/dataproc/docs/), Databricks,
and manual Spark installations.
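Purely as a sketch of what the default micro-batch mode described above might look like in practice (the `pubsublite.subscription` option name, the subscription path, and the console sink are illustrative assumptions):

```Scala
import org.apache.spark.sql.SparkSession

object MicroBatchReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("pubsublite-microbatch").getOrCreate()

    // Read Pub/Sub Lite messages as an unbounded streaming DataFrame;
    // micro-batch processing is Structured Streaming's default mode.
    val messages = spark.readStream
      .format("pubsublite")
      // Assumed option name and placeholder subscription path.
      .option("pubsublite.subscription",
        "projects/my-project/locations/us-central1-a/subscriptions/my-subscription")
      .load()

    // Print each micro-batch to the console for inspection.
    val query = messages.writeStream
      .format("console")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}
```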
4 changes: 2 additions & 2 deletions .repo-metadata.json
@@ -2,7 +2,7 @@
"name": "pubsublite-spark",
"name_pretty": "Pub/Sub Lite Spark Connector",
"product_documentation": "https://cloud.google.com/pubsub/lite/docs",
"api_description": "is designed to provide reliable,\nmany-to-many, asynchronous messaging between applications. Publisher\napplications can send messages to a topic and other applications can\nsubscribe to that topic to receive the messages. By decoupling senders and\nreceivers, Google Cloud Pub/Sub allows developers to communicate between\nindependently written applications.\n\nCompared to Google Pub/Sub, Pub/Sub Lite provides partitioned zonal data\nstorage with predefined capacity. Both products present a similar API, but\nPub/Sub Lite has more usage caveats.\n\nSee the [Google Pub/Sub Lite docs](https://cloud.google.com/pubsub/quickstart-console#before-you-begin) for more details on how to activate\nPub/Sub Lite for your project, as well as guidance on how to choose between\nCloud Pub/Sub and Pub/Sub Lite.",
"api_description": "Pub/Sub Lite is a zonal, real-time messaging service that lets you send and receive messages between independent applications. You can manually configure the throughput and storage capacity for Pub/Sub Lite systems.",
"client_documentation": "https://googleapis.dev/java/google-cloud-pubsublite/latest/index.html",
"release_level": "alpha",
"transport": "grpc",
@@ -11,7 +11,7 @@
"min_java_version": 8,
"repo": "googleapis/java-pubsublite-spark",
"repo_short": "java-pubsublite-spark",
"distribution_name": "com.google.cloud:pubsublite-spark",
"distribution_name": "com.google.cloud:pubsublite-spark-sql-streaming",
"codeowner_team": "@googleapis/api-pubsub",
"api_id": "pubsublite.googleapis.com"
}
39 changes: 16 additions & 23 deletions README.md
@@ -19,19 +19,19 @@ If you are using Maven, add this to your pom.xml file:
```xml
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>pubsublite-spark</artifactId>
<artifactId>pubsublite-spark-sql-streaming</artifactId>
<version>0.0.0</version>
</dependency>
```

If you are using Gradle without BOM, add this to your dependencies
```Groovy
compile 'com.google.cloud:pubsublite-spark:0.0.0'
compile 'com.google.cloud:pubsublite-spark-sql-streaming:0.0.0'
```

If you are using SBT, add this to your dependencies
```Scala
libraryDependencies += "com.google.cloud" % "pubsublite-spark" % "0.0.0"
libraryDependencies += "com.google.cloud" % "pubsublite-spark-sql-streaming" % "0.0.0"
```

## Authentication
@@ -50,29 +50,22 @@ You will need to [enable billing][enable-billing] to use Google Pub/Sub Lite Spa

### Installation and setup

You'll need to obtain the `pubsublite-spark` library. See the [Quickstart](#quickstart) section
to add `pubsublite-spark` as a dependency in your code.
You'll need to obtain the `pubsublite-spark-sql-streaming` library. See the [Quickstart](#quickstart) section
to add `pubsublite-spark-sql-streaming` as a dependency in your code.

## About Pub/Sub Lite Spark Connector

[Google Cloud Pub/Sub Lite][product-docs] is a zonal, real-time messaging
service that lets you send and receive messages between independent
applications. You can manually configure the throughput and storage capacity
for Pub/Sub Lite systems.

[Pub/Sub Lite Spark Connector][product-docs] is designed to provide reliable,
many-to-many, asynchronous messaging between applications. Publisher
applications can send messages to a topic and other applications can
subscribe to that topic to receive the messages. By decoupling senders and
receivers, Google Cloud Pub/Sub allows developers to communicate between
independently written applications.
The Pub/Sub Lite Spark connector supports Pub/Sub Lite as an input source to
Apache Spark Structured Streaming in both the default micro-batch processing
mode and the _experimental_ continuous processing mode. The connector works in
all Apache Spark distributions, including [Google Cloud Dataproc](https://cloud.google.com/dataproc/docs/), Databricks,
and manual Spark installations.

Compared to Google Pub/Sub, Pub/Sub Lite provides partitioned zonal data
storage with predefined capacity. Both products present a similar API, but
Pub/Sub Lite has more usage caveats.

See the [Google Pub/Sub Lite docs](https://cloud.google.com/pubsub/quickstart-console#before-you-begin) for more details on how to activate
Pub/Sub Lite for your project, as well as guidance on how to choose between
Cloud Pub/Sub and Pub/Sub Lite.

See the [Pub/Sub Lite Spark Connector client library docs][javadocs] to learn how to
use this Pub/Sub Lite Spark Connector Client Library.
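
As an illustration of the _experimental_ continuous processing mode mentioned above, the main change from a micro-batch job is the trigger; this Scala sketch again treats the `pubsublite.subscription` option name and the subscription path as assumptions:

```Scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

object ContinuousModeSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("pubsublite-continuous").getOrCreate()

    val messages = spark.readStream
      .format("pubsublite")
      // Assumed option name and placeholder subscription path.
      .option("pubsublite.subscription",
        "projects/my-project/locations/us-central1-a/subscriptions/my-subscription")
      .load()

    // Trigger.Continuous switches from the default micro-batch mode to
    // Spark's experimental continuous processing, checkpointing every second.
    val query = messages.writeStream
      .format("console")
      .trigger(Trigger.Continuous("1 second"))
      .start()

    query.awaitTermination()
  }
}
```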


## Requirements
@@ -238,8 +231,8 @@ Java is a registered trademark of Oracle and/or its affiliates.
[kokoro-badge-image-5]: http://storage.googleapis.com/cloud-devrel-public/java/badges/java-pubsublite-spark/java11.svg
[kokoro-badge-link-5]: http://storage.googleapis.com/cloud-devrel-public/java/badges/java-pubsublite-spark/java11.html
[stability-image]: https://img.shields.io/badge/stability-alpha-orange
[maven-version-image]: https://img.shields.io/maven-central/v/com.google.cloud/pubsublite-spark.svg
[maven-version-link]: https://search.maven.org/search?q=g:com.google.cloud%20AND%20a:pubsublite-spark&core=gav
[maven-version-image]: https://img.shields.io/maven-central/v/com.google.cloud/pubsublite-spark-sql-streaming.svg
[maven-version-link]: https://search.maven.org/search?q=g:com.google.cloud%20AND%20a:pubsublite-spark-sql-streaming&core=gav
[authentication]: https://github.com/googleapis/google-cloud-java#authentication
[developer-console]: https://console.developers.google.com/
[create-project]: https://cloud.google.com/resource-manager/docs/creating-managing-projects
