docs: Add sample common issues. (#88)
* update

* Update samples/README.md

Co-authored-by: Tianzi Cai <tianzi@google.com>
jiangmichaellll and anguillanneuf committed Feb 23, 2021
1 parent 103e1eb commit 5726724
Showing 2 changed files with 15 additions and 8 deletions.
11 changes: 4 additions & 7 deletions .readme-partials.yaml
@@ -19,13 +19,10 @@ custom_content: |
<!--- TODO(jiangmichael): Release on Maven Central and add Maven Central link -->
The connector will be available from the Maven Central repository. It can be included with the `--packages` option or the `spark.jars.packages` configuration property.
<!--
| Scala version | Connector Artifact |
| --- | --- |
| Scala 2.11 | `com.google.cloud.pubsublite.spark:pubsublite-spark-sql-streaming:0.1.0:with-dependencies` |
-->
<!--- TODO(jiangmichael): Add example code and brief description here -->
## Compatibility
| Connector version | Spark version |
| --- | --- |
| 0.1.0 | 2.4.X |
## Usage
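The `--packages` usage mentioned above can be sketched as follows. This is a hypothetical invocation: the artifact coordinate is taken from the commented-out table in this file, and per the TODO the connector is not yet published to Maven Central, so the coordinate may change before release (`your_pipeline.py` is a placeholder for your own application):

```shell
# Submit a Spark job, pulling the connector (and its dependencies)
# from Maven Central via --packages.
spark-submit \
  --packages com.google.cloud.pubsublite.spark:pubsublite-spark-sql-streaming:0.1.0:with-dependencies \
  your_pipeline.py
```

Equivalently, the coordinate can be set through the `spark.jars.packages` configuration property instead of the command-line flag.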
12 changes: 11 additions & 1 deletion samples/README.md
@@ -87,4 +87,14 @@ To run the word count sample in a Dataproc cluster, follow these steps:
3. Delete Dataproc cluster.
```sh
gcloud dataproc clusters delete $CLUSTER_NAME --region=$REGION
```

## Common issues
1. Permission not granted. <br>
This could happen when creating a topic and a subscription, or submitting a job to your Dataproc cluster.
Make sure your service account has at least `Editor` permissions for Pub/Sub Lite and Dataproc.
Your Dataproc cluster needs `scope=cloud-platform` to access other services and resources within the same project.
Your `gcloud` and `GOOGLE_APPLICATION_CREDENTIALS` should point to the same project. Check which project your `gcloud` and `gsutil` commands use with `gcloud config get-value project`.
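The checks above can be sketched with `gcloud` commands. This is a non-authoritative sketch: `PROJECT_ID`, `SA_EMAIL`, `CLUSTER_NAME`, and `REGION` are placeholders for your own values, and the roles shown are one way to satisfy the "at least `Editor`" requirement:

```shell
# Show which project the gcloud CLI targets; it should match the
# project of the key file in GOOGLE_APPLICATION_CREDENTIALS.
gcloud config get-value project

# Grant the service account editor roles for Pub/Sub Lite and Dataproc
# (PROJECT_ID and SA_EMAIL are placeholders).
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/pubsublite.editor"
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/dataproc.editor"

# Create the cluster with the cloud-platform scope so jobs can reach
# other services and resources in the same project.
gcloud dataproc clusters create CLUSTER_NAME \
  --region=REGION \
  --scopes=cloud-platform
```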

2. Your Dataproc job fails with `ClassNotFound` or similar exceptions. <br>
Make sure your Dataproc cluster uses images of [supported Spark versions](https://github.com/googleapis/java-pubsublite-spark#compatibility).
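One way to check the image version (and hence the Spark version) of an existing cluster is sketched below; `CLUSTER_NAME` and `REGION` are placeholders, and the exact image-to-Spark mapping should be confirmed against the Dataproc release notes:

```shell
# Print the Dataproc image version of the cluster. Dataproc 1.x images
# ship Spark 2.4, which matches the connector's compatibility table.
gcloud dataproc clusters describe CLUSTER_NAME \
  --region=REGION \
  --format="value(config.softwareConfig.imageVersion)"
```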
