diff --git a/.readme-partials.yaml b/.readme-partials.yaml
index 2809bbcc..b627f59c 100644
--- a/.readme-partials.yaml
+++ b/.readme-partials.yaml
@@ -19,13 +19,10 @@ custom_content: |
The connector will be available from the Maven Central repository. It can be used with the `--packages` option or the `spark.jars.packages` configuration property.
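+ For example, assuming the connector is published under the Maven coordinate `com.google.cloud:pubsublite-spark-sql-streaming` (an assumption; check Maven Central for the exact artifact name, and `your_app.py` is a placeholder for your application):
+
+ ```sh
+ spark-submit \
+   --packages com.google.cloud:pubsublite-spark-sql-streaming:0.1.0 \
+   your_app.py
+ ```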
-
-
-
+ ## Compatibility
+ | Connector version | Spark version |
+ | --- | --- |
+ | 0.1.0 | 2.4.X |
## Usage
diff --git a/samples/README.md b/samples/README.md
index cdd24b3b..a8747527 100644
--- a/samples/README.md
+++ b/samples/README.md
@@ -87,4 +87,14 @@ To run the word count sample in Dataproc cluster, follow the steps:
3. Delete Dataproc cluster.
```sh
gcloud dataproc clusters delete $CLUSTER_NAME --region=$REGION
- ```
\ No newline at end of file
+ ```
+
+## Common issues
+1. Permission not granted.
+   This can happen when creating a topic and a subscription, or when submitting a job to your Dataproc cluster.
+   Make sure your service account has at least `Editor` permissions for Pub/Sub Lite and Dataproc.
+   Your Dataproc cluster needs the `cloud-platform` scope to access other services and resources within the same project.
+   Your `gcloud` and `GOOGLE_APPLICATION_CREDENTIALS` should point to the same project. Check which project your `gcloud` and `gsutil` commands use with `gcloud config get-value project`.
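+
+   For example, a cluster can be created with the `cloud-platform` scope like this (`$CLUSTER_NAME` and `$REGION` are placeholders, as in the steps above):
+   ```sh
+   gcloud dataproc clusters create $CLUSTER_NAME \
+     --region=$REGION \
+     --scopes=cloud-platform
+   ```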
+
+2. Your Dataproc job fails with `ClassNotFoundException` or similar exceptions.
+   Make sure your Dataproc cluster uses an image with a [supported Spark version](https://github.com/googleapis/java-pubsublite-spark#compatibility).
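+
+   You can check which image version an existing cluster uses, for example:
+   ```sh
+   gcloud dataproc clusters describe $CLUSTER_NAME --region=$REGION \
+     --format="value(config.softwareConfig.imageVersion)"
+   ```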