
Custom environment variables provided in a Kubernetes Spark job are not getting picked up #2017

Open · focode opened this issue May 8, 2024 · 2 comments


@focode commented May 8, 2024

This is the YAML of my Spark job:
```yaml
kind: SparkApplication
metadata:
  name: operatordc1
  namespace: spark
spec:
  type: Java
  mode: cluster
  image: "xiotxpcdevcr.azurecr.io/spark-custom:release-8.0"
  imagePullPolicy: Always
  imagePullSecrets:
    - mdsp-secret-spark
  mainClass: "org.springframework.boot.loader.JarLauncher"
  mainApplicationFile: "local:///opt/spark/examples/jars/operatordc1.jar" # Ensure this is the correct path within your Docker image
  sparkVersion: "3.4.2"
  restartPolicy:
    type: Never
  driver:
    env:
      - name: spring.profiles.active
        value: azure,secured
    cores: 4
    coreLimit: "4000m"
    memory: "4096m"
    javaOptions: >-
      -Dlog4j.configuration=file:///log4j2.xml
      --add-opens=java.base/java.lang=ALL-UNNAMED
      --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
      --add-opens=java.base/java.nio=ALL-UNNAMED
      --add-opens=java.base/sun.nio.ch=ALL-UNNAMED
      --add-opens=java.base/java.util=ALL-UNNAMED
      --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
      --add-opens=java.base/jdk.internal.misc=ALL-UNNAMED
      -XX:+UseG1GC
      -XX:MaxGCPauseMillis=200
      -XX:G1HeapRegionSize=32M
      -XX:ReservedCodeCacheSize=100M
      -XX:MaxMetaspaceSize=256m
      -XX:CompressedClassSpaceSize=256m
      -Xms1024m
      -Dlog4j.debug
    labels:
      version: "3.4.2"
    serviceAccount: spark
  executor:
    cores: 4
    instances: 1
    memory: "4096m"
    javaOptions: >-
      -Dlog4j.configuration=file:///log4j2.xml
      -XX:ReservedCodeCacheSize=100M
      -XX:MaxMetaspaceSize=256m
      -XX:CompressedClassSpaceSize=256m
      --add-opens=java.base/sun.nio.ch=ALL-UNNAMED
      -Dlog4j.debug
    labels:
      version: "3.4.2"
    serviceAccount: spark
    env:
      - name: spring.profiles.active
        value: "azure,secured"
  sparkConf:
    "spark.driver.userClassPathFirst": "true"
    "spark.executor.userClassPathFirst": "true"
    "spark.driver.memory": "4096m"
    "spark.executor.memory": "4096m"
    "spark.dynamicAllocation.enabled": "true"
```

When I describe the driver pod, I see only the env values that the Spark operator itself provides:

```
Environment:
  SPARK_USER:                 root
  SPARK_APPLICATION_ID:       spark-699c7647354544e293cc2c12cda9e88e
  SPARK_DRIVER_BIND_ADDRESS:  (v1:status.podIP)
  SPARK_LOCAL_DIRS:           /var/data/spark-c6e072fb-2e09-4a07-8c58-0365eda4f362
  SPARK_CONF_DIR:             /opt/spark/conf
```

It is missing:

```yaml
- name: spring.profiles.active
  value: "azure,secured"
```
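
A quick way to confirm nothing injected the variable is to check inside the running driver container. A sketch; the pod name assumes the operator's usual `<app-name>-driver` naming and the `spark` namespace from the spec above:

```sh
# Check the environment of the live driver container directly.
# Pod name is an assumption: the operator normally names it <app-name>-driver.
kubectl -n spark exec operatordc1-driver -- env | grep -i spring
```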

@SamBird commented May 24, 2024

I take it you are using the webhook?

I've observed the same behaviour recently. I believe the mutating webhook is what injects these env vars into the driver and executor pods.

What's in your Operator logs?

Are you seeing TLS handshake errors to the K8s API server?
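
A few quick checks along those lines (a sketch; the `spark-operator` namespace, release name, and label selector are assumptions about a typical Helm install, so adjust them to your cluster):

```sh
# Is the operator's mutating webhook registered at all?
kubectl get mutatingwebhookconfigurations | grep -i spark

# Scan the operator logs for webhook or TLS handshake errors.
kubectl -n spark-operator logs -l app.kubernetes.io/name=spark-operator \
  --tail=200 | grep -iE 'webhook|tls'

# If the webhook was never enabled, upgrade the chart with it turned on.
# (webhook.enable is the values key used by the spark-operator Helm chart.)
helm upgrade --install spark-operator spark-operator/spark-operator \
  --namespace spark-operator --set webhook.enable=true
```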

@imtzer commented Jun 3, 2024

Same as the issue titled "Unable to assign environment variables"; check your webhook first, @focode.
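
If enabling the webhook isn't an option, a common workaround is to set the variables through Spark's own config keys, which `spark-submit` applies without any webhook involvement. A sketch; the two `spark.*` key prefixes are standard Spark-on-Kubernetes config, and the values mirror the spec above:

```yaml
spec:
  sparkConf:
    # Everything after the prefix becomes the variable name, dots included.
    "spark.kubernetes.driverEnv.spring.profiles.active": "azure,secured"
    "spark.executorEnv.spring.profiles.active": "azure,secured"
```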
