
DQ data storage on S3 only? #10

Answered by PMRocha
jaina15 asked this question in Q&A
Jan 9, 2024 · 1 comment · 5 replies

Hello @jaina15,
Was the table created before running this process?
You can enable automatic schema updates by setting spark.databricks.delta.schema.autoMerge.enabled to true in your environment. To do this, add an exec_env section at the end of your acon:

    "exec_env": {
        "spark.databricks.delta.schema.autoMerge.enabled": True,
    },
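For context, a minimal sketch of where exec_env sits in a complete acon. The spec ids, paths, and formats below are hypothetical placeholders, and the surrounding keys assume the usual lakehouse-engine load_data acon layout:

    from lakehouse_engine.engine import load_data

    acon = {
        "input_specs": [
            {
                "spec_id": "sales_source",  # hypothetical spec id
                "read_type": "batch",
                "data_format": "csv",
                "location": "s3://my-bucket/landing/sales/",  # hypothetical path
            }
        ],
        "output_specs": [
            {
                "spec_id": "sales_bronze",  # hypothetical spec id
                "input_id": "sales_source",
                "write_type": "append",
                "data_format": "delta",
                "location": "s3://my-bucket/bronze/sales/",  # hypothetical path
            }
        ],
        # exec_env goes at the end of the acon, as suggested above
        "exec_env": {
            "spark.databricks.delta.schema.autoMerge.enabled": True,
        },
    }

    load_data(acon=acon)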

Regarding the DQ errors, it really depends on your use case: you can filter out the rows with no name, or fail the process if that field is critical.
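As a sketch of the fail-fast option, a dq_specs entry checking the name column for nulls could be added to the acon. This assumes lakehouse-engine's Great Expectations-based validator DQ type; the spec ids and bucket are hypothetical:

    "dq_specs": [
        {
            "spec_id": "dq_sales",  # hypothetical spec id
            "input_id": "sales_source",
            "dq_type": "validator",
            "bucket": "my-dq-bucket",  # hypothetical bucket for DQ results
            "fail_on_error": True,  # flip to False to only log the failures
            "dq_functions": [
                # fail (or log) when the name column has null values
                {
                    "function": "expect_column_values_to_not_be_null",
                    "args": {"column": "name"},
                },
            ],
        }
    ],

The filtering alternative would instead drop those rows in a transform step before the output spec is written.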

Answer selected by jaina15